I was concerned about what blind spots I might have in the way I run my business. For example, I am Australian, and Australians are usually very informal, even in a professional setting - was my communication with international clients too informal?
To address these concerns I developed a feedback survey with Google Docs, which I have been (politely) asking my clients to complete at the end of a job. The results have been helpful, and it also seems to have impressed clients that I wanted their feedback. Wish I had thought of this earlier!
Tuesday, September 7, 2010
Saturday, August 28, 2010
Why reinvent the wheel?
I have been asked a few times why I chose to reinvent the wheel when libraries such as Scrapy and lxml already exist.
I am aware of these libraries and have used them in the past with good results. However my current work involves building relatively simple web scraping scripts that I want to run without hassle on the client's machine. This rules out installing full frameworks such as Scrapy or compiling C-based libraries such as lxml - I need a pure Python solution. This also gives me the flexibility to run the script on Google App Engine.
To scrape webpages there are generally two stages: parse the HTML and then select the relevant nodes.
The best known Python HTML parser seems to be BeautifulSoup, but I find it slow, awkward to use (compared to XPath), prone to parsing some HTML inaccurately, and - significantly - the author has lost interest in developing it further. So I would not recommend using it; go with html5lib instead.
To select HTML content I use XPath. Is there a decent pure Python XPath solution? I didn't find one six months ago when I needed it, so I developed this simple version that covers my typical use cases. I would deprecate it in future if a decent solution comes along, but for now I am happy with my pure Python infrastructure.
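To illustrate the two stages, here is a rough sketch that parses a page with html5lib and then selects nodes through the standard library's minidom interface. The URL is just an example, and in my own scripts the selection step uses my XPath module rather than getElementsByTagName:

>>> import urllib2
>>> import html5lib
>>> html = urllib2.urlopen('http://example.com').read()
>>> # stage 1: parse the (possibly malformed) HTML into a DOM tree
>>> dom = html5lib.parse(html, treebuilder='dom')
>>> # stage 2: select the relevant nodes - here just the link URLs
>>> for link in dom.getElementsByTagName('a'):
...     print link.getAttribute('href')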
Friday, August 20, 2010
Best website for freelancers
When I started freelancing I tried competing for as much work as possible by creating accounts on every freelance site I could find: oDesk, guru, scriptlance, and many others. However to my surprise I got almost all my work from just one source: Elance. How is Elance different?
With most freelancing sites you create an account and can then start bidding for jobs straight away. There is generally no cost to bidding, so freelancers tend to bid on projects even if they don't have the skills or time to complete them. This is obviously frustrating for clients, who end up wasting a lot of time sifting through bids.
However with Elance there is a high barrier to entry: you have to pass a test, receive a phone call to confirm your identity, and pay money for each job you bid on. I often see jobs on Elance with no bids because they require obscure experience - people aren't willing to waste their money bidding on a job they can't do. This barrier weeds out some of the less serious workers, so the average bid is of higher quality.
In my experience the clients are different on Elance too. On most freelancing sites the client is trying to get the job done for the smallest amount of money possible, and so is often willing to spend time sifting through dozens of proposals, hoping to get lucky. Elance seems to attract clients who consider their time valuable and are willing to pay a premium for good service.
Clients often contact me directly through Elance because I am a native English speaker and they want to avoid potential communication problems. One client even asked me to double my bid because "we are not cheap!"
After a year of freelancing I now get the majority of work directly through my website, but still get a decent percentage of clients through Elance.
My advice for new freelancers - focus on building your Elance profile and don't waste your time with the others. (Though do let me know if you have had good experience elsewhere.)
Sunday, July 25, 2010
All your data are belong to us?
Regarding the title of this blog, "All your data are belong to us" - I realized not everyone gets the reference. See this Wikipedia article for an explanation.
Saturday, July 10, 2010
Caching crawled webpages
When crawling a website I store the HTML in a local cache, so if I need to rescrape the website later I can load the webpages quickly from the cache and avoid putting extra load on their server. This is often necessary when a client realizes they need additional data scraped.
I built the pdict library to manage my cache. Pdict provides a dictionary-like interface but stores the data in a sqlite database on disk rather than in memory. All data is automatically compressed (using zlib) before writing and decompressed after reading. Both zlib and sqlite3 come built in with Python (2.5+), so there are no external dependencies.
Here is some example usage of pdict:
>>> from webscraping.pdict import PersistentDict
>>> cache = PersistentDict(CACHE_FILE)
>>> cache[url1] = html1
>>> cache[url2] = html2
>>> url1 in cache
True
>>> cache[url1]
html1
>>> cache.keys()
[url1, url2]
>>> del cache[url1]
>>> url1 in cache
False
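For anyone curious how this works under the hood, here is a stripped-down sketch of the idea (not the actual pdict source - the class name and details are simplified): a dictionary-like wrapper around sqlite3 that zlib-compresses values on write and decompresses them on read.

import sqlite3
import zlib

class SimpleCache:
    """Dictionary-like cache: keys and zlib-compressed values in a sqlite table on disk."""

    def __init__(self, filename):
        self.conn = sqlite3.connect(filename)
        self.conn.execute('CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, value BLOB)')

    def __setitem__(self, key, value):
        # compress the value before writing it to disk
        data = sqlite3.Binary(zlib.compress(value))
        self.conn.execute('REPLACE INTO cache (key, value) VALUES (?, ?)', (key, data))
        self.conn.commit()

    def __getitem__(self, key):
        row = self.conn.execute('SELECT value FROM cache WHERE key=?', (key,)).fetchone()
        if row is None:
            raise KeyError(key)
        # decompress the stored blob after reading it back
        return zlib.decompress(str(row[0]))

    def __contains__(self, key):
        return self.conn.execute('SELECT 1 FROM cache WHERE key=?', (key,)).fetchone() is not None

    def __delitem__(self, key):
        self.conn.execute('DELETE FROM cache WHERE key=?', (key,))
        self.conn.commit()

    def keys(self):
        return [row[0] for row in self.conn.execute('SELECT key FROM cache')]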
Thursday, July 1, 2010
Fixed fee or hourly?
I prefer to quote per project rather than per hour for my web scraping work because it:
- gives me an incentive to increase my efficiency (by improving my infrastructure)
- gives the client certainty about the total cost
- avoids distrust about the number of hours actually worked
- makes me look more competitive compared to the hourly rates available in Asia and Eastern Europe
- avoids the difficulty of tracking time fairly when working on two or more projects simultaneously
- is straightforward to estimate from past experience, at least compared to building websites
- involves less administration
Saturday, June 12, 2010
Open sourced web scraping code
For most scraping jobs I use the same general approach of crawling, selecting the appropriate nodes, and then saving the results. Consequently I reuse a lot of code across projects, which I have now combined into a library. Most of this infrastructure is now available as open source on Google Code.
The code in that repository is licensed under the LGPL, which means you are free to use it in your own applications (including commercial ones) but are obliged to release any changes you make to the library itself. This is different from the more popular GPL license, which would make the library unusable in most commercial projects. It is also different from BSD- and WTFPL-style licenses, which let people do whatever they want with the library, including making changes and not releasing them.
I think the LGPL is a good balance for libraries because it lets anyone use the code while everyone can benefit from improvements made by individual users.