Computer Vision meets Fish Tank

One day I got curious… what if I programmed my computer to track the fish swimming in my fish tank? That led me to tinkering with an open source software library called OpenCV. I fiddled around with the settings, tried a few things, and saved the output as a video, seen below. There’s a lot of research in computer science around object recognition and identification … this mini-project was just an attempt to have some fun poking around with some “older” computer vision technologies. Let me know what you think!
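For anyone curious what that kind of experiment looks like in code, here is a minimal sketch of the general approach rather than my actual script or settings (the file names are placeholders): background subtraction with OpenCV, contour detection on the foreground mask, and bounding boxes drawn around anything large enough to plausibly be a fish, written back out as a video.

import cv2

cap = cv2.VideoCapture("fish_tank.mp4")           # hypothetical input clip
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                # foreground (moving) pixels
    mask = cv2.medianBlur(mask, 5)                # knock out speckle noise
    # [-2] picks the contour list in both OpenCV 3.x and 4.x return formats
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    for c in contours:
        if cv2.contourArea(c) > 200:              # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("tracked_fish.mp4", fourcc, 30.0, (w, h))
    writer.write(frame)

cap.release()
if writer is not None:
    writer.release()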

 


Python API to AqAdvisor.com

Context

Approximately 10% of American households keep fish as pets, and it is estimated that 95% of fish deaths can be attributed to improper housing or nutrition. Fish are often sold or given away without any guidance for the new pet owner, such as goldfish giveaways at carnivals or birthday parties. Some species are also surrounded by myths, such as the betta (Siamese fighting fish), which supposedly can live in dirty water in small bowls.

AqAdvisor.com is a website that helps aquarists plan how to stock their fish tank. Users specify their tank size, their filtration, and the fish they intend to keep, and the site calculates the stocking level and filtration capacity. It is a useful tool for getting a rough estimate of a tank's stocking level, and it even tells you whether the fish are compatible with one another if you keep more than one species. AqAdvisor is sometimes criticized for "not being accurate," so its output should not be treated as gospel; nonetheless, it gives a reasonable starting point and is very useful for beginner fishkeepers.

Why I created this tool

I started using AqAdvisor and got annoyed at its archaic design. It is not a RESTful API; it is a clunky website that takes a while to load. I was doing a lot of research and found myself wanting a better user experience. I also had some free time on my hands over a long holiday weekend, so I gave myself a little programming exercise: create a Python API to the site.

How to use the tool

The easiest way to use the tool is to start from the IPython notebook. First create a stocking, then a tank, and then make a call to the AqAdvisor service. Because of the clunky web interface, multiple calls to AqAdvisor.com must be made if you want more than one fish species in a tank (as would be the case for a community tank). The auto-generated AqAdvisor URL is printed for each call out to the website; this is useful if you want to jump over to the web UI, since you can copy and paste the URL into your browser and continue from there.

Use the common (English) name for the fish you are looking for. PyAqAdvisor will do a "fuzzy match" against AqAdvisor's species list and pick the closest entry. This way you can specify your stocking list as "cardinal tetra" and not worry about the scientific name.
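If you are curious how a match like that can work, here is a rough sketch of the general idea using only the standard library. This is not PyAqAdvisor's actual implementation, and the species list below is made up for illustration:

import difflib

# Toy species list standing in for the real AqAdvisor catalog (illustrative only).
species = [
    "Cardinal Tetra (Paracheirodon axelrodi)",
    "Neon Tetra (Paracheirodon innesi)",
    "Panda Cory (Corydoras panda)",
    "Pearl Gourami (Trichopodus leerii)",
]

def fuzzy_match(common_name, candidates):
    """Return the catalog entry closest to the user-supplied common name."""
    lowered = [c.lower() for c in candidates]
    matches = difflib.get_close_matches(common_name.lower(), lowered,
                                        n=1, cutoff=0.3)
    if not matches:
        return None
    # Map the lower-cased winner back to the original catalog entry.
    return candidates[lowered.index(matches[0])]

print(fuzzy_match("cardinal tetra", species))
# -> Cardinal Tetra (Paracheirodon axelrodi)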

Please look at examples/example.py and examples/example.ipynb for more information.

Here’s an example of how easy it is to use the new API:

from pyaqadvisor import Tank, Stocking

if __name__ == '__main__':

  # Build up a stocking list using common (English) species names;
  # the second argument is how many of each fish you plan to keep.
  stocking = Stocking().add('cardinal tetra', 5)\
   .add('panda cory', 6)\
   .add('lemon_tetra', 12)\
   .add('pearl gourami', 4)

  print "My user-specified stocking is: ", stocking
  print "I translate this into: ", stocking.aqadvisor_stock_list

  # Create a 55-gallon tank, attach a filter, add the stocking,
  # and ask AqAdvisor.com for the resulting stocking level.
  t = Tank('55g').add_filter("AquaClear 30").add_stocking(stocking)
  print "Aqadvisor tells me: ",
  print t.get_stocking_level()

Github Repo: PyAqAdvisor

Note

  • PyAqAdvisor currently only works for freshwater fish species. If you are interested in saltwater fish, please contact me.

Generate heart rate charts from MapMyRide TCX files

So I had some free time over Columbus Day weekend and figured, why not spend it on a fun programming project? My politically-incorrectly-named GhettoTCX project emerged after some quick fussing around with TCX (XML) files.

Ghetto TCX

GhettoTCX will parse a TCX file from Garmin, MapMyRide, etc. and generate some basic plots. The most interesting plot type is the heart rate zone chart. It can also create a panel of plots by parsing all the files in a given directory.

It’s called GhettoTCX because it’s a no-frills, nothing-fancy tool; it is not even a true TCX file parser. It simply searches for some keywords and pulls out heart rate info and lat/long data, and not even at the same time: you need to read the file twice if you want to plot both.
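To give a sense of what "not a true parser" means in practice, here is a minimal sketch of the keyword-scanning idea. It is not GhettoTCX's actual code, just the general approach of pulling heart rate values out of a TCX file line by line (the file name is a placeholder):

import re

def heart_rates(tcx_path):
    """Scan a TCX file for <Value> entries inside HeartRateBpm blocks."""
    rates, in_hr = [], False
    with open(tcx_path) as f:
        for line in f:
            if "<HeartRateBpm" in line:
                in_hr = True
            if in_hr:
                m = re.search(r"<Value>(\d+)</Value>", line)
                if m:
                    rates.append(int(m.group(1)))
                    in_hr = False
    return rates

print(heart_rates("ride.tcx")[:10])   # first ten heart rate samples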

[Figure: heart rate zone plots generated by GhettoTCX]

The example code and python code repository can be found on the project’s github page.

There are “better” TCX/XML file parsers out there. This one was meant to do one thing (actually two things) quickly and easily: plot heart rate and heart rate zones. It can also plot lat/long data points onto a scatterplot, but that part is seriously no-frills, given that you can get nice Google Maps charts on MapMyRide and practically every other fitness app out there.
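As an illustration of the heart-rate-zone idea (again a sketch rather than GhettoTCX's actual plotting code; the zone boundaries, maximum heart rate, and sample data are made-up assumptions), binning samples into zones and plotting them takes only a few lines of matplotlib:

import matplotlib.pyplot as plt

# Hypothetical zone boundaries as fractions of an assumed 190 bpm maximum.
MAX_HR = 190
ZONES = [(0.0, 0.6), (0.6, 0.7), (0.7, 0.8), (0.8, 0.9), (0.9, 1.01)]
LABELS = ["Z1", "Z2", "Z3", "Z4", "Z5"]

def zone_counts(rates):
    """Count how many heart rate samples fall into each zone."""
    counts = [0] * len(ZONES)
    for bpm in rates:
        frac = bpm / float(MAX_HR)
        for i, (lo, hi) in enumerate(ZONES):
            if lo <= frac < hi:
                counts[i] += 1
                break
    return counts

# Made-up sample data; in practice these would come from a parsed TCX file.
rates = [95, 120, 138, 152, 164, 171, 158, 149, 132, 110]

plt.bar(range(len(ZONES)), zone_counts(rates), tick_label=LABELS)
plt.ylabel("samples")
plt.title("Time in heart rate zone")
plt.show()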

It started out (and ended) as a fun weekend programming project… if you are curious about your heart rate zone, and are too cheap cost-conscious to pay the monthly subscription fee to MapMyRide for the heart rate zone chart, you can use this free tool instead. Enjoy!

Map Reduce is dead, long live Spark!

Map Reduce is dead, long live Spark!

That’s the impression I, and I think most people attending the conference, walked away with after Strata NY 2014. Most of the interesting presentations were centered on Spark; only corporate IT presentations about “in-progress Hadoop implementations” were about Map Reduce.

So who’s working on Spark?  Cool startups and vendors (preparing for enterprise IT departments to move on to Spark in a year or two).

Who’s working on Map Reduce? Corporate IT departments migrating off legacy BI systems onto the promised land of Hadoop (a dream come true or a nightmare around the corner; I’m not sure which it will be for them).

It makes sense. Map Reduce has been tested and is now ‘safe’ for enterprise IT teams to start deploying into production systems. Spark is still very new and untested; it is too risky for a Fortune 500 to dive into replacing legacy systems with a still-in-diapers open source “solution.” Nonetheless, I am sure every technical worker will be drooling to “prototype” or create proofs of concept with Spark after this conference.

Reflections on Strata NYC 2014

I had a chance to attend Strata in New York back in October. I had been wanting to attend Strata for a few years, but had not had a chance until now. A few impressions, in the form of brief bullets:

  • It’s huge! (Over 3,000 attendees)
  • Very corporate! (A bit too corporate, too stuffy; it seemed like legal departments had censored some presentations)
  • All the cool kids are using/learning Spark (and Scala)
  • Map Reduce is old news.
  • Enterprises move slowly, like dinosaurs, and are just figuring out what Map Reduce is
  • Way too many vendors
  • Not enough interesting/inspiring presentations

Those were just my impressions, others may have other opinions.

Pentaho and Vertica as Business Intelligence / Data Warehousing solution

Introduction

I recently wrapped up a BI/Data Warehouse implementation project where I was responsible for helping a rapidly expanding international e-commerce company replace their aging BI reporting tool with a new, more flexible solution. The old BI reporting tool was based on an “in-memory” reporting engine, was more of a “departmental solution” than an enterprise-grade one, and was not optimally designed. For example, users found themselves downloading data from different canned reports into Excel, where they ran VLOOKUPs and pivot tables to compute simple metrics such as average order value and average unit retail. Needless to say, despite the best of intentions, there had been a communication gap between business users and IT developers on reporting requirements during the implementation of the original BI tool.

In designing and implementing the new solution, I set the following strategic tenets / guiding principles:

  • leverage commercial off-the-shelf (COTS) software; minimize customization and emphasize configuration instead (i.e., chose to buy instead of build, and made sure not to build too much after buying)
  • involve all stakeholders and business users throughout the process
  • enable business users to use self-service BI tools as much as possible
  • train as needed; up-skilling the user base on self-service tools is better than hiring an army of BI analysts
  • leverage data warehouse for both internal and external reporting
  • minimize amount of aggregation in Data Warehouse (we did almost no aggregation)
  • maximize the processing power of the ROLAP engine by pairing it with a high-performance analytical database (i.e., columnar MPP database)
  • stick to the Kimball data warehouse design approach as much as possible, but be pragmatic where needed; Star Schema, Star Schema, Star Schema! (no snowflakes here; see the small sketch after this list)
  • take an iterative approach where possible – need to “ship” on time – understand that 1st release will not be “perfect” but does need to meet business requirements
  • for external reporting, provide canned reports only initially; test user adoption and work with clients to understand and address reporting needs over time
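To make the star-schema point concrete, here is a toy sketch of a fact table surrounded by dimensions, with average order value and average unit retail computed straight off the unaggregated fact rows, the kind of metric users previously computed by hand in Excel. It uses SQLite purely for illustration; the table and column names are made up, and the real warehouse ran on Vertica.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Toy star schema: one fact table keyed to two dimensions (illustrative only).
cur.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, calendar_month TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_orders (order_id INTEGER, date_key INTEGER,
                          product_key INTEGER, units INTEGER, revenue REAL);

INSERT INTO dim_date    VALUES (20141001, '2014-10'), (20141102, '2014-11');
INSERT INTO dim_product VALUES (1, 'Apparel'), (2, 'Footwear');
INSERT INTO fact_orders VALUES
  (1, 20141001, 1, 2, 40.0),
  (2, 20141001, 2, 1, 80.0),
  (3, 20141102, 1, 3, 75.0);
""")

# Average order value and average unit retail by month, computed directly
# from the unaggregated fact table -- no pre-built aggregates needed.
for row in cur.execute("""
    SELECT d.calendar_month,
           SUM(f.revenue) / COUNT(DISTINCT f.order_id) AS avg_order_value,
           SUM(f.revenue) / SUM(f.units)               AS avg_unit_retail
    FROM fact_orders f
    JOIN dim_date d ON d.date_key = f.date_key
    GROUP BY d.calendar_month
"""):
    print(row)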

We looked at traditional players, open source, emerging technologies, and cloud BI SaaS providers. I made sure business and IT stakeholders were part of the vendor selection process, ensuring they attended demos and vendor presentations. In the end, Pentaho best matched our needs, providing us with both a solid ETL engine and a solid BI reporting engine. Since we were looking at providing both internal and external reporting with this solution, traditional BI vendors were prohibitively expensive, and “cloud offerings” were not compatible with our IT capabilities and architecture at the time (our data was not in the cloud).

Solution Description – Vertica + Pentaho BI/PDI

I proposed and received approval from our senior management and company board of directors to use Pentaho and Vertica as our Business Intelligence (BI) / Data Warehouse (DW) solution.

Vertica

HP Vertica is a columnar MPP database that HP markets as 20-100 times faster than traditional row-oriented databases such as Oracle. HP Vertica is available in a Community Edition, which allows organizations to use all the features of the database for free for up to 1 TB of data on three nodes. You can also install the database on a single node, though for a true proof of concept you should use at least three nodes. We started with Vertica 6.1 Community Edition for a proof of concept (POC) and later upgraded to an enterprise license when we went live in production.

Pentaho

Pentaho is an open source BI platform and ETL tool. I liked the fact that it is open source, allowing us to highly customize the BI implementation if we chose to, as well as develop our own ETL connectors and routines. Some of the client tools are a bit quirky, but I do not know of any BI/ETL software that isn’t, given the complexity of these tools. Overall the product is solid and delivers as expected. We got the Enterprise Edition for the additional features and product support from Pentaho. One thing that is annoying is that configuration files are spread all over the place; to be fair, this is probably more of a Java application configuration issue than a Pentaho issue.

When I tell people that I’m using Pentaho, they are usually surprised; then I find out they were using Pentaho 3.x, and I’m no longer surprised by their reaction. Pentaho 4.x is a big step up from previous major releases, and Pentaho 5.0 is looking really good (I like the UI redesign). I encourage anyone who only looked at an early version of Pentaho to take another look; the product has matured considerably.

When I was selecting a BI vendor, the thought “no one ever got fired for choosing IBM (Cognos)” crossed my mind. I could have gone the “safe” route and used one of those other tools. However, I believe the combination of Vertica + Pentaho has delivered more value to the organization, in a shorter amount of time, than we would have realized with those other vendors. For our organization, for our business needs, and for the realities of our IT capabilities at the time, Pentaho + Vertica was the way to go. We delivered the project on time and within budget (and without astronomical first-year costs). We have 100% user adoption internally and are getting very positive feedback from our merchant clients.

Results

  • Recognized by CEO for on-time, on-budget implementation; received “A” grade on end-of-year Enterprise-wide Strategic Initiatives Scorecard
  • Excellent user adoption
  • Positive feedback from external clients
  • Reduced manual reporting tasks over 50% (and over 80% in certain departments)