Tuesday, August 6, 2013

7 Reasons You Should Use MongoDB over DynamoDB

Even though I recently migrated from MongoDB to DynamoDB and shared 3 reasons to use DynamoDB, I still love MongoDB; it is a really good NoSQL solution. Here are some points to help you decide on using MongoDB over DynamoDB.

Reason 1: Use MongoDB if your indexing fields might be altered later.
With DynamoDB, it's NOT possible to alter indexing after the table is created. I have to admit there are workarounds; for example, you can create a new table and import the data from the old one. But none of them is straightforward, and each involves some trade-off. Back to indexing: DynamoDB lets you define a hash key to keep the data well distributed, and then add a range key and secondary indexes. When you query a table, the hash key must be used, together with either the range key or one of the secondary indexes. No complex queries are supported. The hash key, range key, and secondary index definitions can NOT be changed in the future, so your database structure must be well designed before going to production. By the way, a secondary index occupies additional storage: if you have 1 GB of data and you create an index and "project" all attributes into it, your actual storage cost will be for 2 GB of data. If you project only the hash and range key values into the index, you need to query twice to get the whole record (the API lets you invoke the query only once, but the "read" capacity cost is doubled). You can still "scan" the data and filter by conditions on un-indexed keys, but as the data in my previous post shows, a scan can be 100 times (or more) slower than a query.
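To make the constraint concrete, here is a minimal sketch with the AWS SDK for node.js; the table name, key names, and region are illustrative choices of mine, not from my real schema. The key schema is fixed when createTable is called, and every query must supply the hash key:

var AWS = require('aws-sdk');
AWS.config.update({ region: 'us-east-1' });
var db = new AWS.DynamoDB();

// The hash/range key schema is decided once, at table creation, and cannot be altered later.
db.createTable({
  TableName: 'articles',
  AttributeDefinitions: [
    { AttributeName: 'userId',    AttributeType: 'S' },
    { AttributeName: 'createdAt', AttributeType: 'N' }
  ],
  KeySchema: [
    { AttributeName: 'userId',    KeyType: 'HASH'  },
    { AttributeName: 'createdAt', KeyType: 'RANGE' }
  ],
  ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 }
}, function (err) { if (err) console.error(err); });

// Every query must supply the hash key; you can only narrow further by the range key
// or by a secondary index that was also defined at creation time.
db.query({
  TableName: 'articles',
  KeyConditions: {
    userId: { ComparisonOperator: 'EQ', AttributeValueList: [{ S: 'user-1' }] }
  }
}, function (err, data) { if (err) console.error(err); else console.log(data.Items); });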

Reason 2: Use MongoDB if you need the features of a document database as your NoSQL solution.
If you are going to save documents like this:
{
  _id: 1,
  name: { first: 'John', last: 'Backus' },
  birth: new Date('Dec 03, 1924'),
  death: new Date('Mar 17, 2007'),
  contribs: [ 'Fortran', 'ALGOL', 'Backus-Naur Form', 'FP' ],
  awards: [{
      award: 'National Medal of Science',
      year: 1975,
      by: 'National Science Foundation'
  }, {
      award: 'Turing Award',
      year: 1977,
      by: 'ACM'
  }]
} 
(sample document from the MongoDB documentation)
With a document database, you can query by name.first, or by whether some value exists in a sub-document of awards. DynamoDB, however, is a key-value database and supports only scalar values and sets: no sub-documents, and no complex indexes or queries. It's not possible to save the sub-document { first: 'John', last: 'Backus' } under name, and accordingly it's not possible to query by name.first.
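For example, with the sample document above stored in a hypothetical people collection, the mongo shell queries look like this:

db.people.find({ 'name.first': 'John' });            // match a field inside a sub-document
db.people.find({ 'awards.award': 'Turing Award' });  // match a value inside an array of sub-documents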

Reason 3: Use MongoDB if you are going to use Perl, Erlang, or C++.
The official AWS SDKs support Java, JavaScript, Ruby, PHP, Python, and .NET, while MongoDB supports more languages. I used node.js to build my backend server, and both the AWS SDK for node.js and the mongoose library for MongoDB work very well. It's really pleasant to use mongoose for MongoDB: it is in active development, and the defects I report to mongoose get fixed quickly. I also have experience using the AWS SDK for Java and morphia for MongoDB, and both of them work perfectly. The SDKs for AWS and MongoDB are all well designed and widely used. But if your programming language is not on the official support list, you need to evaluate the quality of the third-party SDK carefully. I once used an unofficial Java SDK for AWS SimpleDB; it was also good, but I could still easily hit defects. For example, it could not handle the Boolean type in its object persistence model and produced bad results.

Reason 4: Use MongoDB if you may exceed the limits of DynamoDB.
Be careful about the limits and read them carefully if you are evaluating DynamoDB; it is easy to exceed some of them. For example, the value stored in an item (the value of a key) cannot exceed 64 KB. That is easy to exceed when you let users input content: a user may paste a 100 KB text as an article title by mistake. There is a workaround: I divide the content across multiple keys when it exceeds the limit, and aggregate them back into one key in a post-processing stage after reading the data from DynamoDB. For example, if the content of an article may exceed 64 KB, then in the pre-processing stage before storing to DynamoDB I split it into article.content0, article.content1, article.content2, and so on. After reading from DynamoDB, I check whether article.content0 exists; if it does, I continue checking article.content1 and so on, combine the values into article.content, and remove article.content0, article.content1, etc. This adds complexity and extra dependencies to your code. MongoDB does not have this limitation.
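Here is a minimal sketch of that split/merge workaround in JavaScript; the field names mirror the description above, and the chunking is by character count, which only approximates the byte limit for non-ASCII text:

var CHUNK_SIZE = 63 * 1024; // stay safely under the 64 KB limit (approximate: characters, not bytes)

// Pre-processing: split long content into content0, content1, ... before writing to DynamoDB.
function splitContent(item, content) {
  for (var i = 0, n = 0; i < content.length; i += CHUNK_SIZE, n++) {
    item['content' + n] = content.substring(i, i + CHUNK_SIZE);
  }
  return item;
}

// Post-processing: after reading the item back, reassemble the content and drop the chunk fields.
function mergeContent(item) {
  var parts = [];
  for (var n = 0; item['content' + n] !== undefined; n++) {
    parts.push(item['content' + n]);
    delete item['content' + n];
  }
  if (parts.length > 0) item.content = parts.join('');
  return item;
}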

Reason 5: Use MongoDB if you are going to have data types other than string, number, and base64-encoded binary.
In addition to string, number, binary, and arrays of these, MongoDB supports date, boolean, and a MongoDB-specific "ObjectId" type. I use mongoose.js, and it supports these data types well: when you define a data structure for object mapping, you can specify the correct type. Date and Boolean are quite important types. With DynamoDB you can use numbers as an alternative, but you still need additional logic in your code to handle them. With MongoDB you get these data types natively.
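For example, here is a sketch of a mongoose schema using these native types; the model and field names are illustrative:

var mongoose = require('mongoose');

var TaskSchema = new mongoose.Schema({
  owner:     { type: mongoose.Schema.Types.ObjectId, ref: 'User' },
  title:     String,
  done:      { type: Boolean, default: false },   // stored as a real boolean, not 0/1
  createdAt: { type: Date,    default: Date.now } // stored as a real date, not a number
});

var Task = mongoose.model('Task', TaskSchema);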

Reason 6: Use MongoDB if you are going to query by regular expression.
A RegEx query might be an edge case, but it may matter in your situation. DynamoDB provides a way to query by checking whether a string or binary value starts with some substring, and provides "CONTAINS" and "NOT_CONTAINS" filters when you do a "scan"; but as noted above, "scan" is quite slow. With MongoDB you can easily query any key or sub-document with a regular expression. For example, to match a user's name against "John" or "john", you can query with a simple regular expression such as { name: /[Jj]ohn/ }, while this cannot be done in DynamoDB with a single query.
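As a sketch with mongoose (assuming a hypothetical User model with a name field), the same query is one line:

var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/test');
var User = mongoose.model('User', new mongoose.Schema({ name: String }));

// MongoDB evaluates the regular expression on the server, on an indexed or un-indexed field.
User.find({ name: /[Jj]ohn/ }, function (err, users) {
  if (err) return console.error(err);
  console.log(users);
});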

Reason 7: Use MongoDB if you are a big fan of document databases.
10gen is the company behind MongoDB, and they are very active in the community. I asked a question on Stack Overflow, and Dylan, a Solution Architect at MongoDB, actively followed up on it, helped me analyze the issue, looked for the cause, and also gave some very good suggestions on MongoDB. That was a really good experience. In addition, the MongoDB community is willing to listen to users. Amazon is a big company; it's not easy to get in touch with the people inside, not to mention influencing their decisions and roadmap.

Bonus tip: Read the DynamoDB documentation carefully if you are going to use it.
For example, there is an API "batchWriteItem". This API may return no error but still include an "UnprocessedItems" field in the result. This is a somewhat surprising pattern: when I make a call, I expect the result to be either success or failure, but this API has a third status, "partially done". You need to re-submit the "UnprocessedItems" yourself until none are left. I didn't notice this because it never happened during testing; however, under heavy traffic, when the request rate to DynamoDB exceeds your provisioned capacity for several seconds, it does happen.
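Here is a sketch of that re-submit loop with the AWS SDK for node.js; the table and item are illustrative, and a production version should also back off between retries:

var AWS = require('aws-sdk');
var db = new AWS.DynamoDB();

function batchWriteAll(requestItems, done) {
  db.batchWriteItem({ RequestItems: requestItems }, function (err, data) {
    if (err) return done(err);
    // A successful response can still carry items that were not written (e.g. throughput exceeded).
    if (data.UnprocessedItems && Object.keys(data.UnprocessedItems).length > 0) {
      return batchWriteAll(data.UnprocessedItems, done); // re-submit until nothing is left
    }
    done(null);
  });
}

batchWriteAll({
  articles: [
    { PutRequest: { Item: { id: { S: '1' }, title: { S: 'Hello' } } } }
  ]
}, function (err) { if (err) console.error(err); });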

Hold on: before you make the decision to use MongoDB, please read 3 Reasons You Should Use DynamoDB over MongoDB.

3 Reasons You Should Use DynamoDB over MongoDB

Recently I posted a blog entry to share my experience of migrating from MongoDB to DynamoDB. The migration was smooth, and here is a summary of the 3 reasons we did it:

Reason 1: Use DynamoDB if you are NOT going to have an employee to manage the database servers. 
This is the number 1 reason I migrated from MongoDB to DynamoDB. We are launching a startup, and we have a long list of requirements from early-adopter users that we want to satisfy. I need to develop the Windows/Mac OS/Ubuntu software and the iPhone/Android apps, and also work on the server that provides data synchronization among these apps. Kelly is not a technical person and has no experience managing servers. Some people say you can become a web developer in 21 days, but server troubleshooting is really not that easy. With only 15k users and 1.4 million records, I started to get into serious trouble: as described in the last post, the more data I stored, the worse the database latency became. Once I set up sharding and replica sets for the shards, I can imagine database management taking a big portion of my time. With DynamoDB you can avoid database management entirely; AWS manages it very well. It has been one week since I migrated the database, and everything works very well.

Reason 2: Use DynamoDB if you don't have the budget for dedicated database servers.
Because I didn't have much traffic or many data records, I used 2 Linode VPS instances as database servers: 1 GB RAM, 24 GB disk. The 2 database servers were grouped as a replica set, with no sharding yet. Ideally they should handle my current data scale very well, but in practice they didn't. Upgrading the database servers would cost more and might still not resolve the issue. There are managed MongoDB services, but I may not be able to afford them: with the current user base, the MongoDB database occupies 8 GB of disk for data and 2 GB for the journal file, so with a managed MongoDB service I would need a 25 GB plan starting from a US$500 monthly fee. With more traffic and users it would cost too much. Before the migration I tested DynamoDB by migrating all the data, 1.4 million records; the actual space used was less than 300 MB. (I'm not sure how a managed MongoDB service measures usage; I used a command in the mongo console to get the disk usage statistics.) My first week of cost on DynamoDB was US$0.05. That was the last week of July; let's see how much it will cost in August.
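For reference, this is roughly how I check the disk usage in the mongo console; the database and collection names are illustrative and the actual numbers are omitted:

db.stats()
// { "db" : "myapp", "dataSize" : ..., "storageSize" : ..., "fileSize" : ..., ... }

db.records.stats()   // the same statistics for a single collection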

Reason 3: Use DynamoDB if you are going to integrate with other Amazon Web Services.
For example, full-text indexing of the database. There are solutions for MongoDB, but you need to set up additional servers for indexing and search and understand those systems. The good thing is that MongoDB provides full-text indexing, but I can imagine that full-text indexing for multiple languages is not easy, especially Chinese word segmentation. Amazon CloudSearch is a full-text indexing solution for DynamoDB. Another example is AWS Elastic MapReduce, which can be integrated with DynamoDB very easily. For database backup and restore, Amazon also has other services that integrate with DynamoDB. In my opinion, as the major NoSQL database in Amazon Web Services, DynamoDB will get more and more features, and you can speed up development and reduce the cost of server management by integrating Amazon Web Services.

However, DynamoDB has its shortcomings. Before you make the decision to use DynamoDB, please read 7 Reasons You Should Use MongoDB over DynamoDB.

Monday, July 29, 2013

LEAN7: Migrate from MongoDB to AWS DynamoDB + SimpleDB

Migrate from MongoDB to DynamoDB + SimpleDB: New Server Side Architecture Ready for More Users

We recently reached 14,000 registered users, a small portion of them paid users. I feel that TeamViz is getting recognized, with more and more sales (still a very small number) generated every month. However, I started to run into trouble with the server architecture mentioned in this post. The issue is that the MongoDB-backed database gets locked for several minutes, for an unknown reason, every couple of hours. Initially all requests would hang for 2 minutes every 2 hours and 7 minutes; now it has become worse, and all requests hang for 7 minutes every 2 hours and 7 minutes. I asked this question on Stack Overflow, but there is no answer yet. So I can either increase the capacity of the servers or shift to another database. We are small, and I can try different solutions.

Because all connections are held for several minutes, the connection graph on the load balancer looks like this. (At the beginning I thought the server was being attacked, but no one would attack a server every 2 hours and 7 minutes, for a whole month, right? ^_^)


So here are several possible solutions: use another NoSQL database, or use a managed NoSQL database service. My first thought was to look for other NoSQL databases; I have read a comparison of NoSQL solutions, this link about a NoSQL benchmark, and this link about Couchbase. Every NoSQL database has some pros and cons.

I then talked with Kelly about the cost of servers, the cost of managed services, and the possibility of shifting to another NoSQL provider or even to MySQL. The conclusion was that the current issue with MongoDB is just a start: we might spend more and more time managing databases and resolving performance or other unknown issues, and that would cost a lot of energy. Our focus is to provide a better product. There is a lot of fun in playing with NoSQL and other cutting-edge technology, but that's not our goal; shifting to a managed database service helps us focus on providing features and fixing issues in the product itself. At least we have a long list of features and issues to work on. So we shifted to AWS DynamoDB and, to reduce the cost, put part of the data in AWS SimpleDB. The server side was almost rewritten to handle the database change. I took this chance to practice the Promise pattern in node.js (it works great!) and leveraged the middleware provided by the Express framework. In addition, data from DynamoDB and SimpleDB is held in memcache. Everything has worked well for 24 hours (except for some error logs from memcache).
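Here is a sketch of that read-through cache in front of DynamoDB, using the memcached npm module and the AWS SDK for node.js; the table, key, host, and TTL are illustrative, not our production values:

var AWS = require('aws-sdk');
var Memcached = require('memcached');

var db = new AWS.DynamoDB();
var cache = new Memcached('127.0.0.1:11211');

function getItem(id, callback) {
  cache.get('item:' + id, function (err, cached) {
    if (!err && cached) return callback(null, cached); // cache hit

    // Cache miss: read from DynamoDB, then populate the cache.
    db.getItem({ TableName: 'items', Key: { id: { S: id } } }, function (err, data) {
      if (err) return callback(err);
      cache.set('item:' + id, data.Item, 600, function () {}); // keep for 10 minutes
      callback(null, data.Item);
    });
  });
}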

Here is the picture 10 hours after the migration. The huge periodic traffic spikes disappeared.

Here is the new architecture for the database and sync servers.

You may have concerns about accessing AWS from Linode; currently it's fine. We have more than 1.3 million items in one DynamoDB table, and the response time to get one record by key from the Linode network is 25 ~ 45 ms. SimpleDB has fewer than 20k items and also responds in 25 ~ 45 ms.

Some notes about the new architecture:
- Why Linode: much cheaper than AWS EC2.
- Why AWS DynamoDB and SimpleDB: don't want to worry about managing database.
- memcached is supposed to work independently; we use Couchbase because it provides automatic clustering.
- Still, the design goal is to scale out. Every machine is independent, and we can add more sync servers and memcached servers independently.
- Future plan: we still need a message queue. AWS SQS does not provide a way to post an event to multiple subscribers simultaneously; RabbitMQ can do it. But a message queue is not urgent so far.
- Future blog: I will share more experience on using SimpleDB and DynamoDB.

Sunday, July 14, 2013

LEAN6: 3 Reasons Not to Do an Unnecessary SDK Upgrade

3 Reasons Not to Do an Unnecessary SDK Upgrade

I used ExtJS to build my productivity tool TeamViz. Recently ExtJS released 4.2.1 while I was still using 4.1.1a. After checking the release notes of 4.2.1, I was excited to see some fixes and performance improvements, so I decided to upgrade. I read the upgrade guide from 4.1 to 4.2 and estimated it could be completed within 1 hour. In reality I spent 2 days on it. Here are more details about what happened during this upgrade.

  • Dependency tools. My project is generated using Sencha Cmd, which can generate an initial framework based on Ext JS so you can start your work quickly. First I replaced the library with ExtJS 4.2, and it worked well. But when I used Sencha Cmd to compile the project, errors appeared: some things changed in the ExtJS 4.2 framework, and just replacing the JS/CSS/resource files does not work, because Sencha Cmd relies on some auto-generated config files. So I decided to also upgrade Sencha Cmd from 3.0 to 3.1, generated the project again using the command sencha -sdk ~/ext-4.2.1.883 generate app TeamViz ./TeamViz, and then replaced files based on the generated sample project. Later, when I compiled on Ubuntu 32-bit and 64-bit machines, and on Windows, I also needed to upgrade the toolset for Sencha Cmd.
  • Fixes or regressions. Every time a new version of an app/SDK is released, there are bound to be some regressions along with the fixes. After the upgrade I had some issues with mouse enter/leave events; my instant tools on items were broken. They worked in the normal case but broke in some special scenarios. After digging into the Ext JS 4.2 code, I found it was a regression in Ext JS 4.2 and made a workaround to resolve it. The workaround could be technical debt for a future release, but it was the most efficient way to resolve it for now.
  • Undocumented APIs. When I implemented the complicated drag & drop in my app, I used undocumented APIs and actually injected some code into the drag & drop process of Ext JS. When I upgraded to ExtJS 4.2, the hacked part had changed. I needed a full round of testing to find it, and then to resolve it. I think there may be other potential issues not found so far.
Actually the upgrade was not necessary: there was no bug report directly related to the SDK, and the existing version worked very well. For a startup, where every day counts, the upgrade may not be worth it when you compare the risk and the benefit.

Wednesday, July 10, 2013

SDK to Sync Tasks: Dropbox vs Evernote vs Google Apps Tasks vs Jira

Today Dropbox published a blog post about their new Datastore API; the amazing feature is offline support. I have previously investigated other popular task API providers and want to share a quick summary. I won't discuss Outlook/SkyDrive/calendar-related offerings, and will focus on companies that intend to be service providers.

1. Introduction to Providers


  • Dropbox: Datastore API in beta; a well-designed and elegant API for tasks.
  • Evernote: Evernote does not provide a real SDK or functionality for tasks, but personally I want to turn Evernote into a task/project management tool. You can attach your own data to every note, which is enough for a client tool to filter the notes marked as tasks and categorize them. The API documentation is here.
  • Google: Google Apps Tasks API. Google has provided the Tasks API for several years, and there are some tools and Chrome plugins built on it.
  • Jira: The enterprise project management tool. They also provide a REST API, and Jira has a best-in-class feature set.

2. Features, Pros, Cons

  • Dropbox
    • Features: 
      • Provides a datastore API to handle tables and records. The Datastore API is an API for a generic remote key-value database, so you can easily build your task management tool on top of it.
      • Supports being offline temporarily. The SDK keeps all its data locally and works when your app goes offline temporarily; accordingly, it provides a way to sync data and resolve conflicts.
      • SDK: a JavaScript SDK for the web, plus iOS and Android SDKs.
    • Pros:
      • Flexibility: Because the API handles a generic NoSQL database remotely, app developers have enough flexibility to add their own fields and store what they need.
      • Temporary offline support: this is essential for mobile apps because they can easily go offline. I can imagine the Dropbox API greatly improving the user experience on mobile devices.
      • SDKs in JavaScript, iOS, and Android let you bootstrap the integration quickly.
      • Potentially, when you need larger storage for the content/attachments of a task, Dropbox would be the best candidate.
    • Cons:
      • It's still in beta, so there is not enough support for search/filter on the server side. With a big data set this would be a problem in the current release; however, I expect Dropbox will improve it very quickly!
  • Evernote:
    • Features:
      • Evernote does not provide a way to directly create tasks and projects. It provides an SDK to create and manage notes. Notes can contain richly formatted text, images, and other resources, and you can categorize them by notebooks or tags. Application data can be attached to notes, so you can manage status/estimates/priorities as application data on a note. A task management model for Evernote could be:
        • Put all task notes in a special notebook
        • Use tags/parent tags to build a hierarchy of projects
      • SDK: Objective-C, Java, PHP, Ruby, Python, Perl, C#, C++, ActionScript
    • Pros:
      • All your data is visible in the Evernote clients on the web, Windows, Mac, iOS, and Android. The official Evernote apps are of very high quality.
      • You can search on the server and leverage great Evernote features like OCR. This is unique compared with all the other providers.
    • Cons:
      • Even though you can add tasks/checkboxes inside a note, that is not a direct way to manage them.
      • Evernote is designed for notes; you need some workarounds to make it work as a task management tool.
  • Google Apps Tasks API:
    • Features:
    • Pros:
      • Better for integration with other Google apps.
      • A simple but complete feature set for task management.
    • Cons:
      • No way to extend it. For example, if I want to add an estimate to a task, no such task property is supported, and there is no flexibility to add custom fields.
  • Jira
    • Features:
      • Jira is already an ENTERPRISE task management tool for team planning and project tracking.
      • SDK: REST API
    • Pros:
      • Really feature-rich; generally you can get everything done on the web.
      • You can deploy Jira Server to your private cloud or internal networks.

3. Summary of Unique Features

  • Dropbox: Allows being offline temporarily, and handles sync/conflict resolution well inside the SDK, so developers don't need to worry about it. Also provides the best flexibility for app design.
  • Evernote: Rich formatting for note contents, plus a powerful search capability.
  • Google Apps Tasks: A complete API dedicated to simple task management.
  • Jira: Provides a way to deploy the server to your internal network.

Finally, let's get back to TeamViz, my task management tool. The goal is to support completely offline work: users can use it as a standalone tool, and can also sync with other desktop and mobile apps. None of the models above meets my goal; the closest one is what Dropbox released today, the Datastore API. But it supports only being temporarily offline; you still need to be online to access your data.

Wednesday, June 19, 2013

2 reasons why we select SimpleDB instead of DynamoDB

If you search Google for "SimpleDB vs DynamoDB", you will find a lot of helpful posts. Most of them give you 3 to 7 reasons to select DynamoDB. Today, however, I'll share some experience of using SimpleDB instead of DynamoDB.

I had some issues when using DynamoDB in production, and finally found that SimpleDB fits my case perfectly. I think the choice between SimpleDB and DynamoDB should NOT rely on performance or the benefits of DynamoDB/SimpleDB in general; instead, it should be based on their limitations and the real requirements of your product.

Some background: I have some data previously saved in MongoDB, and the amount of data will most likely not exceed 2 GB in SimpleDB. We decided not to maintain our own MongoDB database servers, but to leverage AWS SimpleDB or DynamoDB to reduce the cost of ops.

Both SimpleDB and DynamoDB are key/value databases. There are workarounds to store a JSON document, but they introduce additional cost. The data structures in my MongoDB are not too complicated and can be converted to key-value pairs. Before you choose SimpleDB or DynamoDB as your database backend, you must understand this fundamental point.

Reason 1: Not flexible on indexing. With DynamoDB you have to set the indexing fields before creating the table, and they cannot be modified afterwards. This really limits future changes. DynamoDB supports 2 modes of data lookup, "Query" and "Scan". "Query" is based on the hash key and secondary keys and gives high performance, but when you query data, the hash key must be provided. For example, suppose we have an "id" key as the hash key. When querying by "id", we get the best performance. But when we query only by a field "name", we have to shift to "Scan" because the hash key is not used, and the performance of "Scan" is totally unacceptable because AWS scans every record. I created a sample DynamoDB table with 100,000 records, each with 6 fields. With "Scan", it takes 2 ~ 6 minutes to select ONE record by adding a condition on one field. Here is the testing code in Java:

DynamoDBScanExpression scan = new DynamoDBScanExpression();
scan.addFilterCondition("count", new Condition().withAttributeValueList(new AttributeValue().withN("70569")).withComparisonOperator(ComparisonOperator.EQ));
System.out.println("1=> " + new Date());
PaginatedScanList<Book> list = mapper.scan(Book.class, scan);
System.out.println("2=> " + new Date());
Object[] all = list.toArray();
System.out.println(all.length); // should be 1
System.out.println("3=> " + new Date()); // 2 ~ 6 minutes compared to the time after "2=>", in most cases around 2 minutes

SimpleDB does not have this limitation: SimpleDB creates an index for EVERY field in a table (actually AWS uses the term "domain", and MongoDB uses "collection"). I modified the code a little bit and tested on SimpleDB; here are the results, with a sketch of the equivalent select call after the list:

  • Query 500 items (using "limit" to get the first 500 items in a "select" call) with no condition: about 400 ms to complete. The sample application was running on my local machine; if it ran on EC2, it should finish within 100 ms.
  • Query 500 items with 1 condition: also about 400 ms to complete.
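For reference, here is a sketch of such a select call with the AWS SDK for node.js; the domain and field names are illustrative:

var AWS = require('aws-sdk');
var sdb = new AWS.SimpleDB();

// SimpleDB indexes every attribute, so filtering on "name" needs no pre-defined index.
sdb.select({
  SelectExpression: "select * from books where name = 'John' limit 500"
}, function (err, data) {
  if (err) return console.error(err);
  console.log((data.Items || []).length);
});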
Reason 2: Not cost effective for our case. DynamoDB charges by provisioned read/write capacity per second. Please note that the capacity is based on the records you read/write, not on the number of API calls, no matter whether you use batch calls or not. Here are more details from my test. I used the batch API to send 1000 records of more than 1000 bytes each. It takes 50 seconds to finish the batch when the write capacity is set to 20/second. If I keep the application running and change the capacity to 80/second on the AWS console, one batch takes 12 to 25 seconds (ideally it should be 1000/80 = 12.5 seconds; the extra time comes from network latency, because I'm sending more than 1 megabyte of data per API call).

In our case, we may read 500 records from SimpleDB into memory and then read nothing for the next 10 minutes. With SimpleDB we can complete that in 500 milliseconds. With DynamoDB we would have to set the read capacity to 1000 reads/second, which costs $94.46 per month (via the AWS Simple Monthly Calculator). With SimpleDB, it costs less than 1 dollar.

Conclusion: DynamoDB is really designed for high performance; SimpleDB has more flexibility. What I mean by "really designed for high performance" is: if you choose DynamoDB, you must make sure your architecture is designed for high-traffic dynamic content. If it is, DynamoDB may match your requirements perfectly. In our case, SimpleDB is enough, with excellent flexibility, and it is cost effective. Before looking for comparisons of SimpleDB and DynamoDB, design your architecture first. DynamoDB is good, but it does not fit everyone.


Sunday, June 2, 2013

Cross Platform - Initial Idea

I worked on a commercial product for 7 years, one with more than 400 million dollars of revenue per year. That product runs on Windows and Mac, has a lite version on the web, Android, and iPhone/iPad, and has data interoperability across all the platforms. We investigated various possible techniques to support cross-platform development using C#/C++/Objective-C, with frameworks like Qt, as well as other approaches like HTML+CSS+JavaScript. I want to share my working experience with some technologies that support cross-platform development.

Decades ago, when the 2nd operating system came into the world, the need for cross-platform development appeared. We need to choose the target platforms based on current market share. Here are the major target platforms:
- Desktop
 - Microsoft Windows
 - Mac OS X, Apple Inc.
 - Linux (my favorite distribution is Ubuntu)
from https://www.netmarketshare.com/, February 2013

- Mobile
 - Google Android; there are also differences between handsets and tablets.
 - Apple iOS; there are also differences between iPhone and iPad.

from https://www.netmarketshare.com/, February 2013

For this series of articles, I'll start with this roadmap:
- Programming Language for cross-platform development.
- Review of frameworks for cross-desktop development, e.g. Qt, Mono, wxWidgets
- Review of web as platform: HTML 5, Native Client
- Review of frameworks supporting multiple mobile platforms, e.g. PhoneGap, Appcelerator/Titanium

Sunday, May 26, 2013

LEAN5: Does user really need data synchronization

Does user really need data synchronization across multiple devices?

As mentioned in this post, I started a new version of PomodoroApp that finally supports data synchronization across multiple devices. The initial idea was: PomodoroApp is a cross-platform application, and in the future it will be on both iPhone and Android. I spent a lot of effort on supporting data sync, and spent money on servers. Let's see the result:


After data sync became available, visits to my site still grow at the same stable rate (the red line with a slope); the new version with data synchronization does not look like it had any impact on the slope. Still, since users keep coming at a stable, increasing rate, I hope more people will be interested in buying a license. The "pricing-plan" page was set as the goal page, so you can see the conversion rate: the conversion rate to the "pricing-plan" page stays around 20%, with no change when this new feature was introduced.

So, on the desktop side, data synchronization may not be a "must-have" feature. I thought it was a "should-have"; actually it's a "nice-to-have". This is somewhat anti-lean-startup: "data sync" is not what users requested most, and users never mentioned mobile support before. What they write to me about most is still the user experience.

The good thing is that more and more customers request to make it available on mobile devices. The mobile app is not released yet; let's see what will happen once data sync across mobile and desktop is available.

LEAN4: 3 Lessons Learned on Creating Cross-Platform App

3 Lessons Learned on Creating Cross-Platform App


As what happened last year was summarized in this post, I'll continue to share what I did recently. I have the ambition to support 5 platforms: Windows/Mac OS X/Ubuntu/iPhone/Android. It was not as smooth as I expected, and here are some lessons learned.

  • Don't use web technology to make native desktop applications unless you have no choice. My first version of PomodoroApp was built on Qt with C++. Not counting my time in school, I have 10 years of experience using C++ on commercial software; starting from 2002 I worked as a part-time programmer on backend services for GIS applications, and now I enjoy the new features of C++11. Native applications really perform well on each platform, and C++ makes it easier for me to optimize. However, because of the rise of web technologies, I decided to use ExtJS and TideSDK for the new version. The simple initial idea was to share code for PomodoroApp on mobile devices: I can use Appcelerator Titanium or PhoneGap to reduce the effort of making apps for iPhone and Android, and Sencha provides Sencha Touch for mobile, so I can share most of the code with ExtJS. Ideally I could share 80% of the code across Windows/Mac OS X/Ubuntu/iOS/Android, and the only thing I would need to revise per platform is the UI. However, here is a list of issues I had to resolve:
    • Limitations. Desktop and mobile devices have specific limitations. For example, the SQL API in TideSDK is synchronous, while on mobile the SQL API in a web page is actually the HTML5 SQLite (Web SQL) API, which has a 5 MB limit. Worse, it provides only an asynchronous API, so the logic of the code differs from the desktop version. Being clear about the limitations is important, because some features may simply not be achievable.
    • Performance. To some extent it depends on the libraries selected. Web technology may look good at the beginning, but I'm sure it gets really sluggish as more and more components are added to one page. In addition, with ExtJS in a single-page application, it's really easy to leak memory.
    • Dependencies. C++ can access platform features directly, while JavaScript cannot; what your application can achieve depends on what the cross-platform SDK provides. For example, in PomodoroApp 2.x I updated the application icon in the dock/taskbar; that is impossible here unless you can add a new API to TideSDK.
  • Don't clutter your core business logic library with platform guards. It's acceptable to handle different logic in the library with something like if (platform is Windows) {...}. However, I didn't imagine the complexity at the very beginning, until I started porting to mobile devices. When there are 5 platforms to support, the guard sections in the code are really where bugs come from, especially in JavaScript, because there is no compile-time verification. (See the sketch after this list.)
  • Triple your estimate when porting from desktop to mobile. Even if you have lots of reusable code, there may be lots of issues that never happened on the desktop side. For example, Apple App Store review rejected my app several times, and I had to resolve all the issues they raised; generally it takes 5 working days to get a review result.
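As a sketch of the alternative to scattered guards (my own illustration, not PomodoroApp's actual code): detect the platform once and hide it behind an adapter, so the business logic never branches on the platform itself.

// Detect the platform once; Ti is the Titanium global on mobile (the check is illustrative).
var platform = (typeof Ti !== 'undefined') ? 'mobile' : 'desktop';

var adapters = {
  desktop: {
    saveRecord: function (record, done) { /* synchronous TideSDK-style SQL, wrapped in a callback */ done(null); }
  },
  mobile: {
    saveRecord: function (record, done) { /* asynchronous Web SQL / Titanium DB call */ done(null); }
  }
};

var db = adapters[platform];

// Core business logic talks only to the adapter, never to `platform` directly.
function completeTask(task, done) {
  task.done = true;
  db.saveRecord(task, done);
}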


Saturday, May 18, 2013

LEAN3: Updates After 1 Year

Updates After 1 Year

It has been about 1 year since my last blog post about The Lean Startup in April 2012. What happened during this year?
  • My main responsibility changed from desktop software to cloud services. It was a perfect chance to start working on new technologies. The bad thing is that I was really busy and exhausted, and didn't have time to take care of PomodoroApp. I spent a lot of time learning the new technologies, programming languages, and the existing design of the system in the new division. Everything on PomodoroApp was stopped for 8 months, until last Christmas.
  • My baby was born. Kelly and I are very happy to have our first baby. Kelly spent a lot of time taking care of the baby and didn't have enough time for marketing/customer service/UX design.
  • The traffic to the PomodoroApp website is increasing every month at a very stable rate. You can see the traffic report from Google Analytics below:

  • I'm getting more and more familiar with JavaScript and cloud technologies, and I'm now confident with them.
At the end of 2012 I had 2 weeks of vacation. I decided to use the 2 full weeks for a brand new version of PomodoroApp, with the following major changes:
  • Programming language and libraries: Versions 1.x and 2.x were built on Qt, a C++ library. I used the old but mature UI technology, CSS styling, to create beautiful UI controls. Qt and C++ are powerful and fast on every platform. I also wrote some code in Objective-C on Mac and against COM interfaces on Windows to handle OS-specific features, e.g. the dock bar icon. With my recent experience, I'll shift to TideSDK and ExtJS. With TideSDK I can create beautiful and unique desktop apps using web technologies. ExtJS is a web application library with a lot of samples, but it may take more time to learn. Other libraries like jQuery UI were also considered, but they don't have the necessary features in place for me.
  • Data synchronization service: PomodoroApp is cross-platform, currently on Windows/Mac/Linux, and will support iPhone/Android as well, so there is no reason not to synchronize data across devices. I use node.js + MongoDB as the backend, and host servers on Linode, DigitalOcean, and Windows Azure. That's a really strange infrastructure architecture; the purpose is to reduce cost and to try servers from different providers.
  • Mobile platform: Appcelerator Titanium is really fantastic for cross-platform mobile development. Again, it needs some time to learn at the beginning.
So with ideas on every aspect settled, I started to work with full energy, because I don't always have this kind of long vacation for my own projects. I started the first release of PomodoroApp at the end of 2011, mainly because I had 2 weeks of vacation then and needed to take advantage of it. I have to admit that I also made some bad decisions when building on the idea. I've listed some topics and will write blog posts to share my experience and lessons learned in the future. So, what about the "Lean Startup"? This is a summary of what happened in the past year; I'll start a new blog post about the details of this new version.

Sunday, May 12, 2013

A Story of "Design for Failure"


Now that we have come to the era of cloud computing, what's the most important factor you can imagine for the cloud? You may think of scaling. It could be; scaling is very important when your business gets bigger and bigger. You may think of backup; it always should be. You may also think of programmable computing resources. That's a really important concept from AWS: machines are programmable, and you can programmatically add or delete a machine within seconds, instead of purchasing from a vendor and deploying it to a data center. You can allocate a new reliable database without depending on an operations team. However, as a startup, my business is starting from scratch and I do everything myself. In my practice, "Design for Failure" is really the top priority at the very beginning.

With AWS providing EC2 and other vendors providing VPS, it is common sense to use a VPS instead of building your own data center when you are not so big. Scaling is not so important because I'm still very small; limited machines are enough to support the current scale of users, though I did design for scaling in the future. Design for failure? Yes, I had considered it, but not so seriously. My VPS provider, Linode, claims 99.95% availability, and Linode has a very good reputation in this industry. I trust them.

Some background on my online service: I released a new version of the desktop application PomodoroApp at the end of 2012, supporting data synchronization across computers. Users rely on my server to sync data. It's yet another new service on the Internet that nobody knows about; I'm not sure whether tomorrow will bring 1 new user or 1,000. Although I designed a reliable and scalable server architecture, I deployed a minimum viable architecture to reduce cost; perhaps nobody will use the service next week. There are 2 web servers, one hosting my website and another hosting a node.js server for data synchronization, which provides only REST services; I'll call it the sync server. There is 1 MongoDB database server instance. Each one can be a single point of failure, which is acceptable if I get 99.95% availability. My sync server is under a very low load, so I configured it to also be the secondary of the MongoDB replica set, and the server code supports reading data from the replica set.



Everything ran very well for the following 2 months. I kept improving the server and adding new features. Users came to my service from Google, blogs, Facebook, and Twitter, and increased at a stable rate. When I have new code, it takes just 1 second to restart the service. On February 17th, 2013, for an unknown reason, the database server went out of service. Nobody knows the reason; Linode technical support managed to fix the issue. When the database server went down, the secondary database on the sync server became primary, and all data reads/writes switched to the database on the sync server automatically; this may take about 1 minute, depending on the timeout settings. So the outage of the database server had no impact on my sync service.

However, I was just lucky with the incident of Feb 17. Just 3 days later, my sync server went down, and I couldn't even restart it from the Linode management console. The outage took 55 minutes; I got alerts from the monitoring service Pingdom, and also reports from customers. This was the first lesson: single points of failure do happen. I decided to add more sync servers, and consequently a load balancer became necessary for the 2 sync servers. In addition, I added a 3rd replica set member which lags 1 hour behind the primary server; in case any data gets corrupted, I can recover it from that delayed member. You may ask why 1 hour of delay instead of 24 hours. Ideally there should be multiple delayed replica set members. In my production environment the user count is still small and there is no need for sharding so far, but my new features and changes to existing code are only tested in the dev environment; when I deploy to the server, they might damage it, and I need a backup plan for that case. Even though there are still SPOFs, it's much better :)
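For reference, here is a sketch of how such a delayed member can be added in the mongo shell; the host name and member id are illustrative, and slaveDelay is in seconds:

rs.add({
  _id: 2,
  host: 'backup.example.com:27017',
  priority: 0,      // never becomes primary
  hidden: true,     // invisible to client applications
  slaveDelay: 3600  // applies the oplog 1 hour behind the primary
});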

The real disaster happened on May 11, when I was going to deploy a new version which resolved some issues in the database; the new version handled index creation. I use a web-based admin tool to manage my MongoDB instances. When I connected to the production database for final release testing, I happened to find a duplicated index on a collection. I wasn't sure why that happened, so I deleted one of them in the admin tool; the tool reported that both indexes had been deleted. Later, when I continued testing and tried to sync data to the server, I got an error that the commit to the database failed, which had never happened before. Then I used the MongoDB console to check the collection. What surprised me was that the whole collection was lost, and it could not be created again. I shut down the MongoDB server and tried to restart it. It failed! The database log showed "exception: BSONObj size: 0 (0x00000000) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO". Googling the exception did not help much. So finally I had to recover the database. Fortunately I have a replica set member with a real-time mirror of the database, and another member with a 1-hour delay. I spent about 2 hours fixing the issue, but my sync service stayed online and functioned well, because I had already "stepDown" the primary and the secondary was acting as primary; the troubleshooting did not hurt my online service. MongoDB really did an excellent job with the replica set pattern.

Initially I decided to recover the database from the replica set member with the 1-hour delay, but it's in another datacenter: I used scp to copy the data files at only 1.7 MB/second, and I have 9 GB of data in total, so copying would take a long time. Then I checked the new primary and fortunately found that it (the old secondary) was in good shape and its data files were not corrupted. So I stopped the new primary and spent about 2 minutes copying all the files at 29 MB/second within the same datacenter. Again, it's still a very small business; 2 minutes of outage is acceptable, because my client software supports offline mode: it has a local database and can work without Internet access, and when the network is available it syncs to the server. Some users even disable the sync feature because they don't want to upload any data to a server. After all the files were copied, I restarted MongoDB; it took several seconds to recover the uncommitted data from the oplog and to re-sync from the primary server. Everything works well now. MongoDB rocks!

Even though I have the ultimate backup plan designed and tested in my client software, this still made me very tense. That backup plan is: if the whole database is lost, I can still recover all the data, because my client software supports offline mode and keeps a copy of all of each user's data. Automatic data recovery from the user's machine to the server is already in place.

This story is the first real disaster for me so far. I respect my VPS provider Linode, and I respect the companies and communities behind the Linux server, node.js, and MongoDB. But it is really a must to keep "design for failure" as the top priority even when you are very small. The hardware may fail, the software may have bugs, the IO or the memory may become corrupted, and hackers may want your server. People say the only thing that never changes is change; my lesson is that the only thing that never fails is failure. Without these lessons, "Design for Failure" would never have had such a tremendous impact on my future designs.