December 11, 2013

Enable HTML5 Offline functionality for your gDrive-hosted web apps!



Application Cache is a pretty neat HTML5 feature: it lets us enable offline functionality, faster loading times and reduced server load by caching a local copy of specific files the first time the user accesses them. It can also be used to route the user to a special offline version of our HTML code on specific parts of our site/app.

To do this, one must create a special manifest file. It is a very simple text file where we specify the offline cache behavior for our app; the most important parts are the list of documents that should be cached and the creation date. The date is not technically required for the manifest to work, but it is what tells the browser to update the cached version automatically the next time a user logs in; without it, the cache will never be updated and the only way users could get updates is by clearing their browser's cache. It is also important to note that most browsers won't allow more than 5 MB of cache for a single site.
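For reference, a minimal manifest might look like this (the file names are placeholders for your own app; the date comment is the part you change to force an update):

```
CACHE MANIFEST
# version 2013-12-11

CACHE:
index.html
app.js
styles.css

NETWORK:
*
```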

We must also add a reference in our HTML code to tell the browser to look for the App Cache manifest. HTML5 makes this very easy: it's a very short declaration in the html tag of our app, for example:
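Assuming our manifest is named example.appcache and sits in the root folder, the declaration looks like this:

```html
<!DOCTYPE html>
<html manifest="example.appcache">
  <head>
    <title>My Offline App</title>
  </head>
  <body>
    ...
  </body>
</html>
```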



Here we are telling the browser to look for a file called example.appcache in the root folder and use it as an App Cache manifest.

I recommend reading a more detailed description of how to make an App Cache file on this w3schools.com article.

It sounds so easy, right? But here's the trick: the name isn't really relevant. The .appcache extension is just a standardization convention; technically it could be .awesomeOffline and still work (but please don't ;) standards make collaborative work easier). What actually matters is the MIME type the server assigns to that file, which must be "text/cache-manifest", and that must be configured on the server. Most HTML hosting services don't have this option, and that includes the awesome Google Drive HTML hosting service. But, as with most things in this digital world, you can hack a way around this limitation.

To add the manifest file, we first need the ID of the root folder we are using for our application's gDrive HTML hosting (in this article you can read more about how to host your HTML/JS/CSS code on gDrive). Remember that you can find the folder's ID when you open it in the web version of gDrive: it's at the end of the URL in your browser.

Now that we have the folder ID, let's create a new script project and hit the code!

Let's take a good look at the code; it's very simple!
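Here is a sketch of what that script can look like. FOLDER_ID and the cached file list are placeholders you must adjust, and I'm using DriveApp here; the important detail is the MIME type passed as createFile's third argument:

```javascript
// Build the text of an App Cache manifest. Pure helper, no Drive calls.
function buildManifest(files, version) {
  return 'CACHE MANIFEST\n' +
         '# version ' + version + '\n\n' +
         'CACHE:\n' + files.join('\n') + '\n\n' +
         'NETWORK:\n*\n';
}

// Create the manifest in the gDrive hosting folder with the MIME type
// that makes App Cache work: text/cache-manifest.
function createManifest() {
  var FOLDER_ID = 'YOUR_HOSTING_FOLDER_ID'; // placeholder: your folder's ID
  var body = buildManifest(['index.html', 'app.js', 'styles.css'],
                           new Date().toISOString());
  DriveApp.getFolderById(FOLDER_ID)
          .createFile('example.appcache', body, 'text/cache-manifest');
}
```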



The most important part is the MIME type. Once the file is created you can edit it to update its contents; personally, I prefer to delete it and run the script again to create a new one, so I'm sure it's got the right date and everything.

AppCache opens a new world of possibilities for HTML5 web apps and web sites! Just be very careful when choosing which parts to cache; usually you should do this only with parts that don't change very often, like the libraries (and most of the JavaScript code) and the CSS. Used properly, it will greatly enhance speed and therefore the user experience, and that must always be our main priority; remember that we are now entering a coding age where UX design really makes the difference.

Feel free to post your requests, questions, ideas, or simply say hi! on g+ or at the end of this article. Your feedback is very important and greatly appreciated.

Happy coding!

September 15, 2013

ACID compliance with Live, Non-SQL, Eventually Consistent DBs


One of the main complaints I hear all around about Non-SQL is ACID compliance, with developers making crazy "fixes" to handle eventual data consistency. As I mentioned in this previous post, I feel this is mostly because developers are trying to emulate the SQL ways; they are also used to being told how to handle everything. In the new Non-SQL world, it all boils down to the developer's creativity and, mostly, their ability to find a syncretism between different approaches and platforms.

Non-SQL and eventual consistency require a lot of extra creativity, but they also give lots of extra flexibility and open the path to new and smarter software.

SQL likes to impose a lot of limitations and rules on the developer and, in most cases, on the user, especially when it comes to data formats and relations. In Non-SQL, the JSON format is usually the way to go, and JSON doesn't care if your data is a Boolean, string, number, URL... as long as it's text and you pack it in a {'key':'value'} format, it's fine with it. It also allows different kinds of data to be treated as equal, which makes it easy to create a very powerful "Universal Searchbox". Programmatic backups and clean-ups are easy and transparent, and you can join all these backups and use them to make awesome reports, by using each backup's data as a point in time and drawing flow charts... Neat, huh? But unlike SQL, you won't find many guidelines telling you how to face these concepts, so you really need to know how to achieve all this in a very precise and creative way... or things can get pretty nasty.
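As a quick sketch of that "Universal Searchbox" idea (the inventory data here is made up for illustration): because everything is plain JSON, one tiny function can match any property of any kind of object:

```javascript
// A toy "universal searchbox": every record is plain JSON, so a single
// function can search across very different kinds of objects.
function universalSearch(records, term) {
  var t = String(term).toLowerCase();
  return records.filter(function (rec) {
    return Object.keys(rec).some(function (key) {
      return String(rec[key]).toLowerCase().indexOf(t) !== -1;
    });
  });
}

// Heterogeneous items living side by side, no schema needed.
var inventory = [
  {name: 'screw #8', soldBy: 'weight', grams: 4.2},
  {name: 'fridge', serial: 'FRG-001', price: 499},
  {name: 'screwdriver', barcode: '7501031311309'}
];

// Matches any property of any object, whatever its type.
console.log(universalSearch(inventory, 'screw')); // finds 'screw #8' and 'screwdriver'
```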

Let's see how we can face ACID compliance first:

Atomicity: The "all or nothing" rule. This is important, but instead of thinking of the whole point-to-point process as a single atomic transaction, here you must "split" it into different transactions. The most critical part is usually the live data; I personally like Firebase for this, so the most critical atomicity is between the client's AngularJS (or AngularFire) and the Firebase servers, and this is already solved in AngularJS, with a lot of people constantly checking and upgrading it. The servlets are atomic too, and it's easy to program them to push a new line with the details of an error to a special log. But even if there is a major problem in a servlet or servlets, Firebase should stay up and running, so the critical parts of the system keep working as normally as possible. So I recommend always aiming for "isolated atomicity".

Consistency: Only valid data must be written. But again, one must be more specific about what "valid data" is. In SQL, if you send a number instead of a string, hell unleashes... even if your string is one character too long (or too short) it just won't work. On paper this sounds like a way to prevent problems (for the IT staff), but pushing extra complexity onto the user for the programmer's convenience doesn't seem right anymore. By using JSON, your consistency check focuses only on whether the data is represented correctly (e.g. [{"key 1":"value 1"},{"key 2":"value 2"}]), not on its contents. This makes everything much more flexible.
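A sketch of how small that consistency gate can be (the function name is just illustrative):

```javascript
// Minimal "consistency" gate for incoming data: accept anything that is
// well-formed JSON in the expected array-of-objects shape, without caring
// about the types or lengths of the values themselves.
function isConsistent(payload) {
  var data;
  try {
    data = JSON.parse(payload);
  } catch (e) {
    return false; // not even valid JSON
  }
  return Array.isArray(data) && data.every(function (item) {
    return item !== null && typeof item === 'object' && !Array.isArray(item);
  });
}

console.log(isConsistent('[{"key 1":"value 1"},{"key 2":"value 2"}]')); // true
console.log(isConsistent('not json at all'));                           // false
```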

Isolation: Ideally you will be using a modular approach when working with Non-SQL, usually involving different platforms, so you should not only isolate the transactions but actually split everything into parts for eventual data consistency. Usually this can be done by splitting your system into small live DBs with geographical location as the main focus; this usually prevents two people from working on the same thing at the same time.

Durability: One very important thing about DBs is backing up your data. Ideally you want to take automatic, periodic snapshots of your DBs (this is very easy with Firebase and Google Drive using a GAS servlet, by the way). And once you have periodic snapshots, wouldn't it be nice to put them all together and see how the data changes through time? This can be done with BigQuery, Datastore and a GAS servlet to import the data from Firebase. Usually you will add the "snapshot" as a new bucket in Datastore at the same time you make a historical backup on Drive, but you can adjust this according to your needs and budget: to lower costs, you can back up once a week on Drive and once a month on Datastore (for BigQuery).

Now let's think up a crazy example to give you an idea of how you can put together an imaginary piece of software to handle a few warehouses. Since this is Non-SQL, this model can be scaled up or down as needed, and new "modules" can be added, or the existing ones adjusted or replaced by a different approach, without problems.

I really recommend an MVC architecture, so you can separate the UI from the back-end and the controllers. This will let you focus on one problem at a time, and it will also simplify future updates, fixes and additions.

In this imaginary example we are going to be using 3 different HTML5 web apps:
  • Warehouse Administration: This should allow each warehouse to handle all its contents in real time, so all the employees are aware of where everything is and notice any change instantly. If something comes in, goes out or changes its place in the warehouse, the change is instantly reflected in the view of all the warehouse personnel; this way, no matter how fast things change and how many tasks are happening in parallel, everyone's work stays in sync.
  • Administrative Central Office: This is the software for the staff who take part in the business decisions. They do not need to know the whereabouts of each screw in the warehouse in real time, but rather the statistical data that comes out of the warehouse operations: mostly product sales performance, personnel efficiency and all sorts of statistical data analysis. So this part of the web app should focus on transforming many (possibly large) historical files into easy-to-understand charts and reports.
  • IT support: This department should be able to access all the DB instances, the backups and the web app code, and also be able to make changes to any part of the system quickly and as transparently to the user as possible. They should also be able to communicate with all the staff to provide assistance, and the system should detect, file and report any problem in the system in real time.
Now let's elaborate more on the characteristics of the system:
  • All the people in the same warehouse should have instant, real-time access to that specific warehouse's data to keep things agile, even when many transactions are happening at the same time.
  • The warehouse staff must be able to quickly find any specific object or group of objects stored in the warehouse by providing any arbitrary property (name, bar-code, serial, size, position, etc), and the system must show the matching object(s) instantly and with live data.
  • The people of one warehouse don't need to know all the time what's happening at the other warehouses.
  • Every movement inside the warehouse must be logged: things like what the change was, who did it, why and when must always be properly logged.
There must be the following chats:
  • For each warehouse, the administrative staff and the IT support staff.
  • Shared across all the warehouses.
  • Between the administrative staff and each warehouse, separately.
  • Between the administrative staff and all the warehouses.
  • Between each warehouse.
  • Between anyone and the IT support staff.
There must be the following logs:
  • When something is moved inside the warehouse.
  • When something enters the warehouse.
  • When something leaves the warehouse.
  • When there is a system error.
  • When someone requests assistance from the IT staff.
  • All the personnel's everyday check-in and check-out.
  • Every activity from each member of the warehouse staff.
Before going into the implementation details, I made this simplified data flow to give you a better picture. Note that not all the links are displayed, and the solutions proposed here are only an imaginary example; I strongly encourage you to experiment and adapt the general idea to best fit your needs, budget and style.


Let's browse this from left to right, okay? Here we go:

  1. First we have the HTML5 app used by the warehouse staff; this is where the real action happens. Here the employees of the warehouses receive their orders, find things and make reports about every movement of the merchandise.
  2. To handle this we are using 3 Firebase DBs, one for each warehouse; this isolates the data and allows for precise tracking of each warehouse's activity (Firebase includes an Analytics service). To connect Firebase with HTML5 I suggest AngularJS (AngularFire). Note: a single Firebase DB could handle all you need, but we split it for isolation, to balance the quota and to keep costs to a minimum. Also, there is a 10 MB limit for exporting/importing data over most platforms, so splitting, periodically backing up and cleaning the Firebase DBs (always in that order) is important. If 10 MB is too small for you, you will need to "zonify" your DBs: split the Firebase DB and back the zones up separately, e.g. [{"Zone 1":{},"Zone 2":{}}]. Since this is done by GAS servlets and not by the HTML5 app or Firebase itself, you can make changes to this as much as needed, usually without "downtime", and we always have the option to roll the servlet back to a previous version.
  3. We have an additional Firebase DB to handle the chats; you can handle as many chats as you need using a single Firebase DB. It is not in the diagram, but you may wish to add an extra GAS servlet to log the chats and erase them (from Firebase), usually at the end of the day.
  4. Then we have 2 GAS JSON servlets that use GAS URL Fetch to get backups from Firebase (through Firebase's REST API) and pass the data to Drive and Datastore. After backing up, it's time to call another servlet to "clean" the DB of articles with "null" values; this avoids wasting space on items that are no longer in the warehouse. We keep them until the backup for historical reasons, and it's also a best practice when dealing with eventual consistency (even if it's not strictly needed in this part).
  5. After that we have a Drive account that receives the backups from one of the previous servlets. I recommend doing this programmatically at least once a week, and creating a special folder for each year (or month, if you have lots of backup files). These files are mostly a redundant safety feature, and they also make it very easy to find the most recent backup in case of some serious problem: simply upload the right backup file from Drive to the desired DB using the Firebase web administrator; once the backup finishes uploading, the data will be instantly updated for all the users connected to that DB (but the data created between the backup and the recovery will be lost). I recommend making a backup before a recovery, even when there is a problem, so the data can be analyzed later and something can be recovered if needed.
  6. Then we have the Cloud Storage JSON API, which receives the backups from the other servlet; you can find out how to do this in this official How-To. We need the information in Cloud Storage so BigQuery can use it.
  7. Now we have 2 JSONP servlets. The difference between a JSON and a JSONP servlet is that JSONP has a special padding (also called a "prefix"), which is used so the client's code knows which function should handle the response; it's the one we usually need when using GAS to link Google services and HTML5 web apps. The first servlet communicates between Google Drive and the HTML5 app; the second one is responsible for receiving queries from the client's web app, passing them to BigQuery, waiting for the response, pre-processing it to best fit your app's needs and delivering it back to the web app when it's done, or reporting (to the user and the central error log) that something went wrong.
  8. Then you will see the IT support HTML5 web app. This one is mostly used for client support, but it should also have direct access to all the servlets, the log files, analytics data about the web app's use and the analytics for the main components (Firebase DBs, each servlet's number of calls, etc), and it should automatically show alerts when some servlet reports a problem. The focus of this app is to monitor the app's performance and integrity, and to facilitate updating the app and the IT support chats. A tip: use a GAS ScriptDB central library to count the number of times the servlets are being called; you can also use it as a central cache and for transactions that are susceptible to same-time overwriting.
  9. And finally, the HTML5 web app for the administrative staff. Here almost everything is based on historical data, usually parsed into charts for convenience; they care about warehouse, merchandise and personnel performance, and that's what one must focus on when building this web app.
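To make point 7 concrete, the "padding" is nothing more than the JSON wrapped in a client-chosen function name. A minimal sketch of both sides (all names here are illustrative):

```javascript
// Server side (e.g. inside a GAS doGet): wrap the JSON in the callback
// name the client asked for, so the browser executes it as a script.
function padResponse(callbackName, data) {
  return callbackName + '(' + JSON.stringify(data) + ');';
}

// Client side: the function named in the padding handles the data.
function handleReport(data) {
  console.log('rows received: ' + data.rows.length);
}

// Simulate the round trip (in a page this would arrive via a <script> tag):
var body = padResponse('handleReport', {rows: [1, 2, 3]});
eval(body); // prints "rows received: 3"
```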

Hope you enjoyed reading this article as much as I did writing it, and that it inspires you to create the perfect data flow for your next project. Don't be shy: write your thoughts about this article. I love reading comments, ideas, corrections and suggestions, and I particularly appreciate critiques.

Happy coding!

Here is some of the material I recommend reading for this kind of project:

http://cesarstechinsights.blogspot.mx/2013/08/insights-tip-non-sql-db.html

http://cesarstechinsights.blogspot.mx/2013/04/how-to-sharestore-spreadsheeds-as-json.html

http://addyosmani.com/resources/essentialjsdesignpatterns/book/

https://developers.google.com/datastore/docs/apis/v1beta1/

https://developers.google.com/bigquery/loading-data-into-bigquery

http://angularfire.com/

https://www.firebase.com/docs/rest-api.html

https://developers.google.com/apps-script/defaultservices

http://databases.about.com/od/specificproducts/a/acid.htm

https://developers.google.com/storage/docs/json_api/v1/how-tos/upload

http://stackoverflow.com/questions/16239819/performance-of-firebase-forge-with-large-data-sets

https://developers.google.com/storage/docs/json_api/v1/how-tos/upload#multipart

https://developers.google.com/apps-script/external_apis

https://github.com/GoogleCloudPlatform/storage-metabucket-javascript

August 31, 2013

Insights and tips for working with Non-SQL Databases


The problem with Non-SQL DBs is that most developers don't really know how to use them in business-grade solutions, and they tend to keep all the "bad habits" and approaches of SQL... A big problem they face is when they want to make relations between data sets and they need a "direct dependence" like in SQL. I see the relational approach as one of SQL's biggest flaws, and not the other way around. Why?

Well, first, it makes development focus more on the data model (due to its complexity) than on the user experience and the UI's needs; this is why systems tend to be complicated to use while providing very little flexibility for the user. Also, relational data doesn't always handle parallel data manipulation well (due to the relations between tables). Software architecture must follow hardware architecture for best performance, and hardware is focusing on parallel processing; so does modern software, and this requires a parallel data model.

Another problem with the relational approach is that, since lots of things depend on others, it's very easy for something to go wrong, and this usually ends with an error message, or at least with not being able to perform an operation.

Developers really need to learn how to use Non-SQL systems. JSON is a great system for data administration; if you know how to use it properly, it can do things that developers and users alike think of only as "utopias".

A simplified explanation of these "new" data-model concepts:


Opposite to the horrible "normalized" data in SQL, here every object holds all the properties and sub-properties related to it (de-normalization / high data replication); the system only groups objects that share some specific property or properties when performing a query. This gives the system great flexibility and stability: even if there is data corruption, only a fraction of the data gets damaged, and it's usually easy to "clean" it and get back to normal, if that ever happens.

When you talk about de-normalizing data, everyone thinks the table will become HUGE, expensive and slow, but JSON is actually very space-efficient: it's just text, commas and braces! Data gets formatted in a very efficient way compared to SQL; you would be amazed at how much data you can hold in 100 MB. Also, smart systems allow data to grow without impacting performance significantly. Another thing that keeps JSON small is that it does not waste space declaring dependencies and kinds. It is also highly compressible (due to the high data replication) and therefore works great with data compression (Apps Script has special zip methods for this, and most servers support it or can be set to support it); use this when you need to favor bandwidth over processing time.

Objects do not need to declare the kind or size of the data they hold; JSON doesn't care about or need this information. Again, this provides outstanding flexibility.

Let's use a warehouse as an example of data stored in the DB. SQL couldn't be worse for this; still, she's the queen of these realms. A shame she's such an old queen, unwilling to catch up with the times. Why? Well, this lady forces everyone to follow her rules and expectations, and refuses to accept things she doesn't know and understand well in advance. Also, she expects uniformity and is a big fan of bureaucracy. Her reign has been long and now everyone is used to her ways; a different reality seems too utopian to really work, and most of those who have tried to taste the Non-SQL kingdom, after a life of following the Queen's demands, start feeling lost and keep trying to emulate the old ways in the new realms. Of course, things go bad.

The real-life warehouse holds objects of all kinds, very different from each other, and they change over time; new things come with new characteristics and may no longer use some of the old ones. Even though they can be very different, they are all together, and if peace is to come to the warehouse, everyone must be treated as equal. So the little screw that is sold by weight (instead of by piece) and the refrigerator with a serial number can be stored next to each other, opening the path to a smart, universal search where you type any property or name, simply get the matching result or results, and then decide what to do with what you found; one also expects to be able to do anything (reasonable). Note this also makes the universal search the universal window (to some extent).


Every object knows all its details, you know, like in real life... so it doesn't need to have its information spread across different tables. Inhabitants of the SQL kingdom are always trying to keep a different table for everything, grouping everyone according to some arbitrary attribute or relationship, but in the Non-SQL world this is evil... possible, but evil. When certain information changes and you need to change other objects' properties in response, fire warnings or simply keep a record, most developers have problems working over Non-SQL DBs: they want special tables and fixed relations to handle this. But here you don't bend reality to fit your model; it's the other way around. So, if the amount of screws changes, or if the fridge got sold, there are separate "robots" (usually JSONP servlets) that are notified about this AFTER the information in the table changes; this way, the user doesn't need to wait extra time for all the bureaucracy to be done. These robots update a separate object's values (in a different table, the same table or even both), write a record to a list (a special Drive file, for example), notify an external server or perform any kind of action. Google Apps Script is great for this; these GAS robots also bridge your app, the data model, the client and the DBs when needed.

Also, these robots are specific and work in parallel. Each one knows who is authorized to use it and who isn't, and can keep track of who called it, for what and when, without affecting its performance. If one fails, it's easy to spot it and return to a previous version while a fix is made; in the meantime, everything else will usually keep running and the system will just be missing that part of the processing. Smart programming can handle updating everything back to normal once the fix is done, all this with no, or just partial, downtime. Once the fix is done, you simply update the public version of the GAS robot, and that's it. This kind of thing just doesn't happen so easily in the SQL kingdom.

Also, this way it makes more sense to plan your eventual data consistency, and for this the key is to prioritize: to know where the data must be up-to-this-second and where it can be a little bit outdated. Don't get me wrong, all data can be live; it's possible to have 10,000 users working live over 10 TB of data (have you seen Firebase yet?), but it will be expensive as hell... In this realm it's not about whether it's possible (there are many ways to do a lot of things); the tricky part is actually to balance everything, in this case live data vs. cost. My best tip: keep the reports and notifications in a non-live DB.


One way to keep your live JSON data clean is to make programmatic backups and, after the backup, clean the live-data table of all objects that represent empty things (tip: this is where you must erase something, when its amount reaches 0 or hits a null value, not before, for historical-report convenience and to avoid errors in eventual data consistency handling).

These backups are best done by having a robot (GAS servlet) query all the JSON in the DB, stringify it and save it in a special Drive file, with the date as the name, in a specific Google Drive folder; another way (best for small/medium applications) is to dump the data inside a Google Docs Spreadsheet (exclusively or additionally). The appearance of this new file must trigger another robot that will search for the objects that need to be cleaned and erase them from the live data. No downtime or risk of losing data that arrived during the backup/clean process.
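A sketch of that clean-up step in plain JavaScript (the "amount" field and the sample data are assumptions for illustration):

```javascript
// After a successful backup, drop the records that represent empty
// things: amount 0 or null. The field name "amount" is an assumption.
function cleanAfterBackup(liveData) {
  var cleaned = {};
  Object.keys(liveData).forEach(function (id) {
    var item = liveData[id];
    if (item && item.amount !== 0 && item.amount !== null) {
      cleaned[id] = item;
    }
  });
  return cleaned;
}

var live = {
  a1: {name: 'screw #8', amount: 120},
  a2: {name: 'old fridge', amount: 0},   // sold out: erased after backup
  a3: {name: 'cable', amount: null}      // nulled: erased after backup
};
console.log(cleanAfterBackup(live)); // only a1 remains
```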

Tips for professionals: If you require fast, direct communication between your robots, with the possibility to "hold" specific data writes until someone has finished, Apps Script's ScriptDB is your solution. If you require an extra punch of GAS robot performance, sometimes the Apps Script Cache Service holds the key. To import data, prefer REST and use the Apps Script URL Fetch Service. To export data and communicate with traditional servers, use the Apps Script JDBC Service and/or the Apps Script XML Service. To export data to your HTML5 web app, use the Apps Script HTML Service or the Apps Script Content Service. If all you care about is price and scalability, go for the Google Cloud Datastore JSON API; for live (but much more expensive) data, go for Firebase or the Google Drive Realtime API (here you can find a great GitHub example of how to use the RT API in AngularJS, created by the awesome programmer +Steven Bazyl), and to handle the async data loading, AngularJS works great. Mix and balance things for best results.

But where this method really shines is when it's time to make historical analyses of the data, as long as it's done with the right tools, of course. The very best way to do it is through BigQuery: it's capable of importing and processing terabytes of data in seconds, and GAS can then pass the results to Google Charts to make amazing displays of data. You can easily compare how specific things behave over time, even across years of countless data. If you are using Spreadsheets to store your data, you can save a lot of trouble (and money) by using Fusion Tables instead to merge and analyze your data.


Well, first and foremost, very few people know how to properly use these technologies; it's not what you find in common books, learn at school, or can simply ask your old teachers about. It's new stuff! Also, its implementations, being based on open standards, work with so many things and in so many ways that it depends a lot on the development team's skill and imagination to balance cost, performance, compatibility and flexibility. Most developers are used to being told how to do things, to following standard procedures and guidelines; this all applies here, but the material is yet to be written. This opens the path for those ready to push the limits and show what they can do; now more than ever, developers can show their skills and stand above even the rulers of the SQL kingdom.

The trend right now is to build convenient systems that are accessible from any platform. Users expect the system to adapt to their needs; now everyone asks that the software adapt to them and help them do things, and hates needing to learn and adapt to the system's needs, being forced to help it do what it was made for...

For all this (and so much more), a flexible web-based model like HTML5, with a flexible UI like CSS3, cross-platform code like JavaScript, a CORS-friendly data model like JSON and a Non-SQL back-end makes sense for the future of applications. And remember that your workflow must be at least as flexible and responsive as your data and UI.

Thanks for reading and happy coding!

August 27, 2013

How to get all your Google Drive folders and URLs with GAS JSONP servlet




Using all the folders of a Google Drive account as a collaboration point over an external web app sounds pretty convenient under certain scenarios, especially for web-based Centralized Business Management.

Apps Script makes this easy with Drive Services: with very little effort you can make a GAS JSONP servlet that queries and packs all the folder names and URLs in a single JSON string. This is especially useful if you are trying to merge different accounts over a single interface. You can get pretty creative about how to implement this.

To make your own GAS JSONP servlet to handle your G.Drive folder names & URLs, create a new Google Apps Script project and copy the following code:
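A sketch of such a servlet (the "prefix" parameter name and the JSON shape are illustrative choices):

```javascript
// Pure helper: pack an array of {name, URL} objects as a JSONP body.
function packFolders(folderList, prefix) {
  return prefix + '(' + JSON.stringify(folderList) + ')';
}

// GAS JSONP servlet: collects every folder's name and URL and returns
// them wrapped in the callback name the client passes (?prefix=...).
function doGet(e) {
  var prefix = (e && e.parameter && e.parameter.prefix) || 'callback';
  var folders = DriveApp.getFolders();
  var list = [];
  while (folders.hasNext()) {
    var folder = folders.next();
    list.push({name: folder.getName(), URL: folder.getUrl()});
  }
  return ContentService
      .createTextOutput(packFolders(list, prefix))
      .setMimeType(ContentService.MimeType.JAVASCRIPT);
}
```

Remember to publish the script as a web app so doGet() becomes reachable by URL.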


Depending on what you need, you might add more data about the folders; these are the most relevant methods:

1.- getAccess(email): Checks the user's rights on this folder.

2.- getDateCreated(): To know the date when the folder was created.

3.- getDescription(): To get the folder's description.

4.- getId(): This one is particularly useful when working with JSONP; you can use the padding to tell the servlet to do something to a specific folder using the getFolderById(id) method.

5.- getFiles(): It's possible to add all the files inside each folder to our JSON package; just add another while loop inside the folder's while loop. This way you get all the folders' names and URLs along with all of their files' names and URLs. Always use parallel loading when doing this, since in most cases the servlet will take some time to parse all this data.

6.- getSize(): Gets the number of bytes used to store the folder in Drive, in case you need to calculate loads.

Here is a very basic script to call the servlet from your HTML. For testing purposes the prefix is set to "alert", so the data will be displayed in a pop-up as soon as it loads. For real-life applications, change the "alert" prefix to the name of the JavaScript function responsible for parsing the JSON. I also recommend using AngularJS to handle parallel data loading when calling different servlets.
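A minimal version of that HTML (assuming the servlet takes the callback name in a "prefix" query parameter) would be:

```html
<!DOCTYPE html>
<html>
  <body>
    <!-- JSONP: the browser runs the response, so prefix=alert pops the
         JSON up as soon as it loads. Swap "alert" for your own handler. -->
    <script src="https://script.google.com/macros/s/AKfycbyixGlm9VJb9RuK_1ZYbsLrduuee4RX2v27mzKvtiFiTm5WaKRK/exec?prefix=alert"></script>
  </body>
</html>
```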


The HTML already includes the URL of a fully working servlet; feel free to use it for testing. But before you can use it, you need to grant access to the servlet; this is a one-time requirement and it's done by manually opening the servlet's URL:

https://script.google.com/macros/s/AKfycbyixGlm9VJb9RuK_1ZYbsLrduuee4RX2v27mzKvtiFiTm5WaKRK/exec

Stay tuned for articles with more advanced uses of this approach, feel free to post your ideas, comments or suggestions.

Happy coding!

August 23, 2013

How to backup your FirebaseDB on Google Drive using Apps Script!




I've recently started using Firebase, and I've got to say that it's pretty awesome! Especially if you use AngularJS as your front-end: you can start making live apps in minutes, and thanks to Google Drive HTML hosting you don't even need to waste time looking for a place to host everything.

I really recommend trying the example "Wire up a Backend" found on the home page of the AngularJS project. You won't believe at first that "that was it"; it's so fast and easy that I'm sure you'll think there is something else to do before it actually starts working.

Firebase can do a great job as a Live Data back-end, but it's not very appropriate for historical records for a few reasons:

* Storing static backups takes up too much space, and space should be used wisely (the free 100 Mb quota can run out quickly if you are not careful).

* Downloading all the data every time you want to make a report consumes a lot of bandwidth quota, especially when you fetch many backups to make a time graph with Google Charts.

* It's best to save those precious connections, since those come with a price too if you exceed the free 50 concurrent connections quota.

So, instead, it's best to save a backup of your precious Firebase data in a safe place. Firebase already has a button that lets you save a backup from the interface anytime you want... and that's good! But let's make it great by storing it automatically in a specific folder, using the date as the file name. Google Drive's free quota is more than large enough for millions of backups in most applications, there are no bandwidth limits, it can handle up to 10 requests per second per user, and you get 10,000,000 free requests a day... so feel free to let users query and build those cool graphs all they want from Drive. This way you use only 1 connection, for a very short time, and you only need 1 download (usually once a day) to get a copy of your data.

So, how do you make both services communicate? Well, using Firebase with Google Apps Script is a piece of cake thanks to Firebase's REST API, the Apps Script URL Fetch Service and the Apps Script DocsList Service. Let's see how simple it is:


Since Firebase's REST API allows us to get a copy of the DB with a simple URL, we can use it to fetch all the information directly via URLFetch. In this example we simply store the text inside a specific Google Drive folder, but it's also possible to parse and pass the data to ScriptDB, Spreadsheets, Charts, Gmail, etc.
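As a rough sketch of what such a backup script can look like (the Firebase URL and folder ID below are placeholders, and the Drive/fetch calls are shown as comments because they only run inside Apps Script), the only real logic is building a date-based file name:

```javascript
// Placeholder for your database's REST export URL.
var FIREBASE_URL = 'https://your-db.firebaseio.com/.json';

// Build a file name like "2013-08-23.json" from a Date.
function backupName(d) {
  function pad(n) { return (n < 10 ? '0' : '') + n; }
  return d.getFullYear() + '-' + pad(d.getMonth() + 1) + '-' +
         pad(d.getDate()) + '.json';
}

// Inside Apps Script, the daily backup is then just two calls:
// var json = UrlFetchApp.fetch(FIREBASE_URL).getContentText();
// DriveApp.getFolderById('YOUR-FOLDER-ID')
//         .createFile(backupName(new Date()), json);
```

Attach that function to a daily time-driven trigger and the backups accumulate on their own.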

So give it a try; you don't even need to build the example to try this. You can register with Firebase and manually create a DB with some dummy data to work with; Firebase's web interface is extremely easy to use, so you won't have any problems with this.

Promising, huh? Well, if you want to do some serious data processing, you can use BigQuery or Fusion Tables to put all that data together and watch how it evolves through time. Also, if anything ever goes wrong, simply upload your most recent backup from Drive; Firebase makes this very easy with its "Import JSON" option.

Looking for a way to connect your brand-new and super-cool Firebase live DB with your client's "traditional" web server? You can use the Apps Script XML Service, the Apps Script JDBC Service, or pass the data to ScriptDB to have a frozen copy that responds to GET or POST requests via the Apps Script Content Service. This last one is very useful for making charts on web pages using the Google Charts JavaScript API and some JSONP.

Just keep in mind, when you are working with Firebase, to be very careful with your quotas (they are meant to last a month), and use whatever technique you find appropriate to reduce data consumption.

Feel free to let us know your ideas and opinions about these platforms at the bottom of this article, and stay tuned for more articles about this great technology combo.

Happy coding!

August 7, 2013

Ways to share anything with anyone with Gdrive public HTML hosting!


Looking to share your files with the world, but just sharing Gdocs isn't good enough for you? Don't worry! Google Drive has a special service called public HTML hosting, which allows you to transform any Gdrive folder into a virtual "web server"! When others access the page, they see a list of all the folder's public content (by default, everything). If they click on a file that isn't something Gdrive can open directly (like images or docs), the user's browser will simply download the file.

Let's see an example. This Gdrive folder contains a .rar file; since Gdrive cannot open this kind of file directly, when you click on it, your browser will start downloading it:

https://googledrive.com/host/0B_RClkFMLkcpdXlOZWdwM2JuWUk/

You can even put it inside an iframe to show its contents directly on a website!
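For example, an iframe pointing at the hosted folder above could look like this (tweak the width and height to taste):

```html
<iframe src="https://googledrive.com/host/0B_RClkFMLkcpdXlOZWdwM2JuWUk/"
        width="100%" height="300" frameborder="0"></iframe>
```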



Sweet, huh? The best part is that the content stays in sync with your Gdrive, so if you change your content, the public version is instantly updated! You can even host webpages! Just throw an index.html file into that folder, and when users open the folder's URL, they will receive the content of index.html instead of the default folder view.

But that's not all you can do... If instead of using the folder's URL you point to a specific file, you can use that URL to share the file with someone, and when they open it, the file will start downloading automatically (except for docs, HTML and images)... but hey, that's not even the cool part! You can use these file-specific URLs to stream multimedia content for your webpages! You don't even need to worry about a player; you can use HTML5 video or audio tags and show your content straight away!
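A minimal player is then just a video tag whose source is the file's hosting URL (the URL below is a placeholder; use the host link of your own video file):

```html
<video width="480" controls>
  <source src="https://googledrive.com/host/YOUR-FOLDER-ID/your-video.mp4"
          type="video/mp4">
  Your browser does not support the HTML5 video tag.
</video>
```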

This works even for large videos. Here is a video of one of my cats (his name is Volt) playing Zelda:


Right on huh? But no, you cannot have my cat... but you can host your pet's video and share it with us in the comments section at the bottom of this article!

To make things easier, I also added a copy of a very nice HTML5 video converter for Windows (there is a version for Mac OS users on the author's site); it does a great job getting your video ready for some HTML5 action.

Just remember, you are responsible for what you share on the internet, so be careful with how you use this tool; never distribute proprietary material like music, movies or videoclips.

Happy coding!

July 26, 2013

How to send data to Spreadsheets using Apps Script UI.

@reicek

One really cool thing about Google Spreadsheets is that you can effectively use them as an extremely user-friendly cloud Data Base for your WebApps.

Unlike a normal database, spreadsheets can pre-process the information they receive and automatically keep calculated values updated (like totals, averages, etc...). They are extremely flexible, and with proper planning they can be a great alternative for small-to-medium applications.

Tip: You can programmatically create new spreadsheets to keep the information light and organized (remember Spreadsheets have a quota limit of about 400,000 cells); it's like having a new database each week/quarter/month. You can even have an extra one that keeps the totals for each year (storing only totals taken from the smaller spreadsheets), and on top of that, a "global" one that keeps the totals of all the yearly ones... And with the data stored that way, creating reports and charts is very easy.
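As a tiny illustration of that rotation idea (the naming scheme below is my own invention), the only logic needed is deriving a period-based name; in Apps Script the creation itself is a single SpreadsheetApp.create() call, shown here as a comment:

```javascript
// Build a database name like "spreadsheetDatabase-2013-Q3" from a Date.
function dbName(date) {
  var quarter = Math.floor(date.getMonth() / 3) + 1;
  return 'spreadsheetDatabase-' + date.getFullYear() + '-Q' + quarter;
}

// Inside Apps Script, at the start of each period:
// var ss = SpreadsheetApp.create(dbName(new Date()));
```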

Extra tip: For the "central" databases, use Spreadsheets for simple data, Fusion Tables for advanced central data, or BigQuery if you need fast processing on very, very large amounts of data.

The easiest way to store and administer the information flow to a spreadsheet is via Apps Script, and the easiest way to send information to Apps Script is using UI services. For more advanced applications I suggest the use of servlets, which are basically small Apps Script programs acting as very specialized servers to query, process and deliver specific data to and from your webapp (using the Content Service), or even to and from a JDBC-compliant database (Google Cloud SQL, MySQL, Microsoft SQL Server, Oracle, etc.) using the JDBC Service. You can call it a "trusted-robot-in-the-middle" approach to make things easier and faster.

Advanced professional tip: Properly using servlets along with your usual JDBC-compliant database is extremely useful, since it lets you greatly improve performance: Apps Script servlets have fast, direct access to all Google services, and once authorized they don't need to deal with OAuth every time they make a request, which is very convenient. Also, when Google makes updates, you don't need to update your server's code to regain functionality, only the servlet, and you can do that from virtually any modern browser (even some mobiles!). Since the output from a servlet remains constant through time (a text array like JSON, or a binary blob), your server's code doesn't need to change, and it's easy to identify the servlet that needs updating. All this parallel processing will also greatly reduce the load on your server, saving you a lot of money on hardware and bandwidth. Additionally, Google Spreadsheets, if published, can provide a direct RSS or ATOM feed.

So, first let's manually create a spreadsheet. For this example I created one called "spreadsheetDatabase" and then set its access to "public on the web" so it gets a public URL; you can be more restrictive and allow only specific users to interact with the spreadsheet. For now, you only need the "Key-ID" of the spreadsheet; you get it from the URL in your browser while you have it open. In our example, the key is the part highlighted in red.

https://docs.google.com/spreadsheet/ccc?key=0AvRClkFMLkcpdDZ4VHlGd016cTQ1dUo4LXVaenhYSWc#gid=0

Now we need to pre-fill the spreadsheet with the names of the fields we want to use. It's not necessary for the script to work, but it will make your spreadsheet DB as readable as any normal spreadsheet; it can also be used to automatically apply headers to charts and reports based on the spreadsheet. For our example we will only need a field for "name" and a field for "comment". It's a best practice to delete all the unused rows and columns from your spreadsheet, so it stays as small as possible. Our example will look like this:


So far no sweat right? Ok, then let's hit the code!

So, first things first: you need to create a new Google Apps Script project. In it, we are going to need 2 functions: one to create and deliver the webapp's U.I., and another to send the data to Google Spreadsheets.

Let's start with the U.I. For this we are going to use Google Apps Script U.I. Services; our code will look like this:
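A sketch of what that UI function can look like with the (now legacy) UI Services; the widget names and layout are just one plausible shape, not the exact original code:

```javascript
// Serve the webapp's UI: a name box, a comment area and a send button.
function doGet() {
  var app = UiApp.createApplication().setTitle('Comments');
  var panel = app.createVerticalPanel();
  var name = app.createTextBox().setName('name');
  var comment = app.createTextArea().setName('comment');
  var button = app.createButton('Send');

  // The server handler calls sendData(e); adding the panel as a
  // callback element makes the widgets' values travel inside e.
  var handler = app.createServerHandler('sendData').addCallbackElement(panel);
  button.addClickHandler(handler);

  panel.add(name).add(comment).add(button);
  app.add(panel);
  return app;
}
```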


Note that sendData is the name we chose for the function() that we will use to send our data to the spreadsheet. Let's see how it looks for our example:
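One possible shape for sendData is sketched below (the spreadsheet key is a placeholder for your own "Key-ID"; .insertRowAfter() and .appendRow() are the Spreadsheet service methods the next paragraphs discuss):

```javascript
// Receive the event object from the UI handler and store the values.
function sendData(e) {
  var sheet = SpreadsheetApp
      .openById('YOUR-SPREADSHEET-KEY')   // placeholder key
      .getSheets()[0];

  // Keep the sheet at the exact size we need, then append the data.
  sheet.insertRowAfter(sheet.getLastRow());
  sheet.appendRow([e.parameter.name, e.parameter.comment]);

  // Confirm to the user that the information was sent.
  var app = UiApp.getActiveApplication();
  app.add(app.createLabel('Thanks! Your comment has been sent.'));
  return app;
}
```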


Now let me explain a little more about what's going on in that last piece of code... First, the e you see in sendData(e) is called an event object; we use it to pass information from one function to another.

We start with the .insertRowAfter() method to add a new row to the sheet (so the sheet always uses exactly the number of cells we need).

Then we use the .appendRow() method to insert the data carried by the event object (e) into the new row.

Then we open a popup telling the user that the information has been sent. This is very important, both for the user to confirm that the information was sent, and to give the spreadsheet time to properly process the new row.

Important note: In this scenario, Spreadsheets are not very good at handling multiple insertions at the same time; ScriptDB (and JSON) is a better alternative, as described in this other article.

Now let's see it all together:


Tip:  To save you the trouble later, go to File -> Upgrade authorization experience...

Now let's give it a first run so it gets your authorization to access Spreadsheets.

And finally it's time to deploy the app, to do so go to: Publish -> Deploy as Webapp...


Since it's the first time, you are going to need to Save a new version (you can give it any name you like, tip: try to be descriptive with your version names).

Then you need to choose the identity the app is going to use when making requests. In this case it needs to run as me, so the information is sent in my name (and with my permissions!). If you choose to run the app as the user accessing it, then the spreadsheet will either need to be publicly editable, or only users with edit rights will be able to use the webapp. Pretty convenient, huh?

And last but definitely not least, we need to set who has access to the app. Only myself works great for private apps and beta testing. Anyone means that any user with a Google account can use the app; this is important if you need to know the user's identity, or if your app needs to act on behalf of that user. And finally, with Anyone, even anonymous, the app will need to run as you to get access rights to Google services (or you can use a publicly editable spreadsheet).

When you hit the Deploy button you'll get 2 different URLs: the Current web app URL, which is the published version of your webapp (it serves the code saved under the project version tag), and a special developer link that says Test web app for your latest code, which runs the current version of your script. This way you can make (and save) changes to the code, and once you get the desired behaviour, you save a new version and update the published version.

Users will get an instant update of the code when you change the published version! Did something go wrong with the latest update to your code? Don't worry, you can always go back to a previous version until you make the necessary changes and save a new one. You also get a special link to disable the webapp, so you can quickly switch it on and off if necessary.

And now it's finally time for our first run! Let's take a look at it:


It's probably not so flashy, but it works like a charm, plus you can always use HTML services (for advanced users) or embed the Webapp on a Google Site to add some styling.

Why don't you try out our example and leave me a comment? Here you can see the spreadsheet that stores all the data; remember it gets updated every 5 minutes, so feel free to come back later to see your comment on this post!


Remember that besides the embeddable iframe you saw before, Spreadsheets can also be served as (click on the name to launch that version's feed!):









So, I hope you liked using Spreadsheets as a database with Apps Script. Feel free to post your doubts, comments, ideas, etc. at the bottom of this article.

Happy Coding!

July 25, 2013

How to easily embed (iframe) Google Docs on your Web Pages!

Google Docs are pretty neat: they save as you type, you can easily share them with other gDrive users, and you can even use them as a collaboration platform, with up to 10 people working on a document at the same time!

But sharing a Google Doc can go beyond simply sharing it with other gDrive users. If you go to:
File → Publish to the web...


And click the Start publishing button, you'll get a Document Link and an Embeddable Code.


The Document Link is basically a special webpage that shows the contents of your Google Doc, and it gets automatically updated every 5 minutes! So anyone with the link can see it in their web browser, no extra software needed! And don't worry: unless you specifically allow public editing, no one can edit or delete your document.

This is very convenient, and to make it even more convenient, daddy Google spoils us and added a few gadgets at the bottom so you can easily share your new and super cool Doc via G+, Gmail, Facebook or Twitter and save some precious time!

But if you feel this is way too plain for your standards, there are fancier ways to use this feature: you can embed the Doc in your webpage or blog thanks to iframes. You can show your Doc in 2 ways, with or without the "embedded" tag:
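Both variants use the same published-document URL; the only difference is the embedded parameter (the document id below is a placeholder for your own):

```html
<!-- Without the embedded tag: shows Google's header and footer -->
<iframe src="https://docs.google.com/document/pub?id=YOUR-DOC-ID"
        width="600" height="400" frameborder="0"></iframe>

<!-- With the embedded tag: just the document's content -->
<iframe src="https://docs.google.com/document/pub?id=YOUR-DOC-ID&amp;embedded=true"
        width="600" height="400" frameborder="0"></iframe>
```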

This is how it looks without the embedded tag:


Note that when the embedded tag isn't active, the background of the document is transparent and it shows the Doc's name in a special header, plus a footer from Google.

And this is how it looks with the embedded tag:


Note that when the embedded tag is active, the iframe height only affects the container, not the document itself, so a scrollbar appears in case the document needs more space; the advantage is that you get it without the document's title and footer.

Feel free to change the width and height to best fit your needs, you can also choose to use it with or without a frameborder.

So, what's the advantage of using an embedded Doc inside a webpage? Well, they are great for content you wish to update easily: it's way easier to update a Google Doc than a normal webpage, and you can even do it from your mobile... You can also allow specific people to edit the Doc, and the copy on your site will be updated automatically. You can even place different documents on a single page, each one with access for different users, and even add some that can be edited by anyone!

Doesn't all this dynamic content sound appealing enough? Well, you can add some CSS3 goodness to your site and make those Docs look like announcement boards with cool animations and/or 3D effects... Also remember that with CSS3, all these special iframes can get those effects automatically; you just need to plan your code carefully.

Use Google Drawings instead of Docs and you might find a new way to create easily updated banners for your pages...

Hope you liked this article, feel free to post your comments, thoughts or share your ideas at the bottom of the article. Happy coding!

July 24, 2013

A developer's adventure at Google IO 2013 -part II-

by +Mauro Solcia (Smokybob)

Part II

Google Chrome is growing in usage and features. New picture and video formats/APIs were introduced this year with WebP and VP9, high-quality formats with new compression that provide the same quality with less bandwidth usage; but the main focus is on making Chrome an ecosystem where developers can build Packaged Apps with the same web technologies used for normal online web apps, but with additional features to work offline and direct access to the hardware, while staying OS independent.

The focus on Packaged Apps is supported even more by the fact that all the IOers got a free Chromebook Pixel, which sells for $1,300, way more than what we paid for the conference.

As anticipated before, many were not so happy about this gift; many said that it's not possible to use it as a developer machine except for web development, and that even there it's difficult; that it's only a browser, and that even for an "advanced consumer user" it's limited.

This is all lack of knowledge!

Many Googlers and some IOers that use Chrome 80%+ of the time were gladly showing others that they were only partially right about development, and that there are a lot of alternatives already in the Chrome Web Store, with more in development.

For example, at the time of the I/O there wasn't an offline IDE; a couple of weeks after, one popped out at an alpha stage, but stable enough to build a simple packaged app.

WeVideo and other web companies had sandboxes to show the power of their webapps on Chromebook Pixels.

At the end of the I/O many took back their statements and are now really happy with the Chromebook Pixel.

I was happy the first day and helped out anyone with their “migration” and after a month I keep getting happier about it, and about Chrome OS.

That was the news for developers shown at the keynote, but it's only around 50% of all the new features for developers.

Now, the new "consumer" features.


41 new features released in a single batch!

The bigger ones were the following:


  • Full Redesign: from a 1-column to a responsive design with up to 3 columns, and the posts are now more like Google Now's cards.


  • Photo Autoenhance: using the power of Google's servers, every photo we upload gets enhanced, but we can switch back to the original anytime we like. As a casual photographer I really like this feature; my photos are crisper and more beautiful, with the lighting right and more color.


  • Photo Autoawesome: if we burst-shoot photos of the same place, Google Plus is able to understand it and create an awesome .gif for us from the photos we've taken.



Finally “one app to rule them all!”.

Google Talk, Hangouts and Google Plus Messenger are now one app with additional features while still keeping the specific ones from the single apps.

Sadly the XMPP protocol is discontinued, and the apps (like Pidgin or imo) that supported Google Talk now have to rebuild part of their code.


A new design with more "cards", like the ones in Google Now, and the new All Access feature, which enables access to all the music in the Google Play Store for $9.99/month and lets you create radios starting from an artist/album/song.

Unfortunately, for now it's a US-only feature. There are some workarounds to use it from outside the US; in fact I'm using it heavily even though I'm Italian and currently in Italy.


If you have an Android device you might have already tried this experience with Google Now; it was presented at the I/O, and last week it became available in a lot of non-US countries.

Now we can go to google.com (or .it or whatever) and start a voice search; after that, the search will keep listening for other voice inputs and search again for us.

Additionally, the Knowledge Graph is used; this means that if I ask Google "Show me the photos I took in San Francisco", it will show me a result with a link to all the Google Plus photos I took back in San Francisco.


An update to the Google Maps web version can be previewed at http://maps.google.com/preview, after being enabled to use the preview version.

This new version uses WebGL technology to render all of its content, obtaining astonishing rendering speed and optimized bandwidth usage.

The new layout is similar to the Google Maps app on iOS.

The map is now "more personal": the search results use Knowledge Graph information to display first the information most relevant to us and to what our Google Plus friends prefer.

For example, if I search for a restaurant nearby, the first results are Mexican restaurants, because I have been to a lot of Mexican restaurants lately, and some Indian ones, because my friends have rated Indian restaurants highly in the last few weeks.

Another cool change, which I tried in SF, is the new way public transportation is displayed.

After getting results with public transportation, you can "drill down" to a more detailed view where you can see the schedule and the different waiting times for the proposed alternatives.

A couple of Easter eggs, or at least features that have, in my opinion, no real use, are the real-time clouds and the space view.

To see those two (I have to admit they are really cool), you have to switch to Earth mode and zoom out until you see the full globe.

If you look at the background behind the globe, the stars and planets are in the right position; you can then rotate the Earth and see the day and night line, as well as the different constellations changing.

The clouds you see on the globe are in near real time; you can zoom in a little and still see them over your region. Real-time enough to be a cool feature, but I'll never try to make a forecast out of them :-)

With this, the news from the keynote has ended, but more developer-related stuff was presented on the other days.


Google Glass had a lot of sessions related to what can be done now and what to expect from it.

I personally have skipped all of them and watched some of them the next week, BUT I WAS ABLE TO TRY THEM!

My experience

It was a brief experience, I had them on for 10 to 15 minutes, so I can't really say how they feel on a daily basis, but I can say they are better than I expected.

First of all, forget everything you know about the way you interact with technology and the way a device provides content back to you; then you can start appreciating Google Glass in its entirety.

I wear prescription glasses, but only for the left eye, and mainly to not overstress the right one, which tries to compensate for the left, so I had no problem using the device.

They are light; even with the battery and all, they weigh less than my prescription glasses.

As soon as I got the device on my nose, I tried to look at the "screen", and moving my head up I accidentally turned it on, bringing that well-known clock card into sight.

Behind it I could see the Googler who was guiding me through the Glass experience, and I could simply switch my focus to her; the card was still there, but not in the way of my sight. Plus, the image looks greyish, with opacity at about 30%, but when you focus your sight on it, it's more like 90%, so you can see the card clearly.

First my guide told me to try the voice commands: Ok Glass, take a picture... nothing happened. Another try, a little louder... nothing again. One last try, even louder, enough to be heard by people 5 meters away in a loud environment like Google I/O, and then a picture was taken.

Sadly I couldn't share it with my account, as I was in guest mode and you need to set up the sharing contacts with the MyGlass app.

Let's switch back to the touch commands. It feels a little unnatural at first to swipe from back to front to get the last timeline card; I think it's mostly because our brain translates the gestures as if we were doing them on a phone. After a couple of minutes you get the hang of it, and the "oh crap, wrong gesture" moments drop to nearly none.

After playing around a bit with the menu, it’s time to try the bone conducting sound and watching a video on the tiny screen.

The sound is low quality, but just right for a phone call or a podcast, not for listening to music. It's loud, but not so much that you can't hear the person in front of you talking; if needed, you can close your ears and the sound is amplified a little.

The video plays well and it's clear, but I could still see behind it; in this case it's a bit difficult to focus on what is behind it, while it was rather simple when a photo was shown.

Sadly I was so excited that I didn't think of trying navigation directions, but a friend tried them the day before and told me it's like the normal cards, except that when needed the card appears a little more "dense" to call your attention to the next turn you have to make.

Privacy concerns

Taking a photo by voice is quick, but anyone nearby can hear you, so no stealth mode.

Using the touchpad goes like this: 1. wake up the device, 2. access the menu, 3. go to the "take picture" menu item, 4. aim and shoot. If you are fast you can do all this in around 2 or 3 seconds, while the screen is on and anybody can see you looking up and swiping your temple like a madman.

There is a quick photo button, but after the photo is taken the screen comes up showing it, so not so stealthy.

Anybody can be a lot more stealthy while casually playing with their phone.

Draw your own conclusions.

Prescription lenses

We were reassured that Google is working with some big names in the prescription lens business to provide a version for people with prescriptions (I think 70% of the first batch of customers will wear glasses). At the I/O I saw a couple of models, one round and one more squared, but one thing's for sure: we'll need to buy a specific version with our prescription lenses, because the frame is part of the device itself.

Some hacks are in progress to mount the Explorer edition on an existing pair of glasses; let's see what our opinion will be when they put the device on the market for everyone.


Not a lot of new features for Google Drive, but the one presented opened up a lot of possibilities.

First of all, the hugely requested ability to access document-format content from Apps Script; now that we have seen the API, we know why it took such a long time.

The Document API makes it possible to access the data in a document DOM-style, with child items and a structure as complex as our documents can be, with formats (bold, italic, font size, etc.), images and other "embedded" objects (like Drawings).

For more info you can see the Session I attended and the Official Documentation.

Another interesting feature that I'm happy about, and that I'll be using next week for work, is the dynamic creation of Forms and the new updates to Forms.

Sadly I have yet to take a deeper look at the session and documentation, but I've talked with a couple of devs who worked a bit with it, and they were pretty excited.

Feel free to leave your comments at the bottom of the article.

June 15, 2013

A developer's adventure at Google IO 2013 -part I-

by +Mauro Solcia (Smokybob)

I was lucky enough to attend the Google I/O conference this year... This year the conference was less about "incredible wonders" for the general public, and more focused on the tools Google provides to developers to build incredible wonders and attempt "moon shots". Some cool new "consumer side" features were presented, but nothing as big as last year's Google Glass presentation.

At first, many at the conference were a little disappointed, especially when they were told they were going to get a Chromebook Pixel, as many were anticipating something like a Nexus 5 or a new tablet.

Some people called the Chromebook Pixel "a browser on good HW"; while this is partly true, that statement shows that some attendees did not use Chrome to its full potential, with all the apps and extensions available in the Chrome Web Store.

Google Glass sessions were always so packed that you had to attend the presentation before the one you were interested in just to get a seat; after the first day, many chose to follow the streaming from outside or, like me, to go to other sessions and watch the Google Glass sessions the next week.


It was my first time at a big conference like this; while you are waiting for the keynote to start, everyone is whispering with the people around them about what could be presented in a few moments. This year, a lot of them were wrong.

I'm not going to list all the things presented during the keynote; I think many of you already saw the video, streamed live or the day after, so I'll focus on what I and the people near me felt about some of the new products/features.


A lot of Android Developer were at the I/O and this new feature set was well accepted because it provides a simpler and more optimized way to access Location information.

I have no direct experience with location-based Android apps, but I talked with some developers with experience in this field, and they confirmed it was a lot of work to balance good location information and battery consumption.

The best thing about these new APIs is that they are integrated into Google Play Services, so they are available from Android 2.2+; we are talking about 80 or 90% of the active devices.


Here there are 3 big pieces of news.

The first is upstream messages: now the device can send messages to the server through the same service the server uses to send messages to the client. This solves a lot of problems: we no longer need a dedicated service and port to send data to our server while relying on Google Cloud Messaging only for "heads up" messages from the server.

The second is that notifications are now synced across devices, because the cloud messaging queue is now associated with a user ID and not with a device; or at least, the developer can now have different devices notified for the same "queue".

The third was not shown in the keynote, but it's well explained in this session: support for the XMPP protocol, which enables applications to open a persistent channel to stream messages in near real time with our server, while leveraging Google Cloud Messaging.

I'm personally really happy about the last one, as many big clients are not so happy to open ports and authorize external addresses on the firewall, but it's a lot simpler to have them authorize a single address with "google" in it and then use that same address for all the different applications we are going to build.


Weeks before Google I/O, after the MyGlass teardown, some news about this new API leaked, but we could only guess what the API was capable of.

The new API is a full infrastructure for managing games on Android, from the ability to store scores and progress, so that a user can play our game on their phone and then continue on their tablet once back home, to multiplayer management, with many of the surrounding problems handled by the API.

I’m not a game developer, but some of the people I roamed around Google I/O with are, and we discussed at length the problems behind multiplayer management: connection handling, synchronization between devices, and so on. A lot of code and a lot of headaches for developers and server managers.

There were not many sessions about this, but there is one very informative talk that can be found here.


After seeing that Google had made an optimized IDE for Dart based on Eclipse, I was hoping to see a counterpart for Android, and here we have Android Studio, based on IntelliJ.

The full introductory session can be found on the Google I/O Playlist here.

I have yet to try it fully; there are a lot of changes, some at the core (the new Gradle build system), others meant to simplify the development process (new UI templates, new plugins, etc.).
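To make the Gradle change concrete: a new Android Studio project is described by a small build file instead of Eclipse project metadata. This is a hedged sketch of what such an early build.gradle looked like; the plugin and SDK version numbers are illustrative, not a recommendation:

```groovy
// Illustrative build.gradle for an early Android Studio project;
// version numbers are examples from mid-2013, not a recommendation.
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        // early versions of the Android Gradle plugin
        classpath 'com.android.tools.build:gradle:0.4.+'
    }
}

apply plugin: 'android'

android {
    compileSdkVersion 17
    buildToolsVersion '17.0.0'

    defaultConfig {
        minSdkVersion 8
        targetSdkVersion 17
    }
}
```

The whole build being declared in one file is a big part of why imports from Eclipse projects can be bumpy: the old metadata has to be translated into this model.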

If you already developed for Android with IntelliJ, I was told, you feel at home with shiny new features; if you developed with Eclipse, the IDE feels faster and lighter, but it’s different and you need some time to adapt to it.

The main problem I’ve faced is importing apps written with Eclipse: if they are really simple, the import works flawlessly, but if the app gets a little more complex, some things don’t work, e.g. ActionBarSherlock implementations need to be more or less rebuilt from the ground up.

My personal advice is to branch your current apps and start working with the new IDE; quick fixes and current implementations should still be done with Eclipse, but a full switch by the end of the year is a good idea, as some features might not be available in the ADT plugin for Eclipse in the future.


Some really anticipated new features were added to the Google Play Developer Console; for those who don’t know what it is, that’s where developers manage their published apps.

There are three major features:

Alpha and Beta versions: we can now upload alpha and beta versions and release them to a set of trusted users.

Staged Rollout: until now, when one deployed a new version, every user got it as soon as possible. With 1,000 active users that’s not a big problem, but when you have several thousands and the app connects to your server, it could be. Now we can release the new version to a percentage of users and increase that percentage over time, until we are confident that nothing will break.
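The staged-rollout idea can be sketched as a deterministic hash bucket: each user id maps to a fixed bucket from 0 to 99, and the update is served once the rollout percentage covers that bucket, so raising the percentage only ever adds users. This is purely an illustration of the concept (Google has not published how the Play Store assigns users), and the class and method names are mine:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class StagedRollout {
    // Deterministically map a user id to a bucket in [0, 100).
    static int bucket(String userId) {
        CRC32 crc = new CRC32();
        crc.update(userId.getBytes(StandardCharsets.UTF_8));
        return (int) (crc.getValue() % 100);
    }

    // A user receives the update once the rollout percentage
    // covers their bucket; increasing the percentage never
    // removes anyone who already got the new version.
    static boolean receivesUpdate(String userId, int rolloutPercent) {
        return bucket(userId) < rolloutPercent;
    }

    public static void main(String[] args) {
        for (int pct : new int[] {1, 10, 50, 100}) {
            System.out.println("user-42 at " + pct + "%: "
                    + receivesUpdate("user-42", pct));
        }
    }
}
```

Because the bucket is a pure function of the user id, each user sees a consistent decision across checks, which is the property a staged rollout needs.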

Optimization Tips: using all the data related to the app, the Developer Console can point out areas of improvement, from tablet support to adding a specific language because apps similar to ours are seeing growing installs in a specific region.

Other features added to the Developer Console are:

Metrics: it’s a kind of extension of Google Analytics, targeted specifically at Android apps.

Revenue Graphs: improved graphs for paid apps, with new details to help developers improve their revenues.

App Translation Service: from the Developer Console we can buy professional translations of our app’s strings.

Stay around to read the next part of A developer's adventure at Google I/O 2013, where you can read about:

  • Google Chrome updates.
  • Google plus updates.
  • Google Hangouts.
  • Google Play Music.
  • Google Maps.
  • Google Glass.
  • Google Drive & Google Apps Script.