by +Mauro Solcia (Smokybob)
Google Chrome is growing in usage and features. New picture and video formats/APIs were introduced this year with WebP and VP9, high-quality formats with new compression that provide the same quality with less bandwidth usage; but the main focus is on making Chrome an ecosystem where developers can build Packaged Apps with the same web technologies used for normal online web apps, but with additional features that let them work offline and access the hardware directly while staying OS independent.
The focus on Packaged Apps is underscored even more by the fact that all the I/O attendees got a free Chromebook Pixel, which sells for $1,300, way more than what we paid for the conference.
As mentioned before, many were not so happy about this gift; many said that it's not possible to use it as a developer machine except for web development, and even there it's difficult; that it's only a browser, and that even as an "advanced consumer user" it's limited.
This is all a lack of knowledge!
Many Googlers, and some attendees who use Chrome for 80%+ of their work, gladly showed others that they were partially right about development, but that there are already a lot of alternatives in the Chrome Web Store, with more in development.
For example, at the time of I/O there wasn't an offline IDE; a couple of weeks after, one popped out at an alpha stage, but stable enough to build a simple packaged app.
WeVideo and other web companies had sandboxes to show the power of their web apps on Chromebook Pixels.
By the end of I/O many took back their statements, and they are now really happy with the Chromebook Pixel.
I was happy from the first day and helped out anyone with their "migration", and after a month I keep getting happier about it, and about Chrome OS.
That was the news for developers shown at the keynote, but it's only about 50% of all the new features for developers.
Now for the new "consumer" features.
For Google Plus, 41 new features were released in a single batch!
The bigger ones were the following:
- Full redesign: from a single-column layout to a responsive design with up to 3 columns, and posts now look more like Google Now's cards.
- Photo Auto Enhance: using the power of Google's servers, every photo we upload gets enhanced, but we can switch back to the original anytime we like. As a casual photographer I really like this feature; my photos are crisper and more beautiful, with better lighting and more color.
- Photo Auto Awesome: if we take burst shots of the same place, Google Plus is able to detect it and create an awesome .gif for us from the photos we've taken.
For messaging, finally "one app to rule them all!".
Google Talk, Hangouts and Google Plus Messenger are now one app with additional features, while still keeping the specific ones from the individual apps.
Sadly, the XMPP protocol is discontinued, and the apps (like Pidgin or imo) that supported Google Talk now have to rebuild part of their code.
Google Play Music gets a new design with cards like the ones in Google Now, and the new All Access feature, which enables access to all the music in the Google Play Store for $9.99/month and lets you create radio stations starting from an artist/album/song.
Unfortunately, for now it is a US-only feature; there are some workarounds to use it from outside the US, and in fact I'm using it heavily even though I'm Italian and currently in Italy.
If you have an Android device you might have already tried this experience with Google Now: conversational voice search was presented at I/O, and last week it became available in a lot of non-US countries.
Now we can go to google.com (or .it, or whatever) and start a voice search; after that, the search will keep listening for other voice inputs and search again for us.
Additionally, Knowledge Graph is used: this means that if I ask Google "Show me the photos I took in San Francisco", it will show me a result with a link to all the Google Plus photos I took back in San Francisco.
An update to the Google Maps web version can be previewed at http://maps.google.com/preview, after being enabled to use the preview version.
This new version uses WebGL to render all the content, obtaining astonishing rendering speed and optimized bandwidth usage.
The new layout is similar to the Google Maps app on iOS.
The map is now "more personal": search results use Knowledge Graph information to display first the information most relevant to us and to what our Google Plus friends prefer.
For example, if I search for a restaurant nearby, the first results are Mexican restaurants, because I've been to a lot of Mexican restaurants lately, plus some Indian ones, because my friends have rated Indian restaurants highly in the last few weeks.
Another cool change, which I tried in SF, is the new way public transportation is displayed.
After getting results with public transportation, you can "drill down" to a more detailed view where you can see the schedule and the different waiting times for the proposed alternatives.
A couple of Easter eggs, or at least features that have, in my opinion, no real use, are real-time clouds and the space view.
To see those two (I have to admit they are really cool), you have to switch to Earth mode and zoom out until you see the full globe.
If you look at the background behind the globe, the stars and planets are in the right positions; you can then rotate the Earth and see the day and night line, as well as the different constellations changing.
The clouds you see on the globe are in near real time; you can zoom in a little and still see them over your region. Real-time enough to be a cool feature, but I'll never try to make a forecast out of them :-)
With this, the news from the keynote has ended, but more developer-related stuff was presented on the other days.
Google Glass had a lot of sessions about what can be done now and what to expect from it.
I personally skipped all of them and watched some the next week, BUT I WAS ABLE TO TRY THEM!
It was a brief experience (I had them on for 10 to 15 minutes), so I can't really say how they feel on a daily basis, but I can say they are better than I expected them to be.
First of all, forget everything you know about the way you interact with technology and the way a device provides content back to you; then you can start appreciating Google Glass in its entirety.
I wear prescription glasses, but only for the left eye, and mainly to not overstress the right one, which tries to compensate for the left; so I had no problem using the device.
They are light; even with the battery and everything, they weigh less than my prescription glasses.
As soon as I get the device on my nose, I try to look at the "screen", and moving my head up I accidentally turn it on, bringing that well-known clock card into sight.
Behind it I see the Googler who is guiding me through the Glass experience; I can simply switch my focus to her, and the card is still there but not in the way of my sight. The image looks greyish, at around 30% opacity, but when you focus your sight on it, it's more like 90%, so you can see the card clearly.
First my guide tells me to try the voice commands: "Ok Glass, take a picture"... nothing happens. Another try, a little louder... nothing again. One last try, even louder, enough to be heard by people 5 meters away in a loud environment like Google I/O, and then a picture is taken.
Sadly I couldn't share it with my account, as I was in guest mode and you need to set up the sharing contacts with the MyGlass app.
Let's switch to the touch commands. It feels a little unnatural at first to swipe from back to front to get the last timeline card; I think it's mostly because our brain translates the gestures as if we were doing them on a phone. After a couple of minutes you get the hang of it, and the "oh crap, wrong gesture" moments drop to nearly none.
After playing around a bit with the menu, it's time to try the bone-conduction sound and watch a video on the tiny screen.
The sound is low quality, but just right for a phone call or a podcast, not for listening to music. It's loud, but not so much that you can't hear the person in front of you talking with you; if needed, you can cover your ears and the sound is amplified a little.
The video plays well and it's clear, but I can still see behind it; in this case it's a bit difficult to focus on what is behind it, while it was rather simple when a photo was shown.
Sadly I was so excited that I didn't think of trying navigation directions, but a friend tried them the day before and told me it's like the normal cards, but when needed the card becomes a little "denser" to call your attention to the next turn you have to make.
Taking a photo by voice is quick, but anyone nearby can hear you, so no stealth mode.
Using the touchpad means: 1. wake up the device, 2. access the menu, 3. go to the "take picture" menu item, 4. aim and shoot. If you are fast you can do all this in around 2 or 3 seconds, while the screen is on and anybody can see you looking up and swiping your temple like a madman.
There is a quick photo button, but after the photo is taken the screen comes up with the photo, so not so stealthy.
Anybody can be a lot stealthier while casually playing with their phone.
Draw your own conclusions.
We were reassured that Google is working with some big names in the prescription lens business to provide a version for people with prescriptions (I think 70% of the first batch of customers will wear glasses). At I/O I saw a couple of models, one round and one more squared, but one thing's for sure: we'll need to buy a specific version with our prescription lenses, because the frame is part of the device itself.
Some hacks are in progress to mount the Explorer edition on existing pairs of glasses; let's see what our opinion will be when they put the device on the market for everyone.
Not a lot of new features for Google Drive, but the ones presented opened up a lot of possibilities.
First of all, the hugely requested ability to access document content from Apps Script; and now that we have seen the API, we know why it took such a long time.
The Document API makes it possible to access the data in a document DOM-style, with child items and a structure as complex as our documents can be, with formatting (bold, italic, font size, etc.), images and other "embedded" objects (like drawings).
For more info you can see the Session I attended and the Official Documentation.
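As a small sketch of that DOM-style access (the document ID is a placeholder, and this of course only runs inside the Apps Script environment), you can walk the body's children and inspect each element's type:

```javascript
// Google Apps Script: traverse a document's body like a DOM tree.
function logDocumentStructure() {
  var doc = DocumentApp.openById('YOUR_DOC_ID'); // placeholder ID
  var body = doc.getBody();

  for (var i = 0; i < body.getNumChildren(); i++) {
    var child = body.getChild(i);
    // Each child reports its type: PARAGRAPH, TABLE, LIST_ITEM, etc.
    Logger.log(child.getType());

    if (child.getType() === DocumentApp.ElementType.PARAGRAPH) {
      // Paragraphs can be edited as rich text (bold, italic, font size...).
      Logger.log(child.asParagraph().getText());
    }
  }
}
```

The same pattern drills further down: paragraphs contain text runs and inline images, tables contain rows and cells, which is what makes the structure "as complex as our documents can be".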
Another interesting feature that I'm happy about, and will be using next week for work, is the dynamic creation of Forms and the new updates to Forms.
Sadly I have yet to take a deeper look at the session and documentation, but I've talked with a couple of devs who worked a bit with it, and they were pretty excited.
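From what I've seen so far, dynamic creation of Forms from Apps Script looks roughly like this (the form title and questions here are made up for illustration; runs inside the Apps Script environment):

```javascript
// Google Apps Script: build a form programmatically with the Forms service.
function createFeedbackForm() {
  var form = FormApp.create('Event Feedback'); // creates a new form in Drive

  form.addTextItem().setTitle('Your name');
  form.addScaleItem()
      .setTitle('How would you rate the event?')
      .setBounds(1, 5); // 1 = poor, 5 = excellent
  form.addParagraphTextItem().setTitle('Any other comments?');

  // URL to share with respondents.
  Logger.log(form.getPublishedUrl());
}
```

Being able to generate and update forms from a script is what makes this usable for work scenarios like the one I mentioned, instead of building each form by hand.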
Feel free to leave your comments at the bottom of the article.