Monday, June 20, 2016

The future of AI is on the cloud

(Written by Cecilia Abadie. Originally posted by MonkeyLearn on Oct 16th, 2015.)
Trends are a great way to predict the future, and predicting the future gives you a chance to be ahead of the curve and make the right decisions for the future of your product or service.
Back in 2008, Nicholas Carr predicted that the future of the internet was the cloud. He could make this prediction by looking at history and understanding previous trends.
He argued that computing would mirror the shift that happened with electric power a hundred years ago. It used to be that companies had to generate their own power to run all their machines, but as soon as we had the network to deliver that power, everybody got rid of their local operations and plugged into the network. Nicholas predicted that we’d see the same with the internet as it would become a computing platform instead of just an information platform.
In his book "The Big Switch", Nicholas tells the story of how Edison was the first to build centralized utilities for delivering electricity over a network, and he argues that the server farms built by Google and Amazon are very similar to Edison's utility plants, which provided power and sent a monthly bill.
Today, nobody would question the exodus to the cloud, with more and more companies moving their hosting to Amazon, Rackspace, Azure and a variety of other cloud providers.

Artificial Intelligence as a service

Recently, Kevin Kelly, in an AMA with Tim Ferriss, answered the question "What future technology do you think will have the most impact on our lives that we don't see coming?". After translating the question to "What's the next big thing that one might invest either money or time into?", he answered: "The thing I am most excited about is Artificial Intelligence as a service, something that you plug into and get, not something necessarily that is roaming around in a robot head, but it's closer to a web service or even electricity, where you just purchase it and then use it in your product or service."
Kevin goes on to draw the same parallel that Nicholas Carr drew in his book: "Like electricity a hundred years ago, AI will transform everything, and there are tons of opportunities to take this utility and make it useful and into a business of some sort".
He takes it a bit further and ponders what the consequences of this thought exercise would be:
"There weren't that many big companies that were generating electricity and that made money. It was more about the companies who made appliances, services and gadgets that depended on electricity; that is where the wealth was made. So, I think it's the same thing: there might be only a few companies creating the AI that's being sold, but there are thousands of different opportunities to take that commercial-grade AI that will be coming along very soon and use it to make something new and exciting that hasn't been made before."
Furthermore, John Henderson builds on this idea and presents areas in which AI could bring new opportunities, following Kevin Kelly's "Take X and add AI" principle. The list includes medical diagnostics, scheduling meetings, learning a language, journalism, recruiting, and so on.

Machine Learning on the Cloud

Machine Learning, one of the most extensive current incarnations of AI, is no exception to this process of "cloudization" of AI. Complex algorithms are refined by a specialized few and can be used cheaply by everybody who needs them, thanks to companies like MonkeyLearn betting on the future of Machine Learning in the cloud.
MonkeyLearn is at the forefront of this shared intelligence, providing highly scalable Machine Learning in the cloud and allowing its bright community of developers to share their public modules for others to reuse.
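To make the "AI as a service" idea concrete, here is a minimal sketch of how an app might consume cloud Machine Learning over a plain HTTP API. The endpoint URL, module name, token, and response shape below are hypothetical placeholders, not any provider's actual API:

```python
import json
import urllib.request

# Hypothetical cloud ML endpoint: a stand-in for any "Machine
# Learning as a service" classifier, not a real URL.
API_URL = "https://api.example-ml-cloud.com/v1/classify/my-module"
API_TOKEN = "YOUR_TOKEN"

def build_request(texts):
    """Package a batch of texts as the JSON body the (hypothetical)
    service expects."""
    body = json.dumps({"text_list": texts}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": "Token " + API_TOKEN,
            "Content-Type": "application/json",
        },
    )

def extract_labels(response):
    """Pull the top predicted label for each input text out of an
    assumed response shape like:
    {"result": [[{"label": "Sports", "probability": 0.92}], ...]}"""
    return [candidates[0]["label"] for candidates in response["result"]]
```

The point of the sketch is that the hard part, training and serving the model, lives behind the URL; the app only packages requests and reads labels back, the same way an appliance just plugs into the electric grid.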
Humans and computers working together will enable new solutions, and maybe even new industries, that were never possible before. Imagine the possibilities when the hard part is out of the equation and creativity and vision are all it takes. It makes me wonder which gadgets and services using cloud machine learning will be part of our lives in the future.
What could you create using cloud Machine Learning that would have been impossible to build in the past?

Wednesday, April 02, 2014

Voice is the new touch: Amazon Fire TV

Last August, after three months of wearing Google Glass, talking to it and having it talk back to me, I had an epiphany, a revelation, and I wrote this post: Voice is the new touch, or in other words, in the near future, every piece of software and every piece of hardware will include or add a new layer of voice.

In the same way customers now demand touch-screen apps for phones and tablets, after experiencing this return to natural interfaces such as voice, customers will expect and want voice everywhere. They'll demand voice, and we developers will have to add a new layer of voice to all of our apps.

Amazon Fire TV is the perfect example of what I meant by "Voice is the new touch" at the hardware level.

Augmented Reality or Virtual Reality

Virtual Reality or Augmented Reality? I say both!

While the whole tech world rebelled against the purchase of Oculus Rift by Facebook, I rejoiced in knowing that the future of Virtual Reality is nearer.

In a clear bet on the future of social 3D virtual spaces, Facebook bought Oculus Rift at such an early time and at a low price. I trust the Oculus Rift team enough to believe that, as part of the deal, they guaranteed their independence and their roadmap, so they can continue at an accelerated pace towards their goals.

Even as immature reactions, such as that of Minecraft's CEO, started to happen, I stick to believing in Oculus Rift and in the vision Facebook is showing by betting on such a future, in order to stay relevant in the future of how we live and communicate.

Augmented Reality, currently represented by Google Glass (although it is not technically AR), and Virtual Reality, with front-runner Oculus Rift (possibly closely followed by Sony), are the two most exciting future trends in tech devices. They are two different ways to compute: one augments our physical reality by overlaying extra data, and the other offers a substitute for our real life, a substitute that might become exceedingly better in some ways.

The old new issue of privacy

We've been talking about the need to legislate privacy in public spaces since 1890, when the first Kodak Brownie cameras appeared.
Maybe this conversation has been going on for too long... I think we're getting to the bottom of it.
Transparency is like entropy: it is an arrow that only goes in one direction. The only way to compensate for losing privacy is by adding more transparency: transparency in our personal lives, in our organizations, and in our governments.

Which side feels more human?

At the end of the awesome SXSW Glass Explorers panel, we Explorers got together for a pic. Here are the two sides of the pic, which really raised the question: which one feels more natural? More human? Could Google Glass help us get technology a bit more out of the way? Judging from this picture, it could...


Friday, January 24, 2014

The thing about Point of View (POV)

I grew up during the times of the unilateral media.

TV was something that was broadcast at you, which basically translated into: shut up and listen.

I personally quit TV 4+ years ago ... TV was the dark era of our times.

With computers we regained our precious, and not too long ago lost, interactivity.

These videos show my press conference about the Google Glass ticket from the point of view of the media, but also from my point of view as the interviewee. This is the new media: you point a camera at me and I point one at you (easily and with low-tech means), and my point of view might even beat yours to the net and be uploaded, uncut, before the official one is.

So, here they are, point of view of media:

And the point of view of myself, the interviewee:

Since I started wearing Google Glass, I feel safer in a subconscious way, with the power to record what happens around me. All I need to defend myself in any situation is the truth, and recording gives others access to my truth. That's one of the values of transparency, something that will one day become second nature for all of us: we'll all gain this extra level of protection, because everything digital is traceable if need be.

After a few "I wish I had recorded when the officer stopped me" and "I wish I had recorded the interview that Good Morning America recorded to contrast it with the lame coverage they did on TV", I finally got it, and am building the habit of getting my shots ready exactly when I need them.

Wednesday, January 22, 2014

My first TWiT episode ever with All About Android

This is one of those things you always wanted to do, and one day you get the message: hey! Come to our show! You're just thrilled, and then you just have an amazing time with people you super admire, like Jason, Gina and Ron. So, what can I say?? Just watch!

TEDxTemecula: The greatest risk of all is not taking a risk

In this talk, I discuss how, for a cat, the strategy of being very fearful (literally, a scaredy cat) worked for centuries, but for us, in the current interconnected era of information, it just doesn't. What works in our days is taking risks. And I tell the personal story of taking risks in my own life.

Towards the end, I share a bit of my philosophy of life being like a game, a game in which collecting all the coins is not the end goal; instead, you go after the bonus points of experimenting and experiencing to the fullest, getting outside of your comfort zone often, and sharing transparently what you believe in.

Talking about wearables with IEEE Alexander Hayes

Talking about wearables with +Alexander Hayes for an upcoming issue of IEEE Technology and Society Magazine, from Canberra, Australia. I love this interview, as it goes a little deeper into the philosophy of wearables and the future of technology in general.

Saturday, January 18, 2014

Free to be a CYBORG

When, in February 2013, at the conclusion of the Mirror API Hackathon, Google gave me a Pioneer prism, I had no idea what challenges being a Pioneer might involve, but I accepted the challenge because it seemed like "a path with a heart," as Castaneda would call it. A path with a heart translates, for me, into a path in which connection, service to others and growth are guaranteed.

Later on, when I got the first ticket in the US for driving with Google Glass, I fastened my seatbelt, because I knew the adventure had begun.

So, what can I say in retrospect?

1. It's ok to wear Google Glass when you drive.

I now know that it is OK for me to drive wearing my Google Glass, which became a natural extension of my body, a key piece of my New Digital Brain. Or, as I like to say it, I can continue to be a CYBORG for most of my day. This really felt like a right to me, and as we gain the ability to record everything we see and to search our extended memory in a simple and natural way (which will come soon), I would be very tempted to make the case that my extended memory should have access to everything my biological memory does. So, on this one, a big sigh of relief!
I doubt any informed officer with a minimal understanding of the vehicle code and an average IQ or above will stop any Glass Explorer just for wearing Google Glass from now on...

2. How about actively using Google Glass while driving?

Regarding actively engaging with and using Google Glass while driving, the vehicle code here is clear: we should not be operating a monitor in front of the driver, with the exception of a vehicle information display, a global positioning display, or a mapping display, among others. The way I interpret this is that it should be OK to use Google Glass for navigation purposes, with awesome apps already here and still to come, such as Drive Safe for Glass. It seems that technically, according to the current law, we should not be using any of the rest of Google Glass's features, including the awesome hands-free features, unless one of these is true:

"27602 (5) (B) The equipment is designed, operated, and configured in a 
manner that prevents the driver of the motor vehicle from viewing the
television broadcast or video signal while operating the vehicle in
a safe and reasonable manner."

I interpret this option as the implementation, most likely by Google X, of something like a driving mode that detects when you are driving and allows the user to optionally block access to content and features not permitted by the law or not safe overall.


"27602 (6) A mobile digital terminal that is fitted with an opaque
covering that does not allow the driver to view any part of the
display while driving"

This second exception would be a built-in or 3D-printed cover (hint, hint, for some Glass Explorers, such as Daniel Ward) for the internal side of the screen, which would allow for hands-free use.

Of course, there's the issue of how, in all practicality, an officer would prove that the device was actually operating beyond the exceptions in the code. Which brings us to the conclusion that new laws will need to be written if Google Glass use is to be banned (more on #5).

3. Having awesome lawyers made all the difference

Having My Traffic Guys help me pro bono was an amazing help, and I'll never be able to thank Will Coincidine and Gabriel Moore enough for it. It really made all the difference, both in the process and in the outcome. Going there alone to represent myself would have put me at a serious disadvantage and would definitely have been nerve-wracking; with them, it seriously was a walk in the park. I had full trust and confidence in following Will's lead in handling the case, and he delivered 100%+ while being super nice every step of the way.
Note: At the same time, it confirms one more of the asymmetries in our social justice systems, which really worries me and makes me sad for those who don't have access to a proper defense, not only in the US but around the world. A proper defense seems to me a basic right to justice that, even when in theory it should be covered for criminal cases, I am not convinced we all get at the same level.

4. The law worked as expected 

The law worked in California exactly the way it should have, and this is a relief once again! Luckily, it's not very often that a regular person gets involved with the legal system, and it's a good thing that, with all its pitfalls, loopholes and asymmetries, it does work as it is supposed to; this case was no exception. I was worried that the officer would say things other than the truth (different from what actually happened; that is, he did say things that were not accurate but completely true from his honest point of view) or that the judge would be partial and not apply the laws as they were supposed to be applied, in which case the case would have been lost before even going to court. I deeply respect both Keith Odle, the officer doing his job in this case (although I obviously think he is uninformed, I don't think this is his fault), as well as Judge Blair, for their professional demeanor.
Yes, even in the case of the speeding ticket, the law is designed to guarantee the defendant's rights during the trial process, and I followed my lawyers' suggestions regarding my defense. I know many people are horrified by the speeding ticket, but in all honesty, we drive very fast in general in Southern California (when I arrived from Chicago, I was scared myself), and I see far more accidents on the highway when traffic is slow and under bad weather conditions than when it's flowing. There are studies that challenge the link between speed and accident occurrence. I personally pay much more attention to flowing traffic than to stagnant, boring traffic.

5. Technology will challenge laws and ethics at an accelerating pace

This incident brings to the table the overall issue of how laws are keeping up with new tech and innovation. It is very scary to have legislators writing about tech they have never experienced. These are not heavy drugs that a legislator would be at risk by trying, so c'mon! That being said, a great sign is what happened in the UK, where an attempt to introduce legislation banning Google Glass while driving seems to have been dismissed for now, after careful review of the tech. Smart! Or, as my British friends would say: Clever!

6. The Google Glass Explorers Community is amazing

One of the most amazing parts of my whole experience as a Google Glass Explorer has been, consistently from beginning to end, the Explorers Community, also known as #glassfamily. In a way, I ended up with the delicate task, which I never volunteered for, of being the first to represent my fellow Explorers on this matter in front of the US legal system. I can just say I did my best, and I am completely honored to be a fellow Explorer. You guys are the best!!

7. Being on the opposite side of the general public's opinion is never fun

All I can say is that there seems to be a huge information gap.
The vast majority of the people who are informed and understand what Google Glass is and how it works, and especially those who have tried it, are on one side. On the other side is everybody else, including the fearful person who is fed by lame journalism.
I do notice that there are fewer haters now than when the ticket was initially announced; hopefully information is leveling the field.

Time is on our side! In a few years we'll all be wearing visors that augment us, expand us and connect us.

I have no affiliation with Google other than being a Google Glass Explorer.

Sunday, January 12, 2014

TEDxOrangeCoast: Resistance is futile

Resistance is futile! My TEDxOrangeCoast talk video is out: Resistance is Futile: Cecilia Abadie at TEDxOrangeCoast. I tell my story as a #GoogleGlass Explorer and talk about the future of Transparency. Check out +Robert Scoble's shower pic at 5:43, where I talk about how Transparency is anti-fragile (à la Taleb).

Link here:

Thursday, August 08, 2013

Voice is the new Touch

We saw it happen many times before: Windows is the new DOS, Web is the new Desktop, Touch is the new Web, and now we can start to glimpse the next one: Voice is the new Touch.

All of these technologies still exist; none of those phrases announces the end of the previous one, but rather the addition of a new layer that impacts everything we'll do and see in the future. Some people tried to kill the web a while ago... and they were very unsuccessful at that... so we shouldn't try that again. Touch will live a long life. We're just saying Voice is here to stay, on top of Touch.

What will be driving this new paradigm shift?

It clearly started with Siri; although it didn't have the strength to make the whole shift click, it started something big.

For me personally, Google Glass caused the major shift.

Others will see it with the Moto.

Later adopters will see it when cool voice applications start to come out on an Android or iPhone near them.
And the adoption curve that I mentioned a few times before will follow its typical shape.

My aha moment came a week ago, as I was sitting at a meeting table and someone wanted to know how many simulators we had from a particular brand store in a certain area. After using Google Glass for three months, it came as second nature to me to think: wouldn't it be nice to be able to say, "ok glass, what is the number of simulators sold in Canada during the last quarter?"

And that was the realization: not only would it be cool to be able to ask my custom database that question, it will be a standard thing to do one day. The layer of voice will become pervasive. Once users start using voice for a few simple things, it will be unstoppable. Users will want more voice commands and queries. Developers will want to enable voice in their apps. Voice will be the new Touch.
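As a thought experiment, here is a tiny sketch of what that voice layer over a database could look like: a recognized utterance is matched against a template and turned into a SQL query. The grammar, the table, and the column names are all hypothetical, invented just to illustrate the idea:

```python
import re

# Hypothetical schema: sales(product, country, quarter, units).
# One supported utterance template; a real system would have many.
QUERY_PATTERN = re.compile(
    r"what is the number of (?P<product>[\w ]+?)s? sold in "
    r"(?P<country>[\w ]+) during the last quarter",
    re.IGNORECASE,
)

def voice_to_sql(utterance):
    """Translate one recognized voice command into SQL, or return
    None if the utterance doesn't match the supported template.
    (A real implementation would use parameterized queries rather
    than string interpolation.)"""
    match = QUERY_PATTERN.search(utterance)
    if not match:
        return None
    product = match.group("product").strip().lower()
    country = match.group("country").strip()
    return (
        "SELECT SUM(units) FROM sales "
        f"WHERE product = '{product}' AND country = '{country}' "
        "AND quarter = 'last';"
    )
```

The speech recognizer handles the hard part of turning audio into text; once the text arrives, the app's job is only mapping it to the queries it already knows how to run, which is why adding a voice layer to existing software is so feasible.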

Wow! I was shocked! and inspired!!

A couple of days later, I was having meetings to discuss Glass integration for vertical markets. My first one was with EverMed, talking about bringing Glass to Medicine: helping the doctor do her rounds and prescriptions with Glass. Any other day, before my aha moment, I would have been thinking exactly that: "How do we bring Glass to Medicine?" But after the moment in which I realized Voice is the new Touch, my vision was expanded. I mentioned to Christian Saad and Thomas Schartz that we at 33 Labs have a voice/touch workflow engine called "Oktopus, get up to speed".

They just loved the idea. "That is exactly what a doctor needs!" said Dr. Saad.

Everything was starting to click.

So, what if in our next meeting, about bringing Glass to agricultural production, we could port the same concept? How about using Oktopus as the voice/touch workflow engine, integrating Android tablets or phones and Glass into a workflow for agricultural producers and agricultural engineers?

Gabriel Medina, a specialist who has been developing software for this vertical for the last 30 years, loved the idea. Eduardo Blasina, an agricultural engineer, loved the idea as well.

They all got it, and they even started writing scenarios in Word documents for collaboration. It was a natural fit!

What I see in the future, is this huge paradigm shift (one more time) that will make every little app have to have a voice component.

We see databases enabling natural language queries of some sort.

Business Intelligence cubes being accessed by voice.

Imagine the possibilities for people with sight issues, some of which are already happening, as well as for people with mobility issues, as is already happening too.

It really impacts every piece of software as we know it today.

I'm not talking about a future 10 or 20 years away; I'm saying this is a revolution that is about to happen as the Moto goes out in the streets and gets into the hands of users, followed by Glass coming out to consumers at the beginning of next year. This will happen fast, in accelerated mode, as we're getting used to by now.

As for 33 Labs as a company, we'll keep getting ready for this vision to be fulfilled.
We believe Oktopus, our voice/touch workflow engine, will be a key component of this shift.

Mark my words: Voice is the new Touch.