Saturday, December 22, 2007

Happy Holidays!


Click on the image for our Happy Holidays message!







from the creators of the best free toy ever, the rubber band :)
This guy's marketing rocks!


[Thanks to Adriana for the fun link]

Monday, December 17, 2007

Accelerated genetic human evolution


This BBC article [thanks to Ed for sending it to me] covers a study that tries to demonstrate that human evolution is accelerating at the genetic level. I find it a very nice reality check for those who don't believe in evolution.

On the other hand, it seems to me like we're past the point of natural genetic evolution. After watching a genetically modified mouse walk fearlessly towards a cat as a result of parts of its olfactory bulb being disabled, to cite just one of the recent wonders of genetic manipulation, you can see it coming fast ... Our pace of gene variation through genetic procedures will be so accelerated that we could say reproduction is losing its primordial role in the human race (too movie-like, but no less true because of that).

Thursday, December 13, 2007

How to monetize a website?


These are the basic revenue models I can think of, analyzed from the perspective of a regular website offering horizontal or mostly vertical content:

1. Subscription-based services.
This is basically the membership approach: members pay a monthly or yearly fee. There are several options here. Some sites provide free membership. Another option is a single fixed membership fee. Another is tiered memberships with different benefits and fees (gold, platinum, keep it simple ;). The fourth option, which I'm a big fan of, is a mix called freemium, which offers a free membership with basic services and a paid premium membership with exclusive services that only premium members get. The freemium option is usually complemented nicely by advertising in the free areas of the service (a quick revenue sketch follows this list).

2. Pay per service. This is the pay-per-view model of the cable industry. Users pay as they consume a product or service. For instance, a member might have a free membership and pay to play one round of golf or one tournament, etc. Within this model there's a new "it's up to you" approach started some time ago in the music industry by Radiohead. Also, an interesting take on this model was eBay's, with commissions applied to services/products sold by others.

3. Advertising-based models. Users readily accept non-intrusive advertising that finances their free services. It's been a way for multiple companies (including Google) to make money where many others failed before.

4. Merchandise store. As a site gets traffic, one revenue model usually used in addition to the others is a merchandise store related to the site's users' interests.

Then of course there are combinations of all of them, which I really think is the way to go.
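
Just to make the combination concrete, here's a minimal sketch of the monthly revenue arithmetic for a freemium site that also runs ads on its free tier. The tiers, fees and member counts are all invented for illustration:

```python
# Hypothetical membership tiers: (monthly fee in dollars, number of members).
tiers = {
    "free": (0.00, 9000),
    "gold": (5.00, 800),
    "platinum": (12.00, 200),
}

AD_REVENUE_PER_FREE_MEMBER = 0.30  # assumed: ads only shown on the free tier

subscriptions = sum(fee * members for fee, members in tiers.values())
ads = AD_REVENUE_PER_FREE_MEMBER * tiers["free"][1]

print(f"Subscriptions: ${subscriptions:,.2f}/month")   # $6,400.00/month
print(f"Advertising:   ${ads:,.2f}/month")             # $2,700.00/month
print(f"Total:         ${subscriptions + ads:,.2f}/month")
```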

Tuesday, December 11, 2007

Release software updates on Tuesdays, never Fridays


My uncle had surgery recently here in La Jolla. When we spoke on the phone before the surgery he mentioned that the surgeon liked to perform important surgeries on Tuesdays, so patients could come in and get ready on Mondays and would have several weekdays ahead before the weekend in case there were complications.

I thought there's something for us in the software industry to learn from this surgeon. So, from now on, unless something is totally broken and we need an immediate fix, our software updates will go out on a Tuesday. They'll usually be ready on Friday, we'll do the last round of testing on Monday, and on Tuesday out they go!
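
As a trivial sketch of the scheduling rule (the dates below are just arbitrary examples), here's how you could compute the next Tuesday release date from any given day:

```python
from datetime import date, timedelta

TUESDAY = 1  # Monday is 0 in Python's date.weekday()

def next_release_day(today):
    """Return the next Tuesday strictly after the given date."""
    days_ahead = (TUESDAY - today.weekday() - 1) % 7 + 1
    return today + timedelta(days=days_ahead)

print(next_release_day(date(2007, 12, 14)))  # a Friday  -> 2007-12-18 (Tuesday)
print(next_release_day(date(2007, 12, 18)))  # a Tuesday -> 2007-12-25 (the next one)
```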

In reality we're doing it on a Wednesday this time; we couldn't make it to Tuesday in time ...

Anyway, it looks like it'll save a lot of worry for both our clients and our tech support department :)

Monday, December 10, 2007

Chimps beat humans in memory tests

Japanese researchers have revealed the results of a fascinating study in which chimps beat humans in memory tests:



If I had to find an explanation I'd think that humans added too many "programs" in a relatively short period of evolution, which ended up degrading our memory in favor of other traits. I was quite impressed with the memory my 2-year-old had, and it seemed to fade away as she "learned" more social and specific behaviors.

Sunday, December 09, 2007

Communication evolution


Human communication is evolving, and it is evolving in an accelerated way.




I came across a post that summarizes the milestones like this:

Age in years (2007 estimate):

Homo sapiens    ~200,000
Language        >50,000
Writing         5,000
Telephone       131
Broadcasting    101
E-mail          25
IRC             19
Texting         15
IM              11
Blogging        10
Twitter         1


Can you feel the acceleration? It really makes you wonder what the next thing is and when we will know about it. Also, it seems to be reaching some kind of limit, as no technology can be adopted in less than a year, or can it?
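
Just to make the acceleration concrete, here's a minimal sketch that takes the rough 2007 estimates from the table above and computes how many times younger each communication medium is than the one before it:

```python
# Rough milestone ages in years (2007 estimates from the table above).
milestones = [
    ("Homo sapiens", 200000), ("Language", 50000), ("Writing", 5000),
    ("Telephone", 131), ("Broadcasting", 101), ("E-mail", 25),
    ("IRC", 19), ("Texting", 15), ("IM", 11), ("Blogging", 10), ("Twitter", 1),
]

# For each step, how many times younger is the new medium than the previous one?
for (prev_name, prev_age), (name, age) in zip(milestones, milestones[1:]):
    print(f"{prev_name} -> {name}: {prev_age / age:.1f}x younger")
```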

The article measures three variables for each communication type: immediacy, audience and lifespan, and analyzes the gaps, trying to predict/invent future communication channels.

Once a paradigm reaches a limit like this, a new paradigm is born. We might soon see the beginning of a new one: brain-to-brain communication through BCIs (brain-computer interfaces), something that is already out there ...

Saturday, December 08, 2007

One laptop per child


I just signed up for the Give One Get One program of the One Laptop Per Child (OLPC) initiative.

Sometime in early 2008 my daughters will get one laptop each and, what's more exciting, two little ones in Afghanistan, Cambodia, Haiti, Mongolia or Rwanda will get one as well in the same timeframe.

Unfortunately they will not make it by Xmas, but any day can be Xmas with a laptop :)

What we eat in one week around the globe



Interesting pictures showing what families around the world eat in one week ...

The source is the book Hungry Planet: What the World Eats; you can see multiple pictures online on this forum.

Friday, December 07, 2007

If the web were a democracy, standards would rule


If the web were a democracy (I don't want to say the world) we'd have standards for everything: OpenID, open social networks, the Open Handset Alliance, the open source movement.

Standards always benefit users; I believe they benefit all of us across the board, as they allow technology to progress and become pervasive.

Take an example of a world without standards, like the cell phone industry, and you will only see stagnation and user restrictions and limitations. CDMA, GSM and TDMA restrict which phones you can use with which wireless companies or in which country. Palm, Windows Mobile and OS X limit mobile software development. Web browsers on these platforms are not capable or strong enough to fill the standards gap for the industry.

On the other hand, there are tons of great examples of where standards can take us. What made the web what it is today? The web became a platform in itself, and if you wonder what the web is after all, it's just a bunch of standards: TCP/IP, HTTP, HTML, XML, etc.


What user in their right mind would deny the advantage of having a unique identity (like OpenID)? In the wild world of today, the adoption of standards depends on many factors, mostly what the big players are doing according to their own interests. I know this might be controversial; there are many interpretations of what rises to the top, such as the hive mind. I don't want to get political with the W3C, but maybe some day we can have some type of organization that optimizes the way we think about, create and, why not, vote for our standards. Then the industry would follow what is best for users, which again, in the end, is best for all of us.

Thursday, December 06, 2007

We don't know what a machine is anymore


I was reading a great post on important differences between brains and computers. It is very interesting as it summarizes the current understanding of what a computer is and what it can and can't do. As I was reading it, it reminded me of a wonderful essay by Edgar Allan Poe. In "Maelzel's Chess-Player" (1836), Poe lays out all the arguments to debunk the supposed chess-playing automaton Maelzel was exhibiting. The really interesting part is that through a couple of his arguments you can see how limited the understanding at that time was of what a machine could and could not do. The two things that were apparently true then, but not some ~150 years later, were:

"1. The moves of the Turk are not made at regular intervals of time, but accommodate themselves to the moves of the antagonist ... The fact then of irregularity, when regularity might have been so easily attained, goes to prove that regularity is unimportant to the action of the Automaton--in other words, that the Automaton is not a pure machine."

"3. The Automaton does not invariably win the game. Were the machine a pure machine this would not be the case--it would always win. The principle being discovered by which a machine can be made to play a game of chess, an extension of the same principle would enable it to win a game--a farther extension would enable it to win all games--that is, to beat any possible game of an antagonist. ..."

Some years later we could all see a machine play chess, and for some time we could see it both take different intervals to answer and win as well as lose games. Now that chess computers are more sophisticated we might not even see a chess computer lose a game anymore, and answers might eventually become so instant that they are perceived as coming at regular intervals. Still, by the time I read this story for the first time, I could see clearly that the "current" notion of what a computer can or can't do might be, and most likely will be, wrong.


That's why I also loved this post on the differences between brains and computers as we understand them today. Apart from being a very good, detailed article, I loved it because I think it might be the kind of piece we (and note I'm not saying our grandchildren) can look back at in the future (the near future, I'd think, because of the law of accelerating returns, or the exponential growth of technology) and realize how technology is changing and how our notion of what technology is needs to change accordingly.

On the differences themselves, I believe brains and computers are different, and they will continue to be. Computers will have some abilities (specifically regarding memory capacity and possibilities) that brains will only acquire by merging with technology; that's why it's important to think about how we will merge and how we design intelligent technology sooner rather than later (although it might be out of our control some day).

Saturday, December 01, 2007

Will GeneXus bet on Android?


I'm just assuming and hoping that we do find in Android the long-awaited standard mobile platform that will make mobile evolve into what it will become. That's just a bet so far; I wonder if GeneXus will bet on it ...

I guess it shouldn't be too hard, considering it's only a matter of adding the Android SDK flavor to the existing Java generator.

Imagining the 10th dimension video


I found this video that explains and builds up the dimensions, starting from a dot up to the 10th dimension, in a pretty accessible way.
In the end it ties in with string theory, which confuses me a little as I thought 11 dimensions were needed to close up string theory, although reviewing it now that seems incorrect; 10 seems to be the case.
Anyway, if you ever wondered how to imagine any dimension above the 3rd and 4th, this video, based on the book "Imagining the Tenth Dimension" by Rob Bryanton (the book doesn't seem as promising as the simple video explanation), will help you close the loop.

What do yoga and project management have in common?


Yoga is a non-competitive sport. For every pose, one of my teachers gives us three alternatives: the pose itself, modifications for beginners and the advanced pose. When I was thinking about the best way to give requirements to my developers I had this image of my yoga teacher's three levels, and I thought it makes a lot of sense to apply the same concept when I give specs (specifications) to my team.

For any one feature we want to implement, there is the core functionality itself, there are things we might like to have but are just nice-to-haves, and there are things that would make the implementation outstanding. When I give specs to my developers I like to give them options. This way, if something gets them stuck and it was not an absolute must for the feature, I won't be waiting a long time for something that was not essential to the goal in question. On the other hand, if all goes super well, they are aware of what the cherry on the cake would be and will tend to implement the optimal solution if it's not totally out of their way.
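
A minimal sketch of how those three levels could be captured in a spec. The structure, field names and the example feature are all made up for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class FeatureSpec:
    """A spec with the three 'yoga levels': the must, the nice-to-have, the outstanding."""
    name: str
    must: list = field(default_factory=list)          # essential to the goal
    nice_to_have: list = field(default_factory=list)  # fine to drop if it gets you stuck
    outstanding: list = field(default_factory=list)   # the cherry on the cake

spec = FeatureSpec(
    name="Member search",
    must=["Search members by last name"],
    nice_to_have=["Filter results by membership level"],
    outstanding=["Suggest matches as the user types"],
)
print(spec.must + spec.nice_to_have + spec.outstanding)
```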

Saturday, November 24, 2007

The giant global graph, where Semantic Web and Web 2.0 meet


For the first time we're seeing major agreement between what were two completely separate philosophies on the web (except for the occasional comments from each side crossing over to the other).



Last week Tim Berners-Lee blogged about the giant global graph (ggg), a new layer of abstraction on top of the net (internet, linking computers) and the web (linking documents).
"Now, people are making another mental move. There is realization now, "It's not the documents, it is the things they are about which are important". Obvious, really." says Tim Berners-Lee.

In the end, the long-awaited convergence between the Semantic Web and Web 2.0 comes down to graphs like FOAF (friend of a friend). FOAF was the first graph to become increasingly important recently, covering users' need to own their social networks instead of having one particular social website own them. Now, for the first time, both the Web 2.0 community and Tim B-L are mentioning each other on their blogs in agreement on graphs and web evolution. It seems like the first step for the re-converted Semantic Web is out there, not too surprisingly pushed by social networking needs, even if it will expand to many other areas.


One more extract from Tim's blog that shows the vision for the future (so we don't get too caught up on the social network thing):
"In the long term vision, thinking in terms of the graph rather than the web is critical to us making best use of the mobile web, the zoo of wildy differing devices which will give us access to the system. Then, when I book a flight it is the flight that interests me. Not the flight page on the travel site, or the flight page on the airline site, but the URI (issued by the airlines) of the flight itself. That's what I will bookmark. And whichever device I use to look up the bookmark, phone or office wall, it will access a situation-appropriate view of an integration of everything I know about that flight from different sources. The task of booking and taking the flight will involve many interactions. And all throughout them, that task and the flight will be primary things in my awareness, the websites involved will be secondary things, and the network and the devices tertiary."

Thinking Space has an interesting point when analyzing the appearance of the GGG: the new graph abstraction might be an early indicator of a switch in the path of web evolution. Yihong Ding envisions a future web that is viewer-oriented instead of publisher-oriented. I like this idea of users getting their own personalized view of the web on this new layer of abstraction. Each user's cyberworld would be the intersection between their own graphs and the graphs out there.

This is getting exciting!
I wonder what would be next? The big bionic brain (bbb, not to say big brave brain) maybe? ;)

Tuesday, November 20, 2007

More neuro-feedback and neuro-implants experiments

Via Neurophilosophy I came across three great examples of neuro-feedback and neuro-implants that show how fast we're getting there:

1. Neuro-feedback: Brainloop, a brain-computer interface for Google Earth



2. A speech neural implant which could soon enable a paralyzed man to talk again by recognizing the brain waves associated with sounds and running them through a text-to-speech computer program.

3. An attempt to produce collective music from brainwaves and heartbeats. The idea was probably better than the results in this experiment ...

Saturday, November 17, 2007

Digital art: Flash Oz infinite world


Spend some minutes (3.5 depending on the speed) traveling through this imaginary recursive flash scenario: Oz infinite world
[via connecting the dots]

Cheating on sleep


I read this article on Wired about the possibility of cheating on sleep. It basically says that when we sleep at night it's not so important how many hours total we sleep but how many complete sleep cycles we go through. Apparently, as we sleep our brain goes through cycles with 5 different phases totaling about 90 minutes; three of them are relevant: about 65 minutes of normal non-REM sleep, 20 minutes of REM (rapid eye movement) sleep, plus a final 5 minutes of non-REM sleep. The REM phases become longer later in the night. These studies show that if we were to wake up naturally, with no alarms or external intervention, we'd sleep in multiples of 90 minutes, such as 4.5, 6, 7.5 or 9 hours. The idea is that if you sleep complete cycles you'll get a better rest: a person who sleeps only 6 hours (4 cycles) will feel more rested than someone who slept 8 or 10 hours. Wait a minute ... that sounds familiar!

I've been experimenting with this information for some nights and it seems to be true! For now I am only checking the time every time I wake up in the middle of the night to see if I indeed slept through 90-minute cycles, and it's been happening pretty consistently: no matter what time I go to sleep, I'm waking up after 3 hours and after 6 hours ...

The next step is to try to actually get out of bed at the end of the cycle closest to my target time (the time around which I intend to be up). We'll see if I can cheat sleep even a bit ;)
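
As a minimal sketch of the arithmetic (assuming a flat 90-minute cycle and ignoring the time it takes to fall asleep), here's how you could pick the cycle-aligned wake-up time closest to, but not after, a target time:

```python
from datetime import datetime, timedelta

CYCLE = timedelta(minutes=90)  # one complete sleep cycle, per the article

def wake_up_time(bedtime, target):
    """Latest bedtime + N * 90 min that is at or before the target wake-up time."""
    complete_cycles = int((target - bedtime) / CYCLE)
    return bedtime + complete_cycles * CYCLE

# Example: in bed at 11:15 pm, aiming to be up around 7:00 am.
bed = datetime(2007, 11, 17, 23, 15)
target = datetime(2007, 11, 18, 7, 0)
print(wake_up_time(bed, target))  # 2007-11-18 06:45:00 -> five complete cycles
```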

As I was reading this article I was reminded of the book "The New Everyday: Views on Ambient Intelligence" from Philips Research. In this book they analyze future technologies very open-mindedly. In particular, they talk about ways of replacing the traditional sound of an alarm clock with a diffused light system capable of creating an atmosphere in the bedroom that wakes you up in a nice, natural way.

How cool would it be to have sensors detecting your brain waves and waking you up naturally and nicely when the cycle closest to (and before) your target time comes up? I'd love it!!

This reminds me of the phrase: "Imagine the possibility. Create the reality".

Friday, November 16, 2007

Why users matter more than ever before


Users have always been a big part of the software development process, but recently I've been thinking that the role of users today is more important than it ever was.

In which phases should we involve users, and how?

1. Requirements. This is the most obvious one. Users are a key part of the requirements definition. This was true in the old times too, but today users know what they want in a very precise way. In the old times we were building systems for users who had never used a computer system before; users were 100% computer-naive (as opposed to today's computer-aware users).
When I studied systems in the late 80s, users would provide the field knowledge, the "what", and engineers, analysts and developers would provide the "how to" part of the implementation. Today users are expert Windows and web users; they know what to expect from a system, an interface and much more. I think users today know what they want, and without losing our objectivity, and obviously adding our own know-how, we should listen very carefully to our users' requirements.

2. User ownership towards a successful system rollout. A system that was conceived and developed with user ownership and commitment will be successfully implemented and installed. On the other hand, a system that fails to roll out successfully will in most cases reveal that the users who should have been committed to it were not identified and integrated properly. Which users are we talking about in this particular step? For an internal or intranet system it would be: end users, the marketing team, the support team, the sales team. For an internet system it is more complex and is explained better in specific literature such as "Crossing the Chasm", but basically it would be: alpha geek users, beta users, core users, vertical/extended users, mass users.
I had my own aha moment on this subject in my early years when, at the moment of rolling out a system that from my perspective was just perfect, there were all kinds of obstacles and we could not get a successful rollout. Of course the users did not own the product, they didn't feel any commitment to it, they even felt threatened and challenged by the system, and it just didn't work out.

3. Testing and system evolution. A system that is not used will never improve. What makes a system evolve is just one thing: USERS. You can plan to improve a system as much as you want, but what will really improve it is real people with real problems and real experiences using the system. That's why it's so important for systems to achieve a critical mass of beta testers and core users who will push the product to its limits and make it evolve and grow. This is truer than ever for web applications. Most users are educated about what they expect from a system and they will ask for it, anonymously or with their names. One way or another, if you have a critical number of users you will get the feedback your site needs to evolve.

So, more than ever before, users are a key part of the software development process, and how much they are considered might make the difference between a system's success and failure.

The day users can build their own systems might not be too far away, and some people are already talking about self-improving systems, so things will keep changing ...

Friday, November 09, 2007

DVR obsession


It's well known that we get used to nice stuff too easily. Technology such as digital video recording just spoils us. After a while of having this feature on my cable box it seems so natural that I just can't believe I lived without it before. Not only does it take care of the kids' interruptions when you're watching a show, let you skip through advertising and compress show time by 40%, it also lets you replay something you couldn't understand or need more detail on, and frees you from having to remember show days and times. This is just the beginning ...

What's funny is that I developed such a dependency on DVR technology that now I find myself trying to rewind a radio program I'm listening to in the car or, even worse, trying to rewind a phone conversation or some real-life conversation or situation. It just makes so much sense that my brain can't tell the difference between media; it just loves this feature and wants to have it everywhere.

Most likely it's my bad memory that's really driving my DVR obsession, but I can see a near future where both automatic and explicit recording capabilities become pervasive. Can you imagine the possibilities as video technology itself evolves into digital search by image, sound, GPS position and date-time? Show me the video from the moment I was at the entrance of the zoo, or show me the part when Nicole showed up at the house, or show me the part when he said "bla bla bla". Can't wait ...

I believe there is also value in having a DVR kind of thing for computer usage: the latest sequences saved automatically, as well as the ability to purposely save screen images and commands and replay them later, with functionality similar to the DVR for TV. This could have great value in many applications, specifically computer learning or knowledge management.
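
A minimal sketch of the "automatic" half of that idea, assuming we only keep the last N events in memory the way a DVR keeps only the recent stream (the event descriptions are invented):

```python
from collections import deque
from datetime import datetime

class UsageRecorder:
    """Keep a rolling window of the most recent computer-usage events, DVR-style."""

    def __init__(self, capacity=1000):
        self.events = deque(maxlen=capacity)  # old events fall off automatically

    def record(self, description):
        self.events.append((datetime.now(), description))

    def replay(self, last=10):
        """Return the last few events, oldest first, ready to be 'replayed'."""
        return list(self.events)[-last:]

recorder = UsageRecorder()
recorder.record("opened invoice #1042")
recorder.record("ran report: low inventory")
for timestamp, action in recorder.replay():
    print(timestamp, action)
```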

Digital footprint


I guess it was mostly because of the fires that I started thinking seriously about the importance of digitizing our lives, especially our past (as our present and future are luckily pretty digital anyway).

When faced with the possibility of evacuation a couple of weeks ago, I had to go over, mentally and even physically, the things I absolutely had to take with me. Among those, in my case, were documents (such as immigration paperwork, degrees, insurance paperwork) and old photo albums that could easily be scanned into a digital format. Then, of course, there are things like family treasures or trip souvenirs that you just can't compress onto a DVD (with our current technology, a video recording of them is the most you could get in this case).

Anyway, even if I probably won't find the time to digitize my past, I still think it'd be a great idea to have everything converted to digital format. Then move it to a server with some redundancy and that's it: you're free to pretty much go through life without the heavy baggage, or at least without worrying about or risking losing stuff that could be irreplaceable.

It's amazing to see how much smaller our digital footprint is compared with our physical one ...

Sunday, November 04, 2007

The best analysis of technology and evolution ever written


I finished reading "The Singularity Is Near" by Ray Kurzweil (yes, I did! ;). I don't think there's another book out there with such a fine analysis of the evolution of technology and of humankind, or, for that matter, another person who has thought as deeply and thoroughly about this subject.

The book really pours out information from beginning to end regarding technology in general, computing, nanotechnology, robotics, genetics, physics and more. Obviously there's a lot of speculation (as there ought to be in this field), but it makes a lot of sense and asks the right questions. I have to admit this is my first Kurzweil book, so I don't know how it would feel to someone who has already read the previous ones.

We needed a new Carl Sagan. Kurzweil not only knows and thinks incredibly well about the matter of evolution, he also seems to have the determination and the capacity to communicate his ideas, even at the risk of being seen as mad or naive.

It seems like the movie is coming ... This is very interesting, as it might be able to take information that would otherwise be hard to consume and bring it to a massive audience.

It couldn't fail for me, my favorite subject in life combined with a great thinker :)

Friday, November 02, 2007

Design for functionality

We've come a long way since the old structured, menu-driven Windows applications. Before, a user would be entering an invoice, the inventory for an item would be low, and the user would have to go completely outside their current action and remember to go to some other menu to add the item to a supplier's purchase order. Today, on the web, the user expects to see, within close reach, a place to at least mark that item to be added to their next PO.


I'm not sure exactly when this trend started, but I noticed some time ago a change in the way we design web applications. In particular, I love web designs that have functionality in mind and, as a result, offer actions to users in the place and at the moment they might need them.





I started adding functionality with this idea a couple of weeks ago and I like the results. Examples are offering "Your logo here" in the spot where a client would see their logo, or "click here to change your ..." right at the place where users are seeing the results of something being filtered for them.

Blogger got it regarding tracking comments


A partial simple solution to a common old problem ...

Friday, October 19, 2007

Amazon "Outfit your office"


Amazon includes, as part of its Business Center, a nice component called "Outfit your office" that takes advantage of a compact visual presentation of its office product lines.

It's all HTML+ajax, although it'd look much better and react faster if implemented with Silverlight or Flash.

This is not the first time we've seen similar visual presentations. I remember back in the late '90s the Southwest site had something similar (no Ajax, of course). I guess the difference is that when Amazon does something it usually has repercussions.

Monday, October 15, 2007

Brain-computer interface for Second Life


This blog talks about a Japanese experiment to control an SL avatar with a brain-computer interface.

I wonder when I will be able to try this ... Some time ago someone came to my co-worker Daniel and me offering a device that would measure brain waves and such. We never had the time to play with it, and it was still too pricey to get just for fun.

I'm just so curious about this type of technology, it has a huge potential, and it seems to be happening sooner than we expect.

Sunday, October 14, 2007

Do entangled particles communicate faster than the speed of light?


I finally understood a little more about the mystery of entangled particles. Entangled particles are two particles that, having been created together, remain somehow linked, such that while a given property is not yet determined in either particle, the resolution of this ambiguity will occur at the same time in both.

One experiment that clarifies this phenomenon was done by Nicolas Gisin, University of Geneva, who "sent two quantum-entangled photons in opposite directions through optical fibers across Geneva. When the photons were several miles apart, they each encountered a glass plate. Each photon had to "decide" whether to pass through or bounce off the plate (which previous experiments with non-quantum-entangled photons have shown to be a random choice). Yet because the two photons were quantum entangled, they made the same decision at the same moment." (from Ray Kurzweil in "The Singularity is near")

There are two possible explanations for this observation.

1. The two particles are indeed entangled, as their name suggests, and they can send information to each other faster than the speed of light; this way, when one faces the decision, the other is informed of it and does the same thing. This is by far the most interesting explanation, as there could be ways to use this faster-than-light transmission. The more I think about it, though, the more unlikely it seems to be true ... Why would one particle have to make the decision before the other, with the second one being just a listener? Would the transmitter/listener roles be set simply by timing, meaning the first particle to reach the glass plate decides and the second one just listens? Unless they talk to each other and decide together in a two-way faster-than-light communication? I'm not sure I understand 100% of the experiment, but I guess it'd be interesting to delay the moment at which one of the two particles reaches the glass plate and see if the correlation still happens. Unless timing is key for the particles to know the events are related and that they need to react the same way.

2. The second explanation discards any communication between the two particles and attributes the correlation of their "decision" to hidden variables that both particles had set the same way at creation time, which shows up as the entanglement. Even if not as exciting as the first explanation, this one seems worth pursuing. I wonder what that hidden variable would be. I'm sure more experiments are on their way ...

It's amazing how little we know. Or, to put it another way, back to Newton's oft-cited and never truer phrase:
"I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the
great ocean of truth lay all undiscovered before me."

Wednesday, October 10, 2007

New payment method: It's up to you!


Time Magazine published an article on Radiohead making history these days by innovating in the launch of their new album "In Rainbows". As the image shows, the downloadable version of the album can be purchased for a price set by the consumer. It could be anything from 0 on up (maxing out at 80, as that would buy you the deluxe boxed version).
When you click on purchase you get to a page where you're supposed to enter the price in pounds you want to pay. If you click on the help icon it'll tell you "It's up to you", and if you click help a second time you'll get "Really. It's up to you."



This powerful move in an industry that is already pretty shaken will certainly make waves. Basically they cut out the labels and middlemen and brilliantly innovated on the payment method, most likely planting the seeds for a successful 2008 tour.

Unfortunately the site seems to be pretty swamped and is really slow. For that reason, or some other technical problem, I could not finish my transaction or the download that was supposed to be available as of Oct 10th (today ;) despite multiple attempts.

update: I could finally buy it and download it too ...

Saturday, October 06, 2007

Blogger keeps thinking it knows what language I speak ...


It's very annoying ... Blogger thinks I speak German now because I'm accessing it from Germany ...

Anyway, it's time for me to go to sleep and my melatonin is not doing the trick today.

Now I have my soup


A few days ago I started my soup.

There are other tumblelog sites, such as Tumblr.

A tumblelog is a type of blog that integrates different sources and formats in one single blog.

My fascination with blogs in all their variants goes well beyond my own blogging needs, which are not that big themselves. I think it has to do with how we're increasingly becoming a broadcasting culture. The question that keeps popping up in my mind when I think about this growing broadcasting obsession is how we are going to keep up with consuming these humongous, exponentially growing amounts of information. I don't have a definite answer, although I suspect we'll increasingly need computer agents that will filter information for us and learn our priorities, interests, needs, etc.

Friday, September 28, 2007

What would it take to make wikis mainstream?


If wikis are, as I would guess, the future of digital knowledge management, why has adoption been so slow? And what is the tool or click that's missing to make them mainstream?

We have to be very careful when we look at wiki adoption: as Bradley Horowitz suggested some time ago, there are three different groups of users: creators, synthesizers and consumers. Basically, when we look at wiki adoption we might have a tendency to look only at the creators/synthesizers, and that would not be correct. You can estimate that for every 1 creator you'll find 10 synthesizers and 100 consumers.



One thing wikis usually lack is a nice, clear design. Maybe the killer wiki application will just give a new look to the same old thing. Maybe most of the emphasis of those developing wiki software so far has centered on the creators and synthesizers, overlooking the majority of the population that is actually going to make wikis mainstream.

I was looking a little at the history of blogs; it looks like there could be lessons to be learned from blog evolution.

A tool that could read any wiki seems to be one of the things wikis are lacking. We don't have such a thing as a wiki reader where I could concentrate my wiki reading needs. This could be because of the lack of a proper wiki RSS publishing standard, maybe it just didn't occur to people, or maybe it's just a bad idea; I'm just thinking out loud without having really researched much.

Even if such a tool were developed, there is what I think is the main issue with wikis: information seems to get lost inside them. It would be really nice to have different paths that would give me a tour of a wiki according to some pre-established need I might have. So, for instance, if I'm looking for a quick introduction I'd follow Joe Doe's suggested path and I'd have an easy sequential way of looking at things; once I choose this path I could read the wiki like a traditional book, page by page, without being overloaded by the wiki's vast complexity, although of course everything is hyperlinked and I could decide to leave the path at any time and just browse back out to sea. So there would be a need for a tool that creates the illusion for users that they're actually reading a book (even graphically), although the richness of the wiki can emerge at any point. Also, collaboration tools for people to share their favorite paths through a wiki would be important, and this would keep synthesizers' hands busy, giving them a nice place in the community even if they're not really creating content. This could work in cases such as a book written as a wiki, where the only content editors are really the author(s), but they could still build a community on top and use the same wiki reader and the same collaboration features.

Annotation and collaboration are indeed another thing that remains kind of obscure for some of the uses where it'd be interesting for wikis to break in (such as the book industry itself). The wiki, originally focused only on building content, keeps its collaboration tools off to the side and almost hidden; but when we think of a community sharing and adding their notes and wanting to see their friends' reactions to what was said in a particular part of a page, those tools really need to stand out graphically a whole lot more (with the ability to be hidden as well, according to the user's personalization/customization settings).

Summarizing, I can imagine a universal wiki reader, similar to an RSS reader but much more specific to the medium, that can optionally flatten the wiki's world to hide some of the complexity from the user, highlights the collaborative and social features (maybe being context-sensitive to the user's social network of preference), and focuses as much on the consumer of the wiki as on the synthesizers and the creators.
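
As a minimal sketch of just the "flattening" idea: the page names, the tour path and the reader structure below are all made up for illustration, and a real reader would of course fetch and parse actual wiki pages:

```python
class WikiReader:
    """Flatten a hyperlinked wiki into a sequential, book-like reading path."""

    def __init__(self, pages):
        self.pages = pages  # page title -> page content

    def read_path(self, path):
        """Yield pages in the order suggested by a shared 'tour' path."""
        for title in path:
            yield title, self.pages.get(title, "(page not found)")

# A toy wiki, plus a path a synthesizer might share as a "quick introduction" tour.
wiki = {
    "Overview": "What this wiki is about ...",
    "Getting started": "First steps ...",
    "Advanced topics": "For later ...",
}
intro_tour = ["Overview", "Getting started"]

for title, content in WikiReader(wiki).read_path(intro_tour):
    print(f"== {title} ==\n{content}\n")
```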

Wednesday, September 26, 2007

GeneXus XVII Event promising and inspiring


The GeneXus XVII Event was promising ... and it's kind of weird to say that about an event, but Rocha is a very promising version and the event was largely centered on the Rocha version, which in turn made the event promising, if that makes any sense.
Rocha is all that it promised and probably lots more. Being in beta 1 it still has a way to go, but it's definitely the version that can make GX leap to the next phase of being truly adopted worldwide. Of course, in order to be adopted worldwide it needs to get its feet into the US market, and that is a costly move, or one that has been postponed too long ... There were great insights on this subject in Pablo Brenner's talk (a funny one, too).

As for inspiring, I have never attended a GeneXus event that was not inspiring, and that's probably true of most human gatherings where there's a synergistic energy going on, and especially any type of shared passion.

I liked Andres Levin's talk a lot, and knowing that Rich Internet Applications and separating interface from code will be addressed. I still have many presentations to watch as I find time here and there.

I was surprised that the subject of collective intelligence (Web 2.0) did not surface as a relevant one; it was already missing last year. Even if it's not relevant to the tool itself, it surely should be of great importance for the community, and I was expecting to see at least some of that, but unless I totally missed it, it didn't seem to be there.

All in all a great event, enjoyable even remotely!

GeneXus XVII Event remote experience


I could not attend this GeneXus Event in person, but the internet access to it was so good that I really felt part of it to a great extent. It helped that, coincidentally, I was sick these days and couldn't do a whole lot more anyway ;)

Only a couple of things could have made my remote experience better:

1. It'd be nicer if the conference videos were available the same day. I have a time difference with Uruguay, and if I missed a talk live I could not see it until the next day, when it was published.

2. It'd add a lot if there were other live webcams showing people in the registration area, booth areas and general areas. Even if the talks are the main and absolute substance of the Event, the general atmosphere and the social encounters are missed as well. Maybe there could even be a way to interact remotely, ask questions or otherwise. There seems to be a growing community, and the local event will reach a physical limit (it probably already did).

As I write this post I'm listening to Breogan Gonda announce that the Rocha book will be downloadable from their website. That's a great direction to go in.

Sunday, September 23, 2007

Semantic Web or just Relational Web?


Both Tim O'Reilly and Alex Iskold from Read/WriteWeb have been talking about the Semantic Web this week. Any prediction about the web of the future involves the Semantic Web in one way or another. The reason is simple: we need better-quality search on the web. We want to be able to get all the packages out there for fall Hawaii vacations, or all the 8MP digital cameras under $200 that can ship to California with no tax and overnight shipping.

Many years ago, Tim Berners-Lee came up with a vision of a Semantic Web that would eventually let computer agents look up all these answers for us. The vision was basically about annotating the existing web in a way computers could understand: adding tags with metadata representing ontologies (mainly RDF) that would allow computers to infer things, and using an SQL-like language (SPARQL) to retrieve meaningful information. The problem is how to produce such an annotated web.
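
Just to make the idea concrete, here's a minimal sketch of what "metadata computers can query" boils down to, using plain Python triples instead of real RDF and a simple pattern match instead of SPARQL (the camera data is invented for illustration):

```python
# A tiny "semantic" store: facts as (subject, predicate, object) triples,
# the same shape RDF uses.
triples = [
    ("camera:A40", "type", "digital camera"),
    ("camera:A40", "megapixels", 8),
    ("camera:A40", "price_usd", 189),
    ("camera:B12", "type", "digital camera"),
    ("camera:B12", "megapixels", 8),
    ("camera:B12", "price_usd", 249),
]

def query(pattern):
    """Return triples matching a pattern, where None acts as a wildcard."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# "All the 8MP cameras below $200": intersect two simple pattern queries.
eight_mp = {s for s, _, _ in query((None, "megapixels", 8))}
cheap = {s for s, _, o in query((None, "price_usd", None)) if o < 200}
print(eight_mp & cheap)  # {'camera:A40'}
```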

There are basically two ways to do it as Alex Iskold summarizes very well:
1. Top-down approach: computer tools convert the existing web into semantic representations (ontologies in RDF format, most likely). In order to produce such representations, the computer needs to understand semantics. To me this is the real Semantic Web, and it is not as far off as we imagine, but it is obviously not here right now; there are some hurdles and time to go ...

2. Bottom-up approach. All the bottom-up approaches involve manual changes to existing HTML, and to the applications producing HTML, in order to add another layer of metadata on top of the visible layer. People build ontologies manually or with the help of tools. This is a very time-consuming process that so far has only been adopted in highly scientific environments with an extreme need for formalization. It has basically only worked in very vertical markets and areas.

The Semantic Web is not yet a reality, despite being out there as a concept for many years, because the top-down approach is beyond our current state of the art, and the bottom-up approach is high-maintenance, really impracticable at full scale, and industry has not adopted it.

So, by definition, the Semantic Web is one where computers have the ability to understand semantics. How can computers understand semantics? The only way for that to happen is for computers to understand natural language and to be able to build models of reality with their complex relationships, as we humans do. Can we have a Semantic Web without computers understanding natural language? I think the answer is no. The reason is that we can give computers metadata to consume, and that will give us better-quality information and searches, but there will always be stuff that is not annotated and falls outside the Semantic Web.

What are the alternatives and possible evolution paths from where we are right now?
1. Partial annotation with industry and de facto standards could prevail: microformats, standard metadata for contacts, social networks, calendar events, geographical info, among others. Seems like the right tool at the right moment. (bottom-up)
2. Browser toolbars and extensions that mine HTML pages in search of phone numbers (like Skype recently tried, not too successfully) or addresses (like the Firefox extension Map+); a sketch of this kind of mining follows the list. (top-down)
3. Global relational databases such as Metadata could make a difference if they could become a standard and mine the web accordingly, as well as being used in combination with pipes for retrieval. (mix)
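
A minimal sketch of the toolbar-style mining from item 2, using a naive regular expression for US-style phone numbers (real extensions are far more careful about formats and false positives; the page content is made up):

```python
import re

# Naive pattern for US-style phone numbers, e.g. (858) 555-0123 or 619.555.0199.
PHONE = re.compile(r"\(?\d{3}\)?[\s.-]\d{3}[\s.-]\d{4}")

def mine_phone_numbers(html):
    """Scrape anything that looks like a phone number out of a page."""
    return PHONE.findall(html)

page = "<p>Call us at (858) 555-0123 or 619.555.0199 for a quote.</p>"
print(mine_phone_numbers(page))  # ['(858) 555-0123', '619.555.0199']
```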

I love all these approaches, I just wouldn't call them the "Semantic Web"; it's more like adding some semantics to the web, which is why some people call it the "semantic web" (lowercase). I see it more as part of the "Relational Web": as far as the computer and the web themselves go, they don't gain semantic capability from these improvements, we're just adding to the relational capability of the web. There are other great things happening in the "Relational Web" right now ...

Anyway, regardless of which way the web will evolve and how we'll name it, it's happening and it's very exciting to witness :)

ORT Student Journeys: Gonda and Dolder


My friend Mauro Canziani mentioned a few days ago that he was attending the X Student Journeys at ORT University. That brought back great memories from probably one of the first editions of these Journeys, which I had the opportunity to attend many, many years ago (I'm almost sure it was way more than ten editions ago, so it looks like they changed names at some point).

There were two talks with memorable moments for me at that event:

1. Breogan Gonda talked about a project he had in the development stage that sounded very promising at the time, and he closed his talk with a very empowering concept pointing to the unlimited potential of ideas, stating "We've got the imagination." "The imagination" materialized over the years into the greatest development tool ever: GeneXus, whose relevance increases as it gets harder and harder to keep up with technology's pace in today's world of exponential growth.

2. Herman Dolder explained (to me for the first time) the process of learning. The main thing I got out of his presentation was that in order to learn you need to attach new concepts to existing concepts in your mind. If you don't have the little pieces to attach the new information to, you simply can't learn. That's why good teachers present information in different ways, so different students with different prior concepts or views can make the link. The "aha moment" is unique and particular to each student, and it involves a matching process between what I know and what I incorporate as new knowledge. As a corollary, learning hurts. The reason is that sometimes, in order to match new information with existing information, we need to re-accommodate what we know, and this is a process that consumes energy and generates "mind pain". Knowing this helped me recognize and embrace that pain as I learned, which seems to be fundamental to acquiring the flexibility to learn in the least limited way possible.

A funny coincidence after all: in writing this post I realized that Herman Dolder is pursuing a development tool based on Model Driven Systems. In this paper he compares such a tool with Program Driven Systems and application generators (such as GeneXus). The theory is pretty interesting; as he states in the paper, Model Driven Systems would be the holy grail of computer science (like any other project involving AI). On the other hand, if I were pursuing such a tool I'd ride on top of existing technologies and use, for instance, GeneXus as the program generator. Actually, GeneXus is in a better position today than ever before, as it now has a pattern generator that can generate GeneXus objects as needed by an external tool. It would have made a lot of sense to design a tool that would, in the end, use GeneXus through the pattern generator to produce the best database schema as well as the best platform-dependent code, instead of re-inventing the wheel ...


Back to the ORT Student Journeys themselves: they had a very nice robot development contest on speed, orientation, search and transportation, all done with LEGO kits. It must have been lots of fun!

As you can tell, I didn't have the time this year to make it to the GeneXus Event, so I still have time to think about out-of-the-blue stuff ;)

Wednesday, September 19, 2007

Users will always transcend a product's design intent




What developer has not come across a user who explained to them an extra use they found for a product that they, as the designers, had no idea could be accomplished with the tool they built? Probably none. Sooner or later a user will come to you and explain how they are working around this or that issue and managing to make the system calculate some extra thing for them that you had no idea the system could.

The other day a couple of co-workers and I were looking at Google Street View of the San Diego area and I was wowed just imagining the possibilities. I know about all the security controversy and so on, but I still think it's sooo cool to have a tool like that. To my surprise, one of my co-workers goes: who's going to use this? I don't see much use for it, on top of the security concerns, bla bla bla ... That's when I was reminded one more time of this maxim: "users will find ways to use any tool in many more ways than it was originally intended", and I thought, this in itself is a universal truth about technology! Ask Einstein ;)

Sequential is baaad


One of the most enlightening moments in my process of becoming computer literate was when, in Programming II, I was taught about file access, at that time in Cobol. We basically started learning about sequential files and did all the boring processing that could be done with them, and later in the semester we were introduced to the concept of indexed files. It was like going from night to day, or, another way of saying it that I like better, going from the real world to the digital world. There were two great insights at the same time:
1. Sequential is baaad, indexed is good. This is the obvious one; it's a simple matter of optimizing access.
2. In the real world objects occupy a place in space, but this is not completely true in the digital world. In other words, in the digital world I can access a list of people by last name and by date of birth at the same time. It's the same information, stored once, but it can be accessed (almost) instantly through different concepts (see the sketch below). I believe as we bring more digital and virtual spaces into our lives we'll feel the presence of this concept more tangibly. Think about it: a virtual person could be in multiple worlds at the same time ...
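
A minimal sketch of both insights in today's terms, with dictionaries standing in for those old Cobol indexed files (the names and dates are made up):

```python
people = [
    {"last_name": "Smith", "born": "1970-05-02"},
    {"last_name": "Gonda", "born": "1945-11-23"},
    {"last_name": "Dolder", "born": "1950-03-14"},
]

# Sequential access: scan every record until you find the one you want.
def find_sequential(last_name):
    for person in people:
        if person["last_name"] == last_name:
            return person

# Indexed access: build indexes once, then look the same data up instantly
# by last name *and* by date of birth, without storing the records twice.
by_name = {p["last_name"]: p for p in people}
by_birth = {p["born"]: p for p in people}

print(find_sequential("Gonda"))    # walks the whole list
print(by_name["Gonda"])            # direct lookup by last name
print(by_birth["1950-03-14"])      # same records, different access path
```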

It probably all seems too obvious and irrelevant. So why is this still relevant to me? Because there are plenty of things out there that are still essentially very sequential media, and that totally bugs me. Two examples are videos and books. Other than the basic chapter index, the idea is that you consume both a video and a book sequentially. This is a very limiting way of consuming information. When you're talking about a story it's a little more arguable, but when it comes to consuming information it is definitely not a good option.

Particularly with books, I struggle a lot when I see digital book readers that are just a transplant of the old sequential book model onto new hardware. On the other hand, I've been thinking a lot about alternatives without getting anywhere conclusive. The alternatives I thought of clearly involve hypertext, picking up the concept of summarizing and allowing the reader to drill down into the information as needed. But of course the business of writing a book hasn't changed, the way people write books hasn't changed, so it's a whole chain that will need to be re-invented, even as an economic model, in order to have the book of the future.

One last thought on books: as we were discussing in a previous post, one alternative would be somewhere around wikis and blogs (in all their variants). Wikis, despite being much more powerful than blogs, never became as popular. This is something I've been thinking about for a long time and can't seem to find a definitive answer to ... I believe part of the answer lies in the fact that wikis (and hypertext itself, for that matter) are not flat and sequential; they're graphs, and rather hierarchical. Blogs, by contrast, are pretty sequential. So I wonder if there's a limitation in our minds, which (untrained) don't come with a natural capacity to deal with non-flat, non-sequential media. Maybe it's part of our evolution to get into these non-flat spaces little by little, or maybe by means of virtual technology we'll find a way to present digital information in ways that look flat even if they're not ... Not sure I'm making total sense here, just thinking out loud ;)

Anyway, the subject of mixing wikis and blogs is a recurrent one around the blogosphere, and I suspect an important one too ...

Raising kids: another way of programming


Today, while doing homework with Nicole, I came to the realization that my number one task as a parent is not that different from my full-time job.

Out of nowhere she tells me, "I don't cheat at school, because I want to learn." I was like, oh my god, she does listen to me! At that point it was like seeing your application run, seeing the program working, and then I thought, wow, it's all about programming. Every little talk, every little conversation becomes meaningful. I feel now that as I talk to my kids I'm debugging, tweaking, improving, as life unfolds and the programs get tested in the field. Coming back to a wonderful book by Marvin Minsky, "The Emotion Machine" (which I never finished, BTW ;), pride and shame would be two of the most basic ways goals get programmed into kids' minds, but it gets so much better when you can consciously program them; it becomes another ride, an extra one :)

Obviously, when it comes to kids, you're not alone in the task. Teachers, peers, friends and family, and the media will also do their own programming on them, and there's always genetics too, which determines which programs run better on their hardware.

I guess more important than providing the actual programs, in this case, would be to give them the tools to program themselves: to assure them that they have programs in their minds and that they can change them, that they actually have to keep monitoring, tweaking and playing with those programs for life.

Friday, September 07, 2007

GeneXus Planet's ecology


I love being part of Planeta GeneXus. The downside for me, though, is that I feel a little restricted in what I can express.

I started this blog to talk about my number one passion in life, which is evolution, and I'm drifting further from my main goal as time goes by ... I think I'll come back to my original blogging intention and then wait for the comments asking to kick me out of the planet ;)

When you're in a Planet you need to be aware of the Planet's ecology ... so out of respect for the Planet's environment I end up not writing some posts that are too off-topic or too personal or that I feel would just pollute it excessively.

I guess it's a fair trade-off, fraternity vs. freedom ;)

That's why I decided to start trying a separate blog for my personal/spiritual dimensions called Natural Life Highs.

Like others on the planet, I'm wondering if this is the way to go ... I'd much rather have only one blog than multiple, and even better, I'd like to have one blog with different spaces on it, broadcasting to different planets as needed ... I might be able to do something just by changing my home page, dividing it into spaces and publishing the last post of each space. I haven't looked into the template capabilities that far ... maybe I need another tool, or new software might need to be developed ...

Wednesday, September 05, 2007

Your point of view ad campaign



Sometimes a little detail can make a big difference. On the walls of the corridors inside London Heathrow Airport there is a massive campaign by HSBC with a tolerance message beautifully presented. The subjects are extensive, going from work/play, vacation/hell, art/science, art/garbage and chaos/perfect to traditional/trendy, as in the photo. The whole gallery can be seen here.

A side note for my friends in London: I was only in London long enough to run and catch a flight to Prague :(

If this post looks weird in any way it must be because blogger.com thinks I understand Czech and does not offer an easy way for me to switch to English ... talk about going too far in product usability and assuming user behavior.