The user’s eye

October 29, 2007

The price of music

Filed under: business models,contents — Diego Urdiales @ 0:57

As an internet user and Radiohead fan, John was happy to hear that the band would release their latest album in mp3 format for download on their webpage on a you-set-the-price basis – even for free, if you decided not to pay. He was determined to visit the page as soon as the album was released and download it; he would even pay some compensation for it. However, when the release date came, John was away from home, so he could not download it. Then he forgot, and some days later, while searching for music torrents to download, he thought of it again… only to end up typing “Radiohead – In Rainbows” into the search box.

John’s behaviour is just an example, made up to illustrate how difficult a road record companies and artists have ahead of them if they are to monetise music downloads through the internet – to obtain profit from digital music. The future of file-sharing P2P networks has been a hot topic, open to multiple visions, for a long time now (see for instance these visions from 2001-02). I honestly do not know whether illegal content sharing will one day be put to an end or, on the contrary, contents will become essentially free as P2P networks eat away at legal distribution methods. There are factors pointing in both directions; let me concentrate on just two.

I honestly believe that most people, when confronted with two equivalent ways of doing something, one legal and one illegal, would choose the legal way. Even if the legal way were slightly less convenient than the illegal one, people would still choose to be legal, be it out of deep ethical conviction or just to avoid getting into trouble. So there is hope.

However, time is a more important factor than is usually acknowledged. It is the time that has to go by before legal methods are as convenient and as widespread as illegal ones (iTunes is just one tiny step), plus the head start that P2P software has over the legal methods yet to come. You just cannot convince millions of users to stop using a service that they have learned, become used to, and which takes up an important part of their time online, and get them to learn a myriad of new, different ways of doing essentially the same thing, only more expensively. Legality is a small weight on one scale of the balance; many more weights will be needed to tip it. Users are lazy and have huge inertia – they have invested a lot of time and effort to master Napster, then Kazaa, then BitTorrent, and something really good will have to come along to push them to learn again. In the meantime, they will continue to resort to the P2P search box.

October 22, 2007

The importance of good looks

Filed under: RIA,web 2.0 — Diego Urdiales @ 0:58

Imagine the website of an architect’s studio, an advertising company, or a site to promote the launch of a new product. Inevitably, you think of something very attractive to the eye; whether or not you are aware of it, your imagination will draw RIAs. Rich Internet Applications, a name generally applied to web applications based on advanced Javascript, Flash, AJAX, Silverlight and the like, are changing the way people interact with the web. For a user, the difference between a web page with buttons, text fields and other standard GUI components, and a RIA is like the difference between reading a biography and being the person yourself. Some well-built RIAs truly make users believe they are part of a fully interactive multimedia experience. And that can make all the difference for a user, even overshadowing flaws in functionality, usability or performance.

If the web is the ultimate application platform, RIAs will definitely play a major role in the near future. But attracting users is one thing, and keeping them coming back for more is another. Too often, RIA developers get carried away by the wealth of opportunities these technologies bring and sacrifice other aspects that can make the experience annoying for users once they get past the flashy first impression. Such aspects include the difficulty users have getting used to UI controls that are non-standard in appearance or behaviour, or that ignore the tab order; the perception of the whole web page as a single monolithic block whose components and texts cannot be individually selected or changed; the increased loading times, compounded by the fact that the page cannot be used until it is fully loaded; and the inability to take control of the application in order to, for instance, resize it or go back to the previous step.

Intermediate solutions that integrate RIA technologies into regular web pages (such as pages that use AJAX to refresh one of their components) will help bridge the gap to full-fledged RIAs in the user’s eye.

October 15, 2007

Personalising search

Filed under: google,search — Diego Urdiales @ 0:40

It has been a few months now since Google launched Personalized Search, a service that tracks your search history in the hope of providing you with better search results in the future. Take for instance a search for “java”: if Google had detected that you are more of the travelling (or Indonesian!) kind, it would show you results concerning the island of Java first; if Google thought you were more of the geek kind, it would prioritise results dealing with the Java programming language.

With Google having long said that its ultimate goal is for you to turn to Google whenever you have a problem, personalised search seems a tiny step in that direction. However, it seems most of the users who tried personalised search after its launch were not impressed, according to a poll by Read/Write Web. Most were unable to find any improvement in the search results, some even saying that they had got worse! If we take into account that readers of RW/W are towards the geek side of the spectrum, one can only expect that an extrapolation to a larger user sample would yield even more indifference.

I can understand how this fine-tuning of search results to adapt them to search history may not always be appreciated by users (let alone the privacy concerns already discussed in this space). Googling for information has become an integral part of our everyday lives. We are already used to interfacing with Google to get the information we want, and we have become reasonably skilled at it. We have assumed that there is a special way to “talk to” Google, different from natural language (although some people do ask questions in the Google search box, and Google does well to interpret them). And, most importantly, we are used to the kind of results to expect from Google for a given search query. Objectively, personalised search is certain to yield better results than regular search (especially with time, as we build a more extensive search history). However, subjectively, are users ready to get results that deviate from their expectations? Will they appreciate Google’s effort? It is that subjective subtlety – in general terms, its interface to humans – and not the technological difficulty of personalising search, that poses the biggest challenge for Google in achieving its long-term goal.
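The “java” example above can be sketched in a few lines of code. This is purely an illustration of the idea of re-ranking by search history, not Google’s actual algorithm; the function name, the topic tags and the sample histories are all made up for the example.

```python
# Illustrative sketch only: re-rank generic search results using a user's
# search history as a personalisation signal. Not Google's real method.
from collections import Counter

def personalised_rank(results, history):
    """Boost results whose topic words overlap with past queries.

    results: list of (title, topic_words) tuples in generic rank order
    history: list of past query strings
    """
    history_terms = Counter(
        term for query in history for term in query.lower().split()
    )
    def score(indexed_result):
        rank, (_title, topics) = indexed_result
        overlap = sum(history_terms[t] for t in topics)
        # More overlap with history ranks higher; generic rank breaks ties.
        return (-overlap, rank)
    ranked = sorted(enumerate(results), key=score)
    return [title for _rank, (title, _topics) in ranked]

results = [
    ("Java (island) - travel guide", {"travel", "indonesia", "island"}),
    ("Java programming language", {"programming", "software", "jvm"}),
]
geek_history = ["python tutorial", "jvm garbage collection", "software design"]
print(personalised_rank(results, geek_history)[0])
# The geek's history pushes the programming result to the top.
```

The same query with a travel-flavoured history would surface the island first, which is exactly the deviation from expected results that the paragraph above worries about.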

October 8, 2007

The mysteries of location

Filed under: mashups,web 2.0 — Diego Urdiales @ 0:55

Even if some argue that half the population cannot read maps, we see more and more of them everywhere. After all, maps are a widespread convention for embedding an always obscure piece of information: where something or someone is, i.e., location.

Location information in the form of maps has been available on the internet since the first days when an image could be embedded in a webpage. Much as the web started by mimicking paper publications, maps were embedded as static images, as in printed newspapers, leaflets or brochures. But today’s interactive, dynamic web has dramatically changed this. Tools like the phenomenal Google Maps (or its Yahoo! counterpart) have truly brought mapping to a new dimension, allowing dynamic mapping: re-pointing a map’s centre, zooming in and out, placing information directly on top of a map in a graceful way… Users have quickly become used to this sort of behaviour, and it now feels old and inconvenient to search for a restaurant or a museum and find a plain static map image showing how to get there, instead of an embedded Google Maps-like object.

With location, the power of the tools available has overshadowed the possible implications of their use. As we struggle – inevitably, some users more than others, since we are not all equally gifted at reading maps or interacting with computers – we start to learn the issues that the presentation of location information through dynamic displays brings up.

One of the key issues to take into account when embedding a map is the granularity of location information. A map can show a whole continent or just a few street blocks; the province or just the town centre. Getting the map zoom wrong – the granularity – can turn a useful piece of mapping information into a misleading, frustrating experience for the user. And that is not to mention the subtle differences in user perception conveyed by a map’s zoom factor and centre point: short distances can look long or vice versa, intimate, quiet hotels can appear to be in the middle of nowhere, et cetera.
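Granularity has a concrete counterpart in mapping tools: the integer zoom level. A rough sketch of how a publisher might pick one is below – simplified Web Mercator arithmetic that ignores latitude distortion, with the function name and the 640-pixel viewport chosen just for the example.

```python
# Sketch: choose a sensible map zoom level for a given longitude span.
# Simplified Web Mercator maths; latitude distortion is ignored.
import math

def fit_zoom(lon_span_deg, viewport_px=640, tile_px=256):
    """Largest integer zoom at which lon_span_deg still fits the viewport.

    At zoom z, the full 360 degrees of longitude span tile_px * 2**z
    pixels, so we solve for the biggest z keeping the requested span
    inside viewport_px.
    """
    span_px_at_zoom0 = tile_px * lon_span_deg / 360.0
    z = math.floor(math.log2(viewport_px / span_px_at_zoom0))
    return max(0, z)

print(fit_zoom(360))   # whole world: a very low zoom
print(fit_zoom(0.01))  # a few street blocks: a high, street-level zoom
```

Picking the continent-level value when the user wanted street blocks (or vice versa) is precisely the “getting the granularity wrong” failure described above.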

It is now so easy and widespread – even so fashionable – to embed a map to display information that publishers sometimes do not really consider alternatives to geo-location. No matter whether it is an address, driving directions, stock prices or sales – everything can be displayed on a map. However, using mash-up maps where geography is not an important parameter can be confusing for users.

A further source of mapping confusion comes, inevitably, from the different user interfaces of the various maps and mapping tools. From the different representations and meanings of points and lines on the maps, to the colours used for streets, buildings, land or sea, to the different degrees of interactivity offered by the tools (display only, display and re-point, zoom in and out…), users are confronted with non-standard interfaces and therefore have to learn as they go; of course, not everybody is equally experienced and talented at this.

Dynamic location displays are an extremely useful tool; publishers just need to learn their mysteries. A standard user interface and clear usability guidelines would be a great help for all users to make the most of every map.

October 1, 2007

My self-on-the-net

Filed under: advertising,facebook,privacy,social networks,web 2.0 — Diego Urdiales @ 0:25

Putting my photos up on Flickr, Photobucket or Facebook when I get back from a trip or a gathering, wanting to share them with my friends, comes naturally. It is quite convenient to store them in a common place where others can view them, browse through them and download those they like best. Many people do the same with their pictures, some with their videos as well, even slideshows, bookmarks or audio. Sharing your own generated content: that is what web 2.0 is about.

Blogging is one of the most popular ways of expressing oneself as an individual (also as a group or corporation) on the network. But one need not go through the hassle of reading a person’s blog, if she has one, to learn a lot about her. We can be characterised by our UGC; one could argue that we are our UGC, if we extend its definition to the fruits of our productivity. However, many would shudder if confronted with the fact that their own selves can be exposed publicly on the web – the same people who are happy contributors to web 2.0 sites aplenty.

The goal of this post is absolutely not to show how bold people are when they expose their lives on the web, but rather to point out the problems that may arise when we all start to realise the implications of not controlling the privacy of our selves-on-the-net. And this will happen, inevitably, when people and companies start to make the most of the information that is out there.

Have you ever tried googling yourself? Surely yes. Depending on how frequent a content publisher you are, how keen a member of social networking sites, and how carefully you have set your privacy settings in them (plus, inevitably, how common a name you have), the results may depict an intriguing picture of yourself, perhaps more accurate or detailed than you would want. Now try one of the people search engines, such as Spock. While the information obtained may be essentially the same as with Google, its orderly layout, in the form of a résumé, may scare you even more. All this information is aggregated from various sites where it is publicly available; and you realise there is more that the search engine just has not been able to attach to you, but which is out there, public on the net.
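Conceptually, what a people search engine does with those scattered fragments is simple to sketch. The snippet below is a toy illustration of that aggregation step only – the site names, field names and matching-by-name assumption are all invented for the example; real services like Spock use far more sophisticated entity resolution.

```python
# Toy sketch of people-search aggregation: merge publicly visible
# profile fragments from several sites into one resume-like record.
# Site and field names are made up; matching is assumed already done.
def aggregate_profiles(fragments):
    """fragments: list of (site_name, {field: value}) pairs for one person."""
    profile = {}
    for site, fields in fragments:
        for key, value in fields.items():
            # Keep every distinct value seen for a field across sites.
            profile.setdefault(key, set()).add(value)
        profile.setdefault("sources", set()).add(site)
    return profile

fragments = [
    ("photo-site", {"name": "J. Doe", "city": "Madrid"}),
    ("social-net", {"name": "J. Doe", "employer": "Acme"}),
    ("blog",       {"name": "J. Doe", "interests": "photography"}),
]
profile = aggregate_profiles(fragments)
print(sorted(profile["sources"]))
```

Each individual fragment looks harmless; it is the merged record – city, employer and hobbies side by side – that produces the résumé effect described above.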

When Google starts delivering ads based on your search history, your preferences as expressed in your social network, your saved bookmarks, the feeds you are subscribed to, your internet purchase history and who knows what else; when Facebook people searches start to mix your personal and professional selves due to cumbersome privacy settings and group management on your part; when you start getting targeted, contextualised spam – that is when things will start to get a bit more scary. Privacy is bound to become a hot topic on tomorrow’s internet, even hotter than it is today. And a clear, up-front statement by a company of what information it gathers from you as a user, and what it uses it for, is desirable and will become obligatory – but will it increase users’ confidence? Will users still want to play, once they learn all the rules?

September 24, 2007

Cross-selling

Filed under: business models,mobile,mobile internet,network,TV — Diego Urdiales @ 0:45

Together with the rise in popularity of network operators (Nicaragua is just one of many examples), which operators promptly publicise, and their venture into more and more new markets and sectors – long gone are the days of the telco that only provided connectivity and voice calling – comes a rise in users’ expectations of enjoying a complete, branded experience across all the operator’s services. In many cases, however, this is far from reality.

Too often we hear users asking why, if my mobile operator and my home broadband provider are the same company, I see two separate bills, call different numbers whenever I have a problem, and visit different websites – let alone why the mobile side does not seem to know that I am a broadband customer as well, and vice versa. This truly makes things more difficult, more cumbersome and less appealing for the customer. Unfortunately, changing provider is not likely to help. A provider that takes a definite step forward and manages to smoothly integrate its product and service offering can gain a significant advantage over its competition.

Improving user perception, while important, is not the only opportunity open to operators in this space. With the convergence of sectors previously far apart, and the move of operators from mere network providers to service providers, extensive opportunities for cross-selling are open that operators do not capitalise on. Only recently have we begun to see combined quadruple-play connectivity offerings for an integrated (at least from the sales point of view) experience of fixed and mobile internet, voice and TV. What about blending value-added services, content provision, storage, internet services… into the bundle? It is certainly technical reasons – CRM systems yet to be adapted to radical forms of cross-selling – and not commercial ones that are delaying this move, but I believe there is much to be gained by both users and operators in this field.

Thanks to my colleagues for the ideas that inspired this post.

September 17, 2007

Do you know Wikipedia?

Filed under: web 2.0,wikipedia — Diego Urdiales @ 0:09

Wikipedia, the collaborative web-based encyclopedia project, is, in my view, one of the greatest feats of mankind in modern times. The fact that people have gathered spontaneously to engage in a project as ambitious as building an encyclopedia from scratch, and accomplished it with such success, is one of those things that keep one’s faith in humanity. Personal opinions aside, however, it must not be forgotten that the essence and core of Wikipedia is the ability of any user to edit it.

A few weeks ago, this property that truly defines Wikipedia seemed to have been forgotten by a large number of Spanish media – or did they never know it in the first place? Following the launch of Wikiscanner, a tool that lets anybody see which people or organisations are behind a Wikipedia edit, a bunch of headlines, misleading to say the least, hit the front pages of these media, full of more or less explicit accusations against various organisations of corrupting or manipulating Wikipedia articles. See this article for a great analysis of those headlines. Even more surprisingly, some of the media were thrilled at how easy it is for any user to vandalise Wikipedia, making arbitrary deletions or changing true facts in an article, and proved it by performing some deletions live, in front of the cameras. Needless to say, those journalists, and the media they represent, should be heavily punished, not only for showing such profound ignorance of what Wikipedia is and how it works, but most importantly for threatening the equilibrium and self-sustainment of Wikipedia by tempting other ignorant users (those, however, with an excuse to be so) to vandalise it.

Getting to the point, though, news stories like this one are symptomatic of the perception users have of Wikipedia. Wikipedia is widely known (it is the 9th-ranked website according to Alexa and accounts for an amazing 7% of all web traffic) and used as a mainstream reference, and its quality and reach are so high that, effectively, only the more savvy users who have dug into its history know that it is written collaboratively and that it all began one day from scratch. It just does not look like it. Users expect as much accuracy, veracity and depth of Wikipedia as they would of any other encyclopedia, if not more. So far, Wikipedia has largely managed to live up to those standards.

September 10, 2007

Web 2.0 stress

Filed under: facebook,social networks,twitter,web 2.0 — Diego Urdiales @ 0:17

I see many of my friends, all of them of the non-geek kind, lately complaining about the amount of time they need to devote, at a minimum, every time they log on to the internet. What was previously over and done with after a short check of their (one, at most two) webmail accounts now takes minutes if not hours, as they browse the latest updates from their contacts on Twitter, the latest activity on Facebook and, every once in a while, what is new on LinkedIn, Xing or any other social networking sites they are part of.

In addition to this, the real trouble comes when people, often encouraged by friends or family, have signed up to more and more Web 2.0 sites, and either

  • have forgotten their passwords or login details,
  • have forgotten on which sites they actually had an account, or
  • even more so, have forgotten what each of those sites was about and what made them sign up in the first place.

With an ever-growing myriad of Web 2.0 applications out there – each filling a specific niche, each both slightly overlapping with and slightly different from the rest, all so attractive and so easy to sign up to, each striving to build a user base that can see it through the inevitable bubble burst – many users feel a sort of Web 2.0 stress. They feel increasingly incapable of managing all of their sign-ups, of taking advantage of all that is available to them, and some even feel frustrated by it.

There is certainly a huge opportunity out there for the company or the technology that fills that gap. A few attempts have been made already, from many different angles, and it is still unclear which, if any, will triumph.

The OpenID approach is to provide a single, independent sign-on (authentication) that can be used by any third party. In my opinion this is a great idea, pushed forward in a standardised, open approach. Again, its success will largely depend on its adoption by a sufficient number of players in the space; adoption by one of the big players, with many active users, would certainly be a boost.
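The core mechanism that makes such third-party sign-on possible can be illustrated with a toy example: an identity provider signs an assertion with a secret shared with the relying site, which verifies the signature instead of storing its own password for the user. This is a heavily simplified sketch of the idea only, not the actual OpenID protocol (which involves discovery, association and more); the key, URLs and function names here are invented.

```python
# Toy illustration of the idea behind OpenID-style single sign-on:
# a provider signs an assertion with a key shared with the relying
# site, which verifies it. NOT the real OpenID protocol; simplified.
import hashlib
import hmac

SHARED_KEY = b"association-secret"  # hypothetical key shared by the two sites

def sign_assertion(identity_url, nonce):
    """Identity provider side: sign 'this identity authenticated, once'."""
    msg = f"{identity_url}|{nonce}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

def verify_assertion(identity_url, nonce, signature):
    """Relying site side: accept the login only if the signature checks out."""
    expected = sign_assertion(identity_url, nonce)
    return hmac.compare_digest(expected, signature)

sig = sign_assertion("https://alice.example.net/", "nonce-123")
print(verify_assertion("https://alice.example.net/", "nonce-123", sig))   # genuine
print(verify_assertion("https://mallory.example.net/", "nonce-123", sig))  # forged
```

The point for the user is that only the provider ever sees a password; every other site just checks signatures – which is exactly why one well-adopted standard would relieve the sign-up stress described above.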

A different approach, though in my opinion not a better one, is to create an aggregator of social identities: a site showing information (or even contents) from all the networks you are part of, accessed through a single sign-on. Spokeo is one; see this Techcrunch article for other examples of such sites. To my mind, however, having to remember yet another username and password, even if it saves me from remembering many others, is not enough – unless I can be assured that this is the one; and that, of course, can only be true for an open, standardised solution, to which OpenID is closest.

September 3, 2007

Network acronyms

Filed under: mobile internet,network — Diego Urdiales @ 0:53

Letter sequences originally known only to telecom engineers are now part of our everyday talk: ADSL, IP, GSM, 3G, UMTS, WiFi… Telecom operators and service providers have happily watched each of these acronyms become popular, each creating a new opportunity to sell new services and make more profit from customers.

While most people have learned to live with the fact that some acronym research has to be performed before purchasing or setting up a new internet connection, this process is one of the main barriers deterring many in the late majority or laggard groups from going online (still a huge percentage, if we look at the diffusion of innovations theory), ultimately broadening the digital divide.

A lot of research is being carried out in the fields of seamless network connectivity and handovers, network independence and the like, in order to realise an always-online vision in which network specifics are completely hidden from the user, who just purchases connectivity: anytime, anywhere, broadband. Users should not need to worry about connecting, signal, protocols or network addressing; instead, users want to profit from the services that give the network its full significance.

Soon, that vision will be fully realisable from a technical point of view; what remains to be seen is whether connectivity providers will be willing to do away with an acronym list that brings them regular waves of new profit just for the sake of their users.

August 27, 2007

The social winners’ route to mainstream

Filed under: facebook,social networks,twitter,web 2.0 — Diego Urdiales @ 0:06

Much can be said about two of the most successful so-called Web 2.0 applications of recent months, namely Facebook and Twitter, but little of it negative. Twitter, the application that found its niche amongst blogging, instant messaging, syndication and presence, where no one thought there was room for one; and Facebook, the social network closest to becoming mainstream, judging by its growth rate, thanks to the revolutionary opening of its platform to outside application developers: both prominent social winners.

Some of you reading this will already have contested my statement that Facebook is “closest to becoming mainstream”; others will do so now, if they missed it the first time. Social networking, you may argue, has long been mainstream with mySpace, for instance, which has well over 100 million registered users. That is a reasonable argument. I tend not to agree, though, especially when I try to look at Facebook and mySpace through a user’s eye.

Most will also agree that Twitter is not yet mainstream, even if it is close. The same reasoning applies to this social winner.

I can see many active internet users, especially outside the internet-savvy 13-18 year-old demographic, having a hard time understanding what Twitter is all about, simple as its functionality is. As brilliant a summary as its “What are you doing?” headline is, it is all users have, at the beginning, to grasp what the service is about. It often takes fellow users sitting down (physically or virtually) with the newbies to walk them through Twitter updates, friends, followers, mobile support and the like.

As for Facebook, the very concept of a social network is tricky for an average user, often overwhelmed at the beginning by the wealth of possibilities and applications that Facebook encompasses, and even more so by the apparent mess of the start page. It is not until Facebook becomes a photo-sharing-and-more site, or a birthday-calendar-and-more site, or even a discussion-board-and-more site, to name just a few examples, that the user starts to grasp what Facebook is and to make full use of its richness.

My argument is definitely not that Twitter or Facebook are unusable or non-user-friendly applications. My claim is just that these applications, or any others, will not become mainstream until they are widespread enough (and make enough sense to people) for an average non-user to feel the need to be part of them, strongly enough to drive her through the learning curve. And this has not (yet!) happened with Facebook or Twitter.


Blog at WordPress.com.
