As you know, the mobile web is hot, hot, hot, and mobile might very well replace desktop in the next few years (the jury is still out on that one!).
Some recent stats supporting this trend:
However, there is one slight problem that needs to get fixed before we reach that stage: performance. I guess you've noticed that performance on mobile is not all that great compared to what you get in your desktop browser. As this infographic illustrates nicely, users have high expectations, and mobile performance is still disappointing to most of them (I'm with them on that one!).
I’ve finished watching most of the videos from this year’s Velocity conference, and I couldn’t resist sharing this one with you. John Rauser’s keynote “Look at your data” was definitely one of the most popular. It’s a very good reminder to dig into the actual data to find performance problems. Summary statistics never tell you the complete story!
You can also take a look at John’s workshop on statistics, “Decisions in the face of uncertainty”.
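John's point about summary statistics is easy to demonstrate: two latency datasets can share the same mean while giving users a very different experience. A quick illustration in Python (the numbers are made up purely for the example):

```python
# Two hypothetical response-time samples (in ms) with the same mean
# but very different user experience.
steady = [200, 210, 190, 205, 195, 200]   # consistently around 200 ms
spiky  = [100, 100, 100, 100, 100, 700]   # mostly fast, one huge outlier

def mean(xs):
    return sum(xs) / len(xs)

print(mean(steady))  # 200.0
print(mean(spiky))   # 200.0 -- identical mean...

# ...but the tail tells the real story: look at the worst case.
print(max(steady))   # 210
print(max(spiky))    # 700 -- this is what some users actually feel
```

The mean alone would tell you the two systems are equivalent; only looking at the raw data reveals the outlier that is ruining someone's day.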
Cloud Computing brings many promises to the Enterprise: elasticity and scalability, lower costs, accessibility, agility, etc. But these new virtual environments bring their own load of challenges due to the almost infinite resources they make available:
- Application architectures need to reinvent themselves to take advantage of the whole cloud computing ecosystem.
- Data manipulation and aggregation is no longer limited by the amount of CPU power you have available, but how do you handle such volumes of data? Do we need new data models and algorithms?
- How do we ensure applications perform in these new environments? How do you monitor your performance when dealing with multi-vendor environments?
- Do clouds meet customer expectations? What does it mean for SLAs?
Following my article on high performance at massive scale, I’ve started to get really interested in the type of distributed databases the big web players are using to handle current and future volumes of data. Some of the products developed in my current organization have to cope with large amounts of data (personal, financial, marketing, etc.), with an increasing need to aggregate and link this data to get the most complete picture of individuals and businesses. This is especially true in credit bureaus, fraud detection, marketing, customer management, etc.
Performance and scalability are high on the list of top requirements from the customers we deal with. The largest financial institutions use our products at a global scale, with access through the web. They expect low latency and, of course, a solution that can cope with future growth in usage.
There are today two Internet giants facing scalability challenges every single day: Google and Facebook. I had a chance to touch on some of these challenges talking to some of the Google engineers last week during GTAC 2009. Google is approaching the crazy number of one million servers, so you can bet they have to be creative to handle their data. And deal with the carbon tax.
As I was reading this excellent article from Matt Heusser, I couldn’t refrain from chuckling, as I could relate a lot to what I was reading as an ex-software engineer, tester, and now someone in a management position. Having experienced most aspects of the spectrum, I can claim some objectivity and wanted to add my 2 cents to the article.
Interesting move last week from Apple! They’ve made the source code of Grand Central Dispatch available under an Apache open source license. This is a new technology introduced in Mac OS X 10.6 Snow Leopard to help developers deal with growing multi-core requirements. In a nutshell, GCD is an abstraction of thread management which allows programmers to deal with threads at a much higher level. It introduces a pool mechanism and allows tasks that can run in parallel to be queued for execution. A monitoring and scheduling mechanism then executes them in parallel on the available cores.
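For readers who don't write Mac code, the queue-and-pool idea behind GCD is close to the thread-pool executors found in other languages. Here is a rough conceptual analogy using Python's standard `concurrent.futures` module (this is not the GCD API, just an illustration of the pattern):

```python
from concurrent.futures import ThreadPoolExecutor

# Conceptual analogy to GCD: submit independent tasks to a queue
# backed by a pool of worker threads; the runtime decides how to
# schedule them across the available cores.
def work(n):
    return n * n  # stand-in for any parallelizable unit of work

with ThreadPoolExecutor() as pool:            # the "pool mechanism"
    results = list(pool.map(work, range(5)))  # tasks queued for execution

print(results)  # [0, 1, 4, 9, 16]
```

The appeal in both cases is the same: the programmer expresses *what* can run in parallel, and the runtime, not hand-rolled thread code, decides *how* to spread it over the cores.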