
Who needs shiny new data formats most?

It is perhaps indicative of the modern trend towards transient information – no doubt born of the Internet’s most persistent by-product that all information should be free (and therefore of no value) – that one of last month’s big news stories is now all but forgotten.

What is worse is that it would seem to have been deliberately forgotten by most of the world’s largest and most influential software companies. That possibility suggests that they might just be showing far more interest in turning a dollar now than in the future of information retention for the education and entertainment of generations to come.

That story, of course, was Google Vice President, Vint Cerf, stating that we were in serious danger of losing vast amounts of valuable data because of the dramatic pace of technology advances. In other words, the race to come up with ever-better ways to format and store data will not just obsolete old storage technologies and file formats. It will risk losing them – and the data using them – forever.

Two thoughts can be found lurking around this particular story. One is the fact that Cerf felt the need to make this point in the first place. If you think about it, data storage is an irrelevant pastime if every technology change risks making all previous storage unreadable and inaccessible. It should be such a no-brainer to ensure that every development comes with not just the necessary file reading capabilities, but also a routine that can search out those to-be-obsoleted files on any storage device the user has and reformat them without data loss or corruption.
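The sort of routine being argued for here is hardly exotic. A minimal sketch, in Python, of what it might look like – the extension mapping and the simple file copy standing in for conversion are purely illustrative placeholders, since a real tool would have to ship a genuine parser and writer for each obsoleted format:

```python
import shutil
from pathlib import Path

# Purely illustrative mapping of legacy extensions to modern ones; a real
# migration tool would bundle a proper converter for each obsoleted format.
LEGACY_FORMATS = {".doc": ".docx", ".wks": ".xlsx"}

def migrate_legacy_files(root: Path) -> list[Path]:
    """Walk `root`, create a modern-format copy of every legacy file,
    and keep the original so nothing is lost if conversion fails."""
    migrated = []
    for old in list(root.rglob("*")):
        new_suffix = LEGACY_FORMATS.get(old.suffix.lower())
        if new_suffix is None or not old.is_file():
            continue
        new = old.with_suffix(new_suffix)
        shutil.copy2(old, new)  # stand-in for a real format conversion
        migrated.append(new)
    return migrated
```

The important design point is the last comment in the loop: the original file is left untouched until the new copy is known to be good, which is exactly the no-data-loss guarantee being asked of the vendors.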

The other is the deafening silence that seems to have come from the industry… no howls of anguish that yer man is well wide of the mark, no protestations that the means to solve this exist, no moves towards a lingua franca for each type of data.

Increasingly, it seems to me like they would rather get the money from getting (obliging?) everyone to change to the latest and greatest `cool’ storage formats and actually obsolete not only all the old ones but also the data they hold. It’s only old data after all: bring on the sexy, cool new data.

But it is not `data’, it is history. The smart-ass response to this, of course, is that human history shows that humans find it impossible to learn anything from history, so it is all irrelevant junk that can just be erased.

But sometimes I wonder just how necessary it is – to us as users – that technology marches on. To the vast majority of users, has the development road map between, say, Microsoft’s Word 3 and Word 2010 created many genuinely new ways of writing down information? Yet there has been a new version to buy every couple of years.

For the software vendors this approach has the obvious advantage of keeping the revenue earning potential up, and most of the vendors have expended large amounts of money to keep users in the expectation that new technologies – that they `simply must have’ – will always be coming along.

To be sure, the thought has passed my mind that, as a Google VP, Cerf may even be on a marketing wind-up campaign for a near-future service announcement from his employers.

But does any of that answer the basic question of `why?’ Why do users want to do anything with the technology? In terms of data storage that question becomes: `why does anyone want to save this stuff anyway?’

There are as many answers to that as there are people saving data, but I suspect none of them features words such as “because the storage technology is so damned cool”. And if any of them do have a transitory flash of such a thought, you can guarantee that they will be thinking it about some other storage technology next week.

In practice they want to save it because they know they will want to refer back to it at some time, either for education/re-education of themselves and future generations, or just the entertainment value of good memories. I can still read extracts of my family history from the pages of my father’s old family Bible, yet I can’t always find a way of reading files I created ten years ago. Only this morning my PC asked me how I wanted to view a plain text file – and mangled its on-screen presentation anyway.

So maybe there needs to be a rule – and certainly a rule that all customers need to apply to their future software purchases. No new storage format technologies will be bought unless there is clear, simple and non-obsoletable backward compatibility with all relevant storage formats. If one can go to the British Library reading room and work with books and documents of n-hundred years ago, then the technology vendors have to understand and accept their responsibilities to the future histories of their millions of customers.

One answer – and a leaf they could take from the open source community’s book – is to form an industry-wide format compatibility standards body, an organisation which not only recognises the problem but develops the appropriate technologies and `persuades’ vendors to offer and adhere to them.

Indeed, if they don’t do this they could drastically impede the development of – and need for – new applications, and new versions of established applications. Once users start `losing’ data by having it in now-unreadable formats, they may well start sticking with the old applications… just like n-million businesses did with data written for XP-based applications.

Bet they don’t, however.

Posted in Business.

A role for women in tech – exploiting it

I have, for many years, held the opinion that the IT industry has thought so highly of itself and its technologies – for their own sake – that it has often missed understanding its real place in life and the real contributions it could be making.

What is more, the adoption of such a mindset means that it also misses the opportunity to exploit the talents of people who could put the capabilities of IT to work despite the fact that they eschew the glorification of technology for its own sake.

For example, the other day I got an invitation to attend a conference in the USA about Big Data Innovation. Well yes, the technology is arguably important, but only as a facilitator of something else; and let’s also face up to the fact that `Big Data’ has been an issue for IT systems since the earliest computers running just a few Kbytes of core memory, where programmers wrote whole business systems in Assembler that ran in 4k of memory.

The issue here, of course, is that the data is pretty irrelevant, in and of itself, no matter how much any business or individual has of the stuff. The real trick is being able to analyse it effectively in order to get something of value out of it.

Yet even those companies that are supposed to be the sharpest tacks in the box at such a game – the major retailers – show that despite having huge quantities of information available their ability to find anything of business value within it all borders on the naive. Take, for example, the following recent plea from a Facebook friend:

“Why does advanced analytics of my buying patterns always seem to suggest that I should buy more of something I really could only need one of and have just bought?”

Who hasn’t been there? Having spent time trawling round the web to find the best price and source for a product, and with the purchase made, that lucky retailer then bombards you with blandishments and requests to buy some more of that product.

OK, if the purchase was, say, a bottle of rare whisky, or some difficult to source food stuff, suggesting buying something similar might make sense. But it depends on context: if my purchase was for a Christmas hamper, sending out the suggestion that I buy another or three is pretty dumb.
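A context rule of this kind is not hard to express. Here is a minimal sketch in Python – the category names are entirely invented for illustration – of the difference between `more of the same’ making sense and a one-off purchase where it plainly doesn’t:

```python
# Categories where one purchase satisfies the need for a long while.
# The names here are invented purely for illustration.
ONE_OFF_CATEGORIES = {"christmas_hamper", "washing_machine"}

def contextual_suggestions(candidates, recent_purchases):
    """Drop any suggestion in a 'one per customer' category that the
    customer has only just bought from; keep everything else."""
    blocked = {p for p in recent_purchases if p in ONE_OFF_CATEGORIES}
    return [c for c in candidates if c not in blocked]
```

One if-test of context, in other words – which is rather the point: the dumbness lies in the questions being asked of the data, not in any limitation of the tools.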

One sees the same effect with the advertising on many popular social media sites. Having bought one of product X, adverts from ninety-three other vendors selling product X suddenly appear when you log in.

So, if analytics tools can’t do context by now they should be unplugged and consigned to a life as shelfware.

But I suspect most of them can, it is just that they are not programmed to do so. In other words, the real issue with Big Data and analytics is what questions are actually asked of it all. And the answer would, for now at least, appear to be: `pretty dumb ones’.

And that leads me straight to the next question: how to create the right questions? And the answer to that emerged during a panel session at last week’s Fujitsu Forum in Munich, where the subject of women in technology came up for discussion.

Part of the issue, according to Citrix VP, Jacqueline de Rojas, is that the way the tech industry presents itself makes it appear an unwelcoming place for women to work, and the situation shows no sign of improving. Yet perhaps part of the answer to that problem lies in the very tech-centricity the industry portrays – the notion that doing anything using tech is fiercely complicated and `special’ in some way.

That may have actually been the case once upon a time, but it isn’t now. Indeed, most of the `fiercely complicated’ is now automated out of existence as either a skill requirement or a problem. Now the issue is what you want to make tech do in the real world, not how it is done. Big data analytics is a very good example of this.

It is what questions are asked, and why they are important, that are the key issues here. The `how they are asked and answered’ is just an issue of increasingly automated facilitation. As these real world roles start to emerge so they will, I suspect, be filled by women. Formulating the questions that really need to be asked of big data is likely to be an area where women shine brighter than men.

Yes, I know, men will say that they are more logical and it is logic that is needed in formulating big data analytics questions. To which I would venture the response: `complete tosh’. It is exactly that logic which, I suspect, creates the assumption that, having purchased one product X, the customer must obviously need more product X.

On the other hand, how many men are there who have not faced the situation where a wife, girlfriend, sister or mother has, after listening to 20 minutes or so of male waffling, asked that one question which neatly nails said male to the wall? Not many, I suspect.

Women can be truly expert at hitting the real point of an issue, and could be an important secret weapon in exploiting the technologies behind Big Data and analytical tools. Indeed, when it comes to really putting the technologies to work in the real world women may well prove to be far better at exploiting the potential than the majority of men.


Office + Rackspace… ticking the `Microsoft’ box?

The recent announcement by Rackspace that it has launched a new business unit – Cloud Office at Rackspace – in collaboration with Microsoft can be interpreted a couple of different ways. And depending on what comes out of Microsoft next month in terms of CEO Satya Nadella’s keynote presentation to its Future Decoded event at London’s ExCel Centre, those different ways may in fact be quite complementary.

The arrival of the Cloud Office unit at Rackspace is, in many ways, not a dramatically revolutionary step. The company is offering users the chance to build out Office-based business services on the back of hosted Microsoft Exchange and Rackspace email services, Microsoft’s Lync and SharePoint collaboration tools and the Jungle Disk backup service.

These are, of course, all services that Rackspace has had available as `product entities’ for some time; the one difference now is that its Fanatical Support service has been extended to encompass increasing levels of consultancy capability. This represents a subtle but important change towards taking on the engineering of services for clients from the start, rather than the classic support model of picking up the pieces once the user’s own attempts at self-engineering have unequivocally expired.

So at one level this move can be seen as just a marketing exercise, packaging up what is already available into a more readily understood and digestible entity for potential customers. There is nothing wrong with that, of course, but it does beg a couple of questions of its own.

Does it mean, for example, that the price wars starting to build up in the datacentre infrastructure business – the selling of brute compute time – are starting to get painful? It would not surprise me if this is a component in the mix for Rackspace, though on its own it might prove to be a bit risky as a source of new business because it could be seen as trying to eat the lunch of some of its customers, such as specialist service providers already providing cloud services based around Microsoft Office 365.

Microsoft already has many partners – such as BrightStarr, Appura and Carrenza, to name just three – servicing the Office-oriented cloud service market. Rackspace will have to go some to compete on service capabilities and expertise when up against specialists that have niche track records.

The other possibility, however, can be surmised from the fact that the early responses to the previews of Microsoft Windows 10 are looking good. Add to that its rebranding of Nokia mobiles under the Lumia tag, and Microsoft would certainly seem to be showing every sign of being seriously committed to mobile as a core part of an overall plan.

My last blog set out my thoughts on the possibilities that might stem from a combination of Windows 10 and a soup-to-nuts common software infrastructure running from mobile front-ends, through the delivery of Office applications in the cloud, to back-end, back-office applications servers. The potential to give often very `small-c’ conservative users – deeply committed to Office for years – a route out of their commitments to old on-premise Windows platforms and onto the cloud could represent a big market, and tick a lot of boxes for those users.

And let us not forget that there is also Microsoft Dynamics, which can bring ERP, CRM and other serious back-office tools into this mix. A cloudy-Office front end could give those tools significant synergistic leverage into the cloud marketplace.

So does this new development from Rackspace – packaging up mostly existing offerings into more of an obvious meal rather than a bunch of ingredients – suggest that service providers are being persuaded Microsoft might just be about to get it right? It is certainly possible, and it would certainly make for a synergistic marketplace.

Literally millions of user businesses would have a way forward into far more flexible and agile operations without having to abandon the existing processes, operating methods and staff skill sets that they have built up over the years… well, at least not until they feel ready to do so. Microsoft gets to keep its incredibly strong hold on the heartland of day-to-day business operations, and the service providers get a wide range of entry points into the market – from providing an optimised set of core hardware and software resources for specialist service aggregators through to becoming frontline full-service operations in their own right.

To add a touch more grist to this mill, news has arrived that Microsoft is announcing at its current TechEd conference in Barcelona a number of developments that point in this general direction. For example, there is now to be general availability of Office 365 APIs for mail, files, calendar and contacts. These should allow developers to aggregate applications together to build more comprehensive services out of known and understood entities.

Also, by early next year the company will roll out built-in MDM capabilities for Office 365. These will allow organisations to manage data across a range of phones and tablets, including iOS and Android devices. Furthermore, Intune will soon offer application wrapping for customers’ line-of-business apps and new mobile apps for securely viewing content.


So, the groundswell underneath a possible resurgence of Microsoft as a major cloud services player (as opposed to just a vendor of `cool cloud technologies’) may just be really starting to move. If this is not what Nadella talks about in London in two weeks’ time, then it may be time for business users to seriously look elsewhere… with some urgency.


Soup-to-nuts, but Microsoft must market it right

The publicity image attached to the latest Microsoft announcement of Windows 10 suggests that the target really is a soup-to-nuts solution that aims to provide a complete, seamless, no-differences environment from the smallest usable mobile device through to the most powerful desktop workstation.

If it really is the full dinner – a decent Prosecco to start, a light but tantalising aperitif, a rich and memorable entrée, perhaps a dessert (I don’t really do desserts), cheese and nuts, all washed down with some good, honest wines – then the company may be in for a renaissance in its fortunes and image of a kind not seen since IBM recovered from its $4 billion one-year loss some 20 years ago.

And if it is successful it could even be the last `version’ of Windows to ever appear.

If it fails, however, this will almost certainly be its last hurrah, and while it may not `crash and burn’ it is very likely to fade away till it can be acquired by a private equity shop to be broken up for what can be salvaged.

Only time will tell the final outcome, of course, but the ball is very much in Microsoft’s court. It still has a great deal of `legacy’ rolling in its favour, particularly with Office in the business world, and it now all depends on how the company is not only planning to exploit that legacy, but also how it executes. The one thing it has to do, at the highest corporate level, is shake off once and for all the assumption that the cloud is just another way of shifting applications `boxes’.

Microsoft is certainly not alone amongst the established software vendors in making this mistake, but nothing could be further from the truth. The cloud is about delivering services to end users in a form that they can exploit with the minimum of fuss and the maximum of value. To that end, a soup-to-nuts work environment (and to call it an `operating system’ is – or damned well ought to be – now a grave misconception) has the opportunity to provide one of the most important services of all.

That is the ability for businesses to run their known, trusted and valued applications on any class of platform suited to the users’ needs, be that mobile, tablet, laptop, desktop, super high-end desk-side workstation or virtualised instances run back in the datacentre. The choice must not restrict the user from doing what is best for them and the tasks confronting them. And the actual choices here are, in the end, not about the platform but about the application, for that is what the tasks are built upon.

Here is something I wrote in Cloud Services World over a year ago, when it was announced that Microsoft was acquiring the mobile phone business of Nokia.

“Such a package could bring a real solution to the dilemma on BYOD that many businesses express – `damned fine idea but the organisational, operational and security issues involved are still far too scary for us to seriously contemplate it’.

Here is a way of delivering solutions to most of those problems in one hit – something that appeals to the natural conservatism of business users.”

Those problems have not really gone away, and CIOs’ desks are now cluttered with exhortations to buy an increasing range of point solutions to them. So, what if they know that the business is, and has been for years, geared to running Microsoft Office applications and other tools well-integrated with the Microsoft environment; if those applications, in their latest iterations, run on Windows 10 and in the cloud on services such as Office 365, Windows Live, Azure or any of Big M’s service provider partners; and if they just run on any form factor that suits the individual end user’s needs and require no `engineering’ to get that form factor integrated into the corporate environment…?

If the engineering costs of moving to the cloud can be slashed, if the user education/training costs can be slashed (because it is the same app on everything/anything) and if the data formats etc are either the same – or any differences are automatically accommodated and therefore invisible – this could be a significant godsend for many CIOs and IT managers. It can be a very significant task that they don’t have to think about and plan for, and the operational continuity with the here-and-now could make them a hero with the business managers.

Another cost-saving – OK defraying from one budget to another – is a possibility I also raised last year. This is the notion that as part of the overall service, Microsoft would work with service provider partners, especially the bigger Telcos, to provide users with devices, the cloud services and additional applications as part of a single contract.

In other words, just like with mobile contracts today, businesses and users would not need to buy their hardware, they just get everything they need as part of `the service’.

And lastly, that remark about it being `the last version of Windows’? Coupling a cloud-oriented, soup-to-nuts environment with the coming of Continuous Delivery (CD) of software upgrades – where tweaks, adjustments, additions and, yes, removals of code are uploaded to end user systems as soon as they are ready – means that everyone can always be on the latest version. And because it is a `drip-feed’ of upgrades, the bouts of dread and nerves that accompany Patch Tuesday can be set aside.

With early reports of Windows 10 suggesting it is stable and `works properly’, everything now is likely to depend on how Microsoft goes about the marketing. If it tries to sell this as a `product’ it will almost certainly be doomed.


Is Tibco a worrying sign of a different malaise?

The news that Tibco’s sales and profits are down has left financial analysts wrong-footed, and made them instrumental in a fall of more than 4 percent in the value of its shares.

It has also started to beg an important question: should Tibco follow Dell’s example and jump from the stock market casino ship in order to get itself through the fundamental business changes it – and the rest of the software industry – now faces? Indeed, is it time for all software companies to do the same?

I accept that the phrase, `being instrumental’ is quite a significant accusation but I would contend it has some serious foundations. The fact that so many stock traders, especially in the USA, follow what the financial analysts say is now becoming a danger to the software industry generally. The reason can be found in this judgement on Tibco by Trade-Ideas LLC, which has branded the company as a `roof leaker’.

This is defined as `a stock worth watching because it begins to experience a breakdown which can lead to potentially massive losses. Once psychological and technical resistance barriers like the 200-day moving average are breached on higher than normal relative volume, the stock may then be subject to emotional selling from investors that can continue to drive the stock lower.’

And therein lies a deadly combination for all software companies. The combination of financial analysts and emotional investors will necessarily be volatile, and if the analysts do not understand the major changes an industry sector is undergoing, then many of the businesses in that sector may find themselves doomed, even if they are all starting to go in the right direction.

In a company statement Tibco Chief Executive, Vivek Ranadivé, attributed the revenue drop to the company’s move to a subscription-licensing model. Given the way that cloud-delivered services are becoming a mainstream option for the way businesses consume the capabilities of software – and heading towards a largely dominant option in the near future – moving to the subscription model is an inevitability for all software vendors, whether they like it or not.

The real trouble here is that financial analysts are showing continuing evidence of not understanding what is happening in the digital-business marketplace, especially when it comes to the switch from licence sales to the annuity model of subscription licensing for revenue generation. This issue will affect all software vendors over the next five years, and has been eminently predictable over the last five at least. It must be at least that long since I first wrote that it would happen.

The problem is simple: traditional licence sales generate hugely positive revenue `hits’ on the bottom line of software companies. That makes it easy for the analysts to determine successful ones, for their bottom lines keep growing. It also makes it simple to observe failure, for the obverse is true, and predict future success or failure from any trends between those two points.

But the annuity model – paying either a regular monthly subscription or a direct pay-per-use model – generates a regular monthly income that, hopefully, grows as more customers start using it. This means revenue is more predictable, if more slowly accrued. The downside, however, is that the switch between the two revenue models has the inevitable hole that all financial analysts seem to fall in. That huge, upfront revenue hit declines, and then disappears.
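The shape of that hole is easy to see with some toy numbers – the figures below are entirely illustrative and not any vendor’s actual pricing:

```python
LICENCE_PRICE = 1200.0  # illustrative one-off licence fee
MONTHLY_FEE = 50.0      # illustrative subscription fee, same customer

def cumulative_licence(months: int) -> list[float]:
    """One big upfront revenue hit, then nothing until the next release."""
    return [LICENCE_PRICE] * months

def cumulative_subscription(months: int) -> list[float]:
    """The annuity model: comparable money, accrued month by month."""
    return [MONTHLY_FEE * (m + 1) for m in range(months)]
```

On these numbers a single subscriber takes two full years to deliver the revenue a licence sale delivered on day one – and every month of that gap reads, to a short-term analyst, like decline rather than growth.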

So a business can be actually growing – as a business – while appearing to be on the fade in terms of purely short-term financial numbers.

And the short-term emotional mindset of stock traders that follow the analysts is that the last one out of a stock whose numbers have negatively changed (even for a predictable and understandable reason) is a sissy – and even worse, a fool.

This decouples investment resources from any real view of what is now happening in the marketplace. It also means that what is happening to Tibco is going to happen again and again, and the bigger the `name’ the harder the analysts and traders will make those companies fall. Some big names are likely to hit this trap and be bombed severely, leaving them prey to the hedge fund and equity management cowboys who will happily asset-strip them, leaving them as smoking wrecks.

And it is to be noted that CEO Ranadivé is already finding himself pushed by investors into considering partnerships with, or selling off at least parts of the business to, hedge funds.

I have two suggestions on what not just Tibco but the whole software industry needs to do about this as a matter of some urgency.

The long term option is difficult. I have written before that the industry really needs to start educating the financial analysts before they wreck the software business permanently. And they will: they will create stocks worth only `fire-sale’ values, so that valuable technologies and IP will drift away to geographies that will enjoy having western businesses and economies as supplicants.

The short term option, and one I would suggest to Ranadivé if he were here now, is to follow the route taken by Michael Dell and pull the company out of the stock market. That way it can be re-built in the light of the drastic transformations software and services delivery is undergoing, without having to explain every step to people operating within a three-month event horizon.

In fact, I would recommend that action to every software company in the world.

And as a final irony, Tibco has just been named by research company, Gartner, as a leader in its 2014 Magic Quadrant for Social Software in the Workplace – the very type of software that is paid for by subscription or pay-per-use.

Now, regardless of anyone’s degree of scepticism about Magic Quadrants, this simple detail says much about the level of misunderstanding the financial analysts have about what is going on now.


The white van movers

A particularly good example of how cloud service providers can become the glue – not just linking partner businesses technologically, but acting as a core part of ongoing business management and a creator of new business opportunities – can be found in NetDespatch.

The company is also something of an object lesson in why exploiting cloud services has far less to do with any technology and everything to do with identifying where the real major business advantages lie.

The company provides a SaaS logistics management service for the parcels courier services world, or as CEO Becky Clark puts it: “we provide the data stream underpinning all their activities. We facilitate the process by providing all the back office functions that are required.”

This may at first seem a rather mundane marketplace but, as Clark points out, it is in fact huge, and global, with estimated total world revenues of $1.2 trillion last year.

“We now manage the shipment of millions of parcels a month, and currently have 130,000 retailers worldwide using it for free,” she said.

That last observation is the key to the issue of identifying the real marketplace for any cloud service, for it is not always the obvious target. It would be easy to assume that it would be the retailers which have the most vested interest in signing up for such a service, as they are the ones who need to organise the physical link between themselves and their customers. But in practice, NetDespatch identified that the real target market for its services are the couriers, the owners of the millions of vans and lorries scampering around, 24 hours a day.

By using NetDespatch, they not only get a centralised logistics management service, but also get the chance to add services provided by NetDespatch that they can sell on to retail customers as value-adds to their core offerings. And as it is a cloud service, the costs are minimal and on a pay-per-use model.

The core NetDespatch service, therefore, can also act as the backbone of a service aggregation environment that the couriers can build up and offer to their retail customers. It provides a full end-to-end logistics management service, but also comes with a variety of client services.

The company has also already formed partnerships with many of the leading SaaS front office and business management providers, such as Netsuite and Salesforce.

“This is the obvious route to take,” said Clark, “each providing their core skill, especially when it comes to omnichannel marketing which is now becoming a key tool for retailers. Why should we try and learn to build systems they already provide? Using the cloud makes this very easy.”

It is also a service that is equally at home with small carriers and some of the largest: Royal Mail’s ParcelForce, for example, is a user. Straddling the small and large carrier divide is another customer, APC in the UK, which is in fact a cooperative of 120 individual courier companies across the country.

“That would be impossible to do without the cloud,” Clark observed.

The cloud even allows the company to work with some unusual business models. For example, the smallest start up courier will pay the most, though that is still only 25 pence per parcel. As such companies grow, however, the cost per parcel comes down.

“We want to push our customers towards growth,” Clark said, observing that the cloud is such a good environment for start-up businesses that the top price NetDespatch charges per parcel is still very low. In addition, as a pure-play SaaS provider, it can start up a new customer on the service in as little as 10 minutes.
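The pricing model Clark describes is, in effect, a simple volume tier. A sketch of the shape of it – only the 25 pence top rate comes from the company; the tier boundaries and lower rates below are invented purely for illustration:

```python
def price_per_parcel(monthly_volume: int) -> float:
    """Per-parcel price in pounds: the smallest start-up courier pays
    the most, and the unit price falls as the courier grows.
    Only the 0.25 top rate is from the source; the rest is invented."""
    tiers = [(1_000, 0.25), (10_000, 0.18), (100_000, 0.12)]
    for ceiling, price in tiers:
        if monthly_volume <= ceiling:
            return price
    return 0.08  # largest carriers
```

The design choice worth noting is that the incentive runs in the customer’s favour: growth is rewarded with falling unit costs, which is precisely the `push our customers towards growth’ point.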

The company is also looking at additional services it can provide for its customers. For example, it has accumulated some 10 years’ worth of data on parcel deliveries and is looking at ways this can be exploited. Clark is therefore starting a SaaS-delivered analytics service for couriers and retailers, which will look at such areas as consumer purchasing patterns and aim to model them so that customers can identify and plan for future business opportunities.

In this way the company is something of an object lesson in how SaaS can be a core part of entire industries and the glue that holds them together. It provides companies whose core business is far removed from IT or the glitzy financial, media or knowledge industries with the underlying skills and services they require, plus additional services that extend their `stickiness’ to retail customers.

And as the wealth of data generated mounts up, the ability to sell analytics services based on it becomes another revenue stream to exploit.

Posted in Business.

Is this the most important development yet?

Seamless device-to-device connectivity between whatever devices a user feels should be seamlessly connecting is an obvious goal – for the users, at least.

It is still the case, of course, that the majority of vendors consider this notion the ultimate horror, for it denies them the chance to `own’ as many user process steps as possible, even if their solution to any particular step is next to useless.

To the vendors, it has become traditional that `proprietary’ is seen as perfect. So it is good to see a group of them biting this particular bullet in an appropriate fashion, by establishing the Open Interconnect Consortium to advance interoperability for the Internet of Things. If the bite is hard enough, it could create one of the most important developments – both physically and conceptually – that users have seen in a long, long time.

The consortium has been set up to define the connectivity requirements needed to ensure the interoperability of billions of devices projected to come online over the coming years – from PCs, smartphones and tablets to home and industrial appliances and new wearable form factors.

If the results of its labours come to positive fruition, this could be one of the most important developments to occur since the cloud emerged. The core benefits of the cloud – interoperability and collaboration between any and all applications and services, and by definition the devices on which they run – are the keys to an information management and utilisation environment whose potential can only be imagined at the moment.

It intends to deliver a specification, an open source implementation, and a certification program for wirelessly connecting all such devices. The first iteration of the code will target the specific requirements for smart home and office solutions, with more use case scenarios to follow.

The launch members of the Consortium include a good collection of big name industry players, including Atmel Corporation, Broadcom, Dell, Intel, Samsung, and Wind River. They are joining forces to focus on defining a common communications framework based on industry standard technologies to wirelessly connect and intelligently manage the flow of information among personal computing and emerging IoT devices. This will be regardless of form factor, operating system or service provider.

Member companies will contribute software and engineering resources to the development of a protocol specification, open source implementation, and a certification program, all with a view to accelerating the development of the IoT. The OIC specification will encompass a range of connectivity solutions, utilising existing and emerging wireless standards. It will be designed to be compatible with a variety of operating systems.

According to the OIC, leaders from a broad range of industry vertical segments – from smart home and office solutions to automotive and more – will also participate in the program. This will help ensure that OIC specifications and open source implementations will help companies design products that manage and exchange information under changing conditions, power and bandwidth, and even without an Internet connection.

It is certainly to be hoped that these representatives both become the real driving force behind the standards that are set, and understand the power and influence that is now being offered to them.

The first OIC open source code will target the specific requirements of smart home and office solutions. For example, the specifications could make it simple to remotely control and receive notifications from smart home appliances or enterprise devices using securely provisioned smartphones, tablets or PCs.

Possible consumer solutions include the ability to remotely control household systems to save money and conserve energy.

In the enterprise, employees and visiting suppliers might securely collaborate while interacting with screens and other devices in a meeting room. Specifications for additional IoT opportunities including automotive, healthcare and industrial are expected to follow.

Posted in Business.

Raw Engineering’s product `democratises innovation’

In many ways, the following is the complement of the previous blog entry, where Tibco’s CTO, Matt Quinn, discussed the coming changes in cloud infrastructure. For this is about the changes in applications development – and the move towards continuous delivery of applications. It also demonstrates the increasingly interconnected nature of all the developments now occurring in and around everything that could be labelled as `cloud’.

It first appeared in Cloud Services World a couple of months ago, but I feel it describes a still-coming trend rather than part of the mainstream. I had gone to meet up with San Francisco-based Raw Engineering’s CEO, Neha Sampat, and COO, Matthew Baier, at Cloud World Forum, with the objective of finding out what its primary product did for businesses. The first issue, therefore, was to try and pin down just what it is.

After some time spent brain-storming that subject, the best we got to was: `a business-focused, policy-driven analogue of Visual Basic’.

For any business people who have no idea what Visual Basic is, it is an applications development modelling tool with which developers can visualise the process flow of the application they have in mind, and build that visualisation on-screen by connecting together blocks of process functionality. Raw’s product follows that concept, but targets tech-savvy business users as well as developers. It also advances the concept a good deal, in that the application processes are far richer and more complex, especially when it comes to building business applications that integrate with mobile applications and services.

This, therefore, is getting ever closer to the situation where, as part of that need for continuous delivery of developments – and here `developments’ are often small tweaks to the code to improve an app’s operations, or perhaps a temporary adjustment to meet the needs of a specific short-lifecycle project – those tech-savvy business users really can start doing it for themselves.

It also makes it a development environment that seems tailor-made for use by third-party channel and integrator partners. These businesses are ideally placed to exploit the need for continuous delivery, for they are more likely to have a detailed understanding of specific market sectors and their business processes. Baier indicated that this is now the direction the company is heading.

“The company started as a consultancy, specialising in mobile applications,” said Sampat, “and we soon found that with every project we were building the same back end stack. So now we have made it a product that others can build applications on. It is like Visual Basic for big users, and we see it as the democratisation of innovation.”

The system currently works for apps development with Apple’s iOS and Android, and according to Baier there are no technical issues about producing a version that works with Microsoft Windows 8. “It can be done easily once there is demand for it. We are just starting to see that happening,” he said.

The company has also just formed a partnership with AppGyver which takes this Visual Basic analogy even further. AppGyver is a provider of innovative front-end development tools, aimed at bringing rapid visual mobile app development to the enterprise. Using it, companies can now create sophisticated enterprise mobile applications in minutes, instead of weeks or even months.

In the partnership, Raw Engineering gets a richer apps development front end for its product, while AppGyver gets a persistent back end for applications designed using AppGyver’s Composer tools, plus a data store with an enterprise-grade security model and fine-grained access controls, and a direct connection between Composer’s UI components and the product’s database for application scalability and persistence.

“There’s a huge need for our services in the enterprise,” said Marko Lehtimaki, founder and CEO at AppGyver. “We’ve been working closely with Raw Engineering to more broadly address enterprise requirements with things like extended security, so businesses of all sizes can take full advantage of our app development tools.”

This combination maps even more closely onto the growing DevOps trend now sweeping across enterprise applications development as businesses realise the old development cycles, where an application could take a year – and often more – to reach production status, are no longer workable. With enterprises requiring much greater business agility, development cycles need to be brought down from years to days. It is now increasingly the case that the lifecycle of an application – from conception to final termination – is measured in months or weeks rather than the decades that were common in the time of legacy on-premise applications.

“The product has been developed with these shorter development cycles and lifecycles in mind,” Sampat said. “It has been used a lot to develop management and information handling applications for exhibitions and conferences like this one, where the lifecycle for some parts of the applications really is just the two days of the event itself.

“The fastest applications development cycle we have seen with it so far has been just four days, but the typical improvement we find with customers is that an application which would have taken 12 months can now be completed in one month,” she added. “This is because they are working with ready-made components.”

Posted in Business.

The future of cloud is small, lots of small

Are the days of the large, powerful server and the complex, all-encompassing operating system finally about to come to an end? Is the ultimate architecture of cloud delivery systems going to be something rather different – perhaps many millions of small micro-servers running single function applets rather than huge brutes each running many tens of virtual machines?

One person who feels that this is the way cloud delivery is moving is Matt Quinn, CTO of Tibco, who sees a future coming where the operating system is replaced by the browser, and the server needed to run it is a single-core `shadow’ of its current self.

“There are some developments going on where it becomes possible to run all the functionality needed to run an app inside the browser,” he told me at the recent Tibco Transform event held in Paris. “That raises the possibility that the operating system as we know it now becomes irrelevant, which also means that the large commodity servers running lots of virtual machines that are common today will also become irrelevant.”

This could have stayed an interesting item of speculation about the future had it not been for a passing comment later made by Toby Owen, the Head of Technical Strategy in EMEA for Rackspace. He referred to the company’s acquisition last year of a company called ZeroVM.

“This uses a container-based approach that won’t need an operating system or a hypervisor,” he said. “It is not a product yet, but we do plan to introduce a sandbox environment soon so developers can investigate it.”

The software foundations for the fundamental change Quinn foresees are therefore very much in place already. The next question, therefore, is what can be expected from the change?

This can be divided into two areas: the hardware and software architectures, and the operational uses and changes that become possible.

In the architectural camp the main change will likely be the demise of the large servers running many VMs. Those machines may be considered commodity items, but they are still not cheap to buy, and the bigger they get the more complex the operating system and VM management environments have to get to make them work even remotely efficiently.

It would not surprise me if in the near future the issue of poor server utilisation raises its head again as the machines end up spending more and more of their resources managing themselves rather than doing productive work.

Instead, the servers will become tiny machines, maybe even using just a single processor core and capable of running just a single process at a time, inside a browser rather than an OS. How small will they be? Well, let us not forget that last year Intel demonstrated a single-chip PC in an SD card form factor. Given the very nature of semiconductor production processes, having proved it can be done it becomes possible to make them in huge volumes – and the bigger the volume, the lower the unit cost.

Rather than datacentres boasting of having 4,000 servers available, how about boasting of having 4 million, or 40 million?

As for managing that type of environment, most of the concepts are already well-established in the form of parallel processing environments in the world of supercomputing. Managing many thousands of process threads simultaneously is meat and drink to that world.

And there is already a growing realisation that one of the underlying process management changes that comes with the cloud is that the lifecycles of applications are getting shorter. Long gone are the days of 18 months of requirements planning, five years of coding and testing and `n’ years of work in production. Instead, the lifecycle of an application – from conception to termination – can be measured in months, often weeks. Soon enough it will be days or hours.

Applications will be designed to perform just one specific task `now’, not cover off every base of possibility for the next five years. The lifecycle will be: `create – load – do the job – out – delete’.
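That `create – load – do the job – out – delete’ lifecycle can be sketched in miniature as a single-task micro-server. Everything here – the `Applet` type, the function names – is illustrative only, not any real API.

```python
# A minimal sketch of the single-task lifecycle described above: each
# micro-server runs exactly one applet, returns its result, and is
# cleared down. All names here are hypothetical.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class Applet:
    """A throwaway unit of work: one name, one task."""
    name: str
    task: Callable[[Any], Any]


def run_on_microserver(applet: Applet, payload: Any) -> Any:
    # load: the applet arrives on an otherwise empty single-core node
    result = applet.task(payload)  # do the job: one process, one task
    del applet                     # out / delete: nothing persists afterwards
    return result                  # transmit the result, then clear down


# usage: an applet that exists only for this one job
double = Applet("double", lambda x: x * 2)
print(run_on_microserver(double, 21))  # -> 42
```

The design point is that there is no resident state to manage: once the result is transmitted, the server is as empty as it was before the applet arrived.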

That does not need huge servers, or complex operating systems. What it will need instead is an over-arching, policy-driven, analytics-based, event management environment to oversee the running of processes in order to achieve a business objective.

When it comes to what such an environment could achieve for business, the obvious starting suggestion is cost savings. This has always been the first suggestion with cloud (years ago I thought that too), yet in practice it never works out that way. At one level, the cost of using one SD card `server’ for a day, in a box of several hundred thousand, will probably be measured in fractions of a penny, but you will end up using many more than just one, I am sure.

So forget costs as an issue and instead think of revenue and profits. They come from greatly increased flexibility and agility. If one of these servers can load a function applet, run it, transmit the result, clear down, and be loading the next applet, all in half a second, the scope for its application really is only limited by the imagination of those applying it.

The practical upshot of this is likely to be that future apps will be written by the end users – where `written’ actually means something like `conceptually outlining it’. As Quinn observed, “we are reaching a stage where all applications will be in beta. In practice, by the time they reach a point of being `finished’ that will probably be the time they are killed off.”

That means the technology underpinning business processes will be able to change as rapidly as business people want to change processes. And getting a process `wrong’ will no longer presage the imminent death of the business, for each problem app will be small, and even if it cascades across many servers and processes the policy management system will probably spot the problem, stop it running and flag the issue to the creators. A good one may even suggest a remedy.

Security, too, becomes easier to manage. If all of these tiny servers are single function and running the same browser, defending them against malicious attack becomes simpler. And if something malicious does get into an individual server, it becomes easier to isolate and destroy. The management environment would probably end up achieving that without operators or users noticing anything had happened.

That management environment would become an obvious target for hackers – but it would be a centralised resource and therefore easier to defend. And in addition, as a policy-driven system it would largely self-defend anyway… “Am I supposed to be doing that? No? Kill it then.”

This is still, of course, largely speculation on my part, but it is now clear that something like this is starting to roll.

Posted in Business.

Now homes can look after themselves – and you

Since at least the days of the UK Government-promoted Year of Information Technology, back in 1982, the notion of controlling the functionality of a home – remotely setting the temperature of the central heating, turning on the oven to start cooking a pre-prepared dinner and many other domestic tasks – has been a long-held goal.

The US electronics giant, Motorola, had at least one fully automated, experimental house out in the Arizona desert at around the same time. And now, many eyes are being turned again towards the USA as the talk is of Apple coming up with a domestic control system that users could manage using their iPhone.

This level of interest in things domestic comes at a time when the combination of domestic management and cloud services makes for interesting possibilities. It also comes at a time when a Zurich-based company, DigitalSTROM, is already starting to build a market for just such a system. It has already gained good traction in Germany and Austria, and will be launching in the UK and other European countries later this year.

The key to the system is a simple device that looks like a cable terminal block connector. This is not that simple, however, for it contains a programmable switch/controller for the device it is associated with. This is an addressable switch that allows the device to be both individually controlled and operate as part of a group. The classic application here would be lights operated in clusters with control coming from movement sensors.

It forms half of a master/slave architecture, with the master being the DigitalSTROM meter, which is both energy consumption meter and communications master for each circuit.

This technology is in practice not too far removed from what was conceived in 1982 (and demonstrated in the Barratt Homes demonstration house at the Ideal Homes Exhibition that year) though it is significantly more comprehensive in its capabilities from what was achievable back then.

For example, the master controller manages the operation of all instructions, but as part of that process sets them against a set of rules designed to ensure the safety and security of both people and property.

The rule with the lowest priority covers energy consumption. Next is occupant comfort, followed by occupant privacy. Then comes damage protection and finally the highest priority rule of all operations, occupant safety. The parameters of each test can be adjusted to suit the preferences of the occupants. The master system can also orchestrate services such as the clustered operation of lights.
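The layered rule checks above can be sketched as a simple priority arbitration: a requested action is vetoed by the highest-priority rule that objects to it. The rule ordering comes from the article; the arbitration function and its inputs are invented for illustration.

```python
# A sketch of DigitalSTROM's layered rule checks: a requested action is
# tested against the rules in priority order, and the highest-priority
# objecting rule decides the outcome. The ordering is from the article;
# the arbitration mechanics are hypothetical.
RULES = [  # lowest priority first
    "energy_consumption",
    "occupant_comfort",
    "occupant_privacy",
    "damage_protection",
    "occupant_safety",  # highest priority: always wins
]


def arbitrate(action: str, objections: dict) -> str:
    """Return the verdict of the highest-priority rule that objects."""
    for rule in reversed(RULES):  # check the highest priority first
        if objections.get(rule, False):
            return f"{action}: blocked by {rule}"
    return f"{action}: allowed"


# e.g. extending the awnings may save energy but risks storm damage:
print(arbitrate("extend awnings", {"damage_protection": True}))
# -> extend awnings: blocked by damage_protection
```

Because the rules are checked from the top down, an energy-saving objection can never override a safety one, which is exactly the ordering the system enforces.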

What makes DigitalSTROM different is its use of the cloud to integrate the house, the occupants and the rest of world into a virtual entity, with complex management and orchestration of operations. The company has partnered with Tibco to provide what is, in effect, a domestic Internet of Things (IoT) environment for each house. This exploits Tibco’s capabilities in data collection, collation and management from all the individual device controllers, its data analysis capabilities based on both the collected data and external data feeds, and its event management capabilities to control what actions occur at the house.

As an example of what this might mean in practice, the occupants of a house may have set a programme of house events for the day ahead, such as what temperatures should be maintained and when (which could include extending awnings over windows to keep direct sunlight out) and what time the slow cooker should go on and at what setting.

But external data feeds, such as weather forecasts and local weather readings, could then indicate that a storm was coming, while local traffic information shows extensive traffic jams in the area. The system can then manage the retraction of awnings (a major possible damage risk in a storm) and the re-timing of the slow cooker because of inevitable delays in occupant arrivals.
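The storm scenario amounts to external feeds overriding the day’s planned events. A hedged sketch, with feed fields and thresholds invented for illustration:

```python
# Hypothetical sketch of the scenario above: weather and traffic feeds
# adjust the occupants' planned house events. Field names and the
# 30-minute threshold are invented for illustration.
def adjust_plan(plan: dict, weather: dict, traffic: dict) -> dict:
    """Return a copy of the day's plan, adjusted for external feeds."""
    plan = dict(plan)
    if weather.get("storm_warning"):
        plan["awnings"] = "retracted"  # major damage risk in a storm
    delay = traffic.get("delay_minutes", 0)
    if delay > 30:
        # re-time the slow cooker to match the delayed arrival home
        plan["slow_cooker_start"] += delay
    return plan


# planned: awnings out, slow cooker on at 17:00 (minutes since midnight)
plan = {"awnings": "extended", "slow_cooker_start": 17 * 60}
new = adjust_plan(plan, {"storm_warning": True}, {"delay_minutes": 45})
print(new["awnings"], new["slow_cooker_start"])  # retracted 1065
```

In the real system this kind of adjustment would itself pass through the rule hierarchy, so damage protection (retracting the awnings) takes precedence over comfort or energy considerations.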

According to DigitalSTROM CEO, Martin Vesper, this capability can be built up into complex services and events. He also noted that it can then help with informing the public about a wide range of information that they might well find useful in their own lives.

“For example,” he said, “we have developed a service that posts information to Twitter about individual devices in the home, such as the real energy consumption of white goods like fridges or cookers. This is then compared in the Twitter posting with the manufacturers’ published data on energy consumption for those devices. We are calling the service `Truth or Fiction’.”

This cloud-based approach also starts to open up new service possibilities, both for DigitalSTROM itself and other businesses. For example, Vesper is already considering the possibilities of accumulating anonymised data on locations, houses and individual products that could form the basis of centralised statistics service for the insurance industry.

It could also be used to generate valuable information for local maintenance service providers such as electricians and plumbers. They could access the house data and get a complete picture of what is installed, where it is in the house, what has broken and what is needed to effect a repair.

It might also be possible for occupants to take their preferred settings – their preferred environment – with them if they move house. They could simply load in their own data to have the house reset itself to the new parameters.

In practice, however, Vesper sees a more immediate implementation for this type of capability.

“We are already talking with hotel chains about setting up systems where regular customers can have their preferred room environment, which could include things like TV channel management, preset for their room when they come to register,” he said.

The possibilities for integrating a house and its occupants with their environment, both locally and more widely, do seem to be extensive. They also open up a wide range of business opportunities for many service providers, so long as they have access to the cloud.

This was first published in Cloud Services World in June 2014

Posted in Business.