

Thoughts to start a year by

Everyone and their dog seems to be in on the game of making predictions for what is going to happen in the world of cloud during 2013. So after a few minutes' cogitation I decided to join the fray. Here, therefore, are six predictions for the cloud sector – trends that I expect to grow in importance during 2013.

The Tech-Upgrade Cliff

I wrote about this in Business Cloud 9 recently, and will re-publish that piece here shortly, but I still feel it will be one of the major drivers of a cloud-wards move, particularly for large enterprises, over the next year. At a time of austerity it has one key pressure point – serious reductions in capital outflow.

The Cliff itself will be the cost for many businesses, and large enterprises in particular, of upgrading their current on-premise applications as they either fail to provide the new capabilities the businesses require, or meet their official `end of life' – a corporate decision made by the software vendors.

The background to this is the current economic austerity, where many businesses have opted to `sweat' existing assets to the full rather than invest in applications upgrades deemed marginal or unnecessary. Many of these applications are now, however, becoming available with upgraded capabilities that the businesses will want to use. Many businesses will, however, still be using versions that are now two or even three steps away from the new version.

They will not, therefore, qualify for the usual discounts on licences they would have been offered if upgrading sequentially and on time. Neither will they be able to upgrade directly from `version N’ to `version N+2’. Instead, they will be obliged to buy – and install – all intermediate versions, and pay full list prices.

It is expected that many vendors will also `sunset' those older versions of their applications, cutting away all but the most critical levels of support (ie major security patches etc).

Users will therefore face significant direct costs in upgrading, plus all the disruption and risk involved in porting to new versions, implementing required staff training processes and a panoply of other issues.

The alternative will be to seriously consider biting the bullet and moving many of those applications to the cloud. Some of the major software vendors are already offering more of their applications suites as cloud services, which is a good start point, but there are also now a growing number of SaaS services appearing – and continuing to appear – that can match the functionality of the legacy on-premise applications, and provide all the appropriate levels of security required.

I believe the latter half of 2013 will see this trend start to take off.

Start of a `Post Computer’ World

Cloud-delivered services carry with them an important difference – the focus of attention on the `service’ being provided, what is actually required from it and what benefits it brings to the business as a whole. In many ways, the means of delivery is less relevant.

With IT, however, there still remains a great deal of emphasis on the `T’ – technology. This means technology for its own sake.

But this is the year that is likely to change, a change that will bring about the demise of `the computer’ as a definably important `entity’ in its own right. The technology will `disappear’ in terms of its prominence, becoming like a pencil – important that it is available for use by a writer or artist, but of little intrinsic value in itself, if only because of its commodity status and commonality of a design that is fit for purpose.

Computer hardware technology is approaching this point, and software is following along quickly afterwards. There will still be people needed to cut original code – though automated tools to do increasing amounts of this work have been around since Microsoft’s Visual Basic appeared. But the levels of abstraction in what constitutes an `application’ will continue to move away from code per se. The analogy is that we think `motor car’ and think not once about piston rings unless they are not there. That model will increasingly apply to the world of the computer.

A key driver of this is that, as applications and services become more commoditised, they will need to be more widely used – and used by people who know damn-all about `computing’. And that is the way it now should be.

The cloud will play an important part – indeed already does – in the delivery of these services, and its role as the backbone of that process will become the accepted level of understanding that most users will require. As a result, 2013 will see service providers becoming the base-level brands for most users.

It’s all just `Cloud’

This could, alternatively, be referred to as `Private Cloud's Last Stand'. One issue that does, I feel, confuse potential cloud users more than anything is the arbitrary, and largely artificial, division of `cloud' into sub-groupings. 2013 will probably start with even more sub-groups appearing, all aimed at identifying ever-smaller market niches. One of the driving forces of this trend is the need for legacy applications vendors to maintain a market place where they can be recognised and claim `market leadership'.

But as the year goes on I expect to see business users become less concerned about which sub-group of cloud their services belong to, and more concerned with the benefits and value they are getting from those services, allowing those criteria to be the arbiters.

Private Clouds will, I suspect, be found to add little of benefit to large enterprise operations, and to cost at least as much as – probably more than – existing legacy on-premise operations.

This trend will even come to cover cloud-specific divisions such as SaaS, PaaS, social media and the rest. The question will change from `which tech approach is best?’ to `where do I source the most appropriate service for this problem?’

Governance and Data Sovereignty Issues Diminish

This will happen as the ability to create domains of securely managed and controlled environments in any 3rd party datacentre increases – and such tools are now available.

Data sovereignty issues will become a problem of the past. The tools already exist that allow a logical implementation of secure environment A, running on datacentre B in country C, to be re-created, managed and run as secure environment X on datacentre Y located in country Z.
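By way of a purely illustrative sketch (the `SecureEnvironment` type and its field names are my own invention for this example, not any vendor's actual tooling), the principle is that a secure environment is really just a declarative specification, and the physical location is one field among many – change it, and the security policies travel along untouched:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SecureEnvironment:
    """Declarative spec for a logically secure, relocatable environment."""
    name: str
    datacentre: str
    country: str
    encryption_policy: str
    audit_policy: str

# Secure environment A, running on datacentre B in country C
env_a = SecureEnvironment(
    name="A", datacentre="B", country="C",
    encryption_policy="aes-256", audit_policy="full",
)

# Re-created as secure environment X on datacentre Y in country Z:
# only the physical location fields change; the security and audit
# policies are carried over intact.
env_x = replace(env_a, name="X", datacentre="Y", country="Z")

assert env_x.encryption_policy == env_a.encryption_policy
assert env_x.audit_policy == env_a.audit_policy
```

The point of the sketch is that the legally meaningful properties – encryption, auditing, access control – live in the specification, not in the building.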

It is certain that the bodies most likely to have trouble with this notion will be the sovereign nation states themselves. Countries like Germany are particularly strict about this. 2013 will show, however, that such strictly applied rules, while sensible in the earliest days of internet-based transaction processing, are now increasingly based on emotion rather than logic.

With German businesses already starting to find it possible to create logical `German’ environments and insert them into datacentres anywhere in the world, real pressure for the reduction in restrictive legislation is likely to follow. The obvious business advantages in terms of cost and operational efficiency, plus greater business flexibility and agility, will add to this pressure.

The IT Department is dead – long live the Service Aggregation Department

The changing job roles and functions of the IT Department have been speculated upon for a while now, but 2013 is likely to be the year when real changes to the roles and organisation start to be made.

This will happen if only because many enterprises will start dabbling with private cloud structures, which will oblige IT departments to start working at provisioning services rather than running and maintaining applications. It may not take too long before opportunities emerge to provision services from resources other than internal applications development teams.

Then they will start moving to become aggregators of services – finding appropriate applications and resources, negotiating with those service and resource providers, and using their technical skills to ensure their integration into the services required.

Policy, Monitoring, Automation and Autonomics

2013 will be the year when policy-based automated management services become a major component of all forms of cloud delivery, for without them being extensively applied across the board, cloud services will never fulfil more than a fraction of their potential.

To automate service management will, of course, mean continued development of many different types of monitoring tools. Every aspect of cloud service activity will need to be monitored, with the results matched against user-defined operational and performance policies. This will apply across the board, from monitoring and managing operations and access from a security point of view (for example the classic `is that individual allowed to do that, with that data, at that time, from that location?’) through to billing (`that process has been run this many times using that much from the resource pool’).

The policies will also manage the automated responses to monitoring inputs, ensuring prompt control over all processes in real time. So, in the security example above, if the answer to any of those questions is `no', the process is instantly stopped and the data defended.
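As a purely illustrative sketch of that policy check (the policy structure and the function names are invented for the example, not taken from any real product), the classic security question reduces to a lookup plus an automated deny-by-default response:

```python
from datetime import time

# Hypothetical policy store: who may do what, with which data,
# during which hours, and from which locations.
POLICY = {
    ("alice", "read", "payroll"): {
        "hours": (time(9, 0), time(17, 30)),
        "locations": {"london-office"},
    },
}

def allowed(user, action, dataset, at, location):
    """The classic question: is that individual allowed to do that,
    with that data, at that time, from that location?"""
    rule = POLICY.get((user, action, dataset))
    if rule is None:
        return False  # no matching policy -> deny by default
    start, end = rule["hours"]
    return start <= at <= end and location in rule["locations"]

def handle(user, action, dataset, at, location):
    """Automated response: if the answer is `no', stop the process."""
    if not allowed(user, action, dataset, at, location):
        return "blocked"   # process instantly stopped, data defended
    return "permitted"

assert handle("alice", "read", "payroll", time(10, 0), "london-office") == "permitted"
assert handle("alice", "read", "payroll", time(23, 0), "london-office") == "blocked"
```

In a real deployment the policy store, the clock and the location evidence would all themselves need to be trustworthy, but the shape of the decision really is this simple.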

We are also likely to see growth in the notion of component self-management as the monitoring and automation technologies develop.

Lastly, there is likely to be growth in the use of autonomics, the software technologies that allow systems and services to learn how to manage themselves. In this way, a process can respond to previously unknown inputs or operations by comparing them with known inputs or operations in order to find the closest match. It can then start adjusting parameters – modifying itself until an appropriate result is achieved. It can then even let other systems know the result if that is appropriate.
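A toy sketch of that closest-match-then-adjust loop might look like the following (the inputs, the `workers' parameter and the throughput measure are all invented for illustration; real autonomics would be far richer):

```python
# Known inputs, and the parameter settings that worked for them.
KNOWN = {
    (10, 0.2): {"workers": 2},
    (100, 0.5): {"workers": 8},
    (500, 0.9): {"workers": 32},
}

def closest_match(unknown):
    """Compare an unknown input with known inputs and return the
    parameters of the closest match (simple squared-distance)."""
    def dist(known):
        return sum((a - b) ** 2 for a, b in zip(unknown, known))
    best = min(KNOWN, key=dist)
    return dict(KNOWN[best])

def adapt(unknown, measure, target, steps=20):
    """Start from the closest known configuration, then keep adjusting
    the parameters until an appropriate result is achieved."""
    params = closest_match(unknown)
    for _ in range(steps):
        if measure(params) >= target:
            break              # appropriate result achieved
        params["workers"] += 1  # modify itself and try again
    return params

# A toy 'measurement': throughput grows with the number of workers.
result = adapt((120, 0.6), measure=lambda p: p["workers"] * 10, target=120)
assert result["workers"] == 12
```

The final step the post describes – letting other systems know the result – would simply be publishing the adjusted parameters back into a shared store like `KNOWN`.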

This piece was originally posted in Business Cloud 9 on 18-12-12

Posted in Business.


The Role of Everyone in Everyone Else’s Reputation Management

What follows is an extract of a whitepaper written for a client, NetPrecept Ltd, on how Cloud Service Providers (CSPs) can guard against being the `fall guy’ when cloud services damage a company’s reputation.

But there are wider issues in play here that affect more businesses than just CSPs. Every company looking to play a part in the cloud delivery chain has to start by being aware that they are in fact a part of a delivery chain, and that they play an integral part in managing and ensuring the reputation of both their customers and business partners.

The cloud is not about technology any more, not directly at least. Instead it is about the impact – for good or ill – that the technology can bring to the collective reputation of a complex delivery chain. That reputation impact then flows out to each individual component in that chain. When it works it can be one of the most satisfying moments in a customer’s day – they have located, selected, ordered, paid for and got a confirmation email on a product they want or need, all in a matter of a few minutes.

When it does not work, the technical staff will be able to point – eventually at least – at the probable technical failing that caused the problem. They will be able to identify, for example, the rogue process that was either unwittingly designed into the system, or maliciously inserted in some way. But while that knowledge will be useful in resolving and avoiding the problem in future, the damage will be done.

And the damage done will be to the reputation of all elements of the service chain. The end users will blame the retailer or other `brand leader’ of the product or service. They in turn will blame the service providers they have contracted with. And it will be the CSPs that will often be seen to have allowed their customers’ reputations to be damaged.

In most cases the linchpin of this chain is the CSP. These companies are usually providing the services for both halves of the service chain. The product owners, the website designers, and the service aggregators will, if they are using cloud services, be hosted by CSPs. And they will be assumed to be the ones with the depth of technical expertise to manage all aspects of the service delivery process, from soup to nuts. That includes the ability to manage service delivery problems – where `manage' means `stop all negative impacts on the service'. The end users of the service chain – the consumers – will not stand for failures and are likely to be unforgiving.

They will tolerate a few seconds' delay, maybe a couple of minutes. But if a service failure lasts longer than the time needed to make a cup of coffee, the impact will certainly be negative and they are highly likely to curtail the transaction and look elsewhere.

Why is reputation so important in the era of the cloud?

Business and/or brand reputations can be damaged very quickly in the cloud because it significantly shortens the distance between the consumer and the vendor. But not just the vendor: the designers, manufacturers and the supply chain can all suffer equally. That supply chain will also include the supplier of the information – ie the CSP.

The cloud shortens the distance between consumers and other consumers. Social media creates a channel through which they can share information with each other, and some of that information will be about products or services that do not meet expectations in some way. In fact, before social media, consumers were largely enquiring and purchasing in isolation, with the only 3rd party input being comparison of experience with family, or friends `at the pub’.

Now they can put up a message on Facebook or Twitter, or search for a forum/community website that covers an appropriate subject or specific product, and potentially have a response/support/advice etc from tens to hundreds of other consumers within the hour. Such processes can kick off firestorms of criticism (or praise, depending…). Those firestorms can in turn be picked up by an observant press and become national issues. And this can all happen very quickly.

How technology affects reputation

Technology can affect reputation for good or for ill. The fundamental issue is that the consumer and vendor are put into what appears to be direct contact. In practice, of course, there are intermediaries involved in this process as well, such as the CSPs and the providers of the software tools being used to manage the process, plus the network service carriers and the rest of the players in the cloud service delivery chain. But it is the directness of connection which is the issue, coupled with the speed at which processes happen.

Positive Effects

There are many ways in which cloud-based services can add value to all the businesses in a cloud delivery chain. This ranges across all aspects of the chain, from the primary brand holder – which could be anything from washing-up powder, through a civil engineering contractor, to a rock band – through the website design company, the banks and financial services companies, the cloud service providers and the online retailing outlets.

When the service works and all elements dovetail together as designed and planned, the speed, comprehensiveness, granularity, and security of the services provided, plus the most important component, the convenience and effectiveness provided to the end customer, all serve to satisfy the needs of consumers of all types.

And satisfied consumers bring with them two distinct business benefits – one, they tend to return to the service that met their needs last time, and two, they tend to tell others – be they family, friends, colleagues, or business partners – about the service providers that have impressed them. And increasingly they also voice their opinions through social media.

In other words, not only will the brand of the product or service bought be enhanced, but so will that of every step within the service delivery chain. And for the CSPs, a happy service delivery chain is the essence of their brand. The service, for both halves of the chain, works.

Negative Effects

It is when the service delivery chain breaks down in some way that damage to reputation, and consequential damage to brands, is most likely to occur. And with cloud-based delivery methods the rate of service breakdown can be both rapid and widespread: it is quite possible for one problem in one part of the delivery chain to set off a chain-reaction, not only through the length of the service delivery chain itself but also within other service delivery chains with no direct contact except sharing a CSP service.

This is particularly likely to be the case if a problem occurs with a CSP in the chain, as they will usually be at the centre of propagating and amplifying the results of any problem that might occur. Problems within a CSP, such as unforeseen mismanagement of resources, failure to manage demand or problems trapping any form of malicious attack, are likely to affect more than one customer or one service delivery chain. In the worst cases – and there have been many already, including some of the biggest brands in the CSP business – the entire service can be impacted, as all the CSP's customers are brought to a halt.

What is worse, perhaps, is when the service continues to run, but slows because of resource management issues. It could even be the case that the CSP does not notice the problem – if service level targets are not properly set the service can be said to be `working’, after all.

This is just one version of the `knock-on' effects which can proliferate within a CSP once a problem has started a chain reaction. It also means that the CSP can find that just one fundamental problem can be the cause of damage to the reputations of other customers.

And to round it up

What this is saying, at the end of the day, is that the vast majority of traditional IT vendors have lived for many years in fertile, well-tended siloes that provided their every need. And as the `whole’ of IT got bigger, so the number of siloes has grown and the degrees of niche specialisation have narrowed. Now, with the cloud, that model is threatened, and most likely is about to be mortally wounded.

The cloud turns everything volte-face. Once, providing increasingly niche technology was important; now it is irrelevant unless it serves a wider purpose – and that purpose, as Ramses Gallego (security evangelist at Quest Software and VP at ISACA) said in a briefing (http://www.businesscloud9.com/content/data-must-ask-who-are-you-and-what-are-you-doing-me/11434) with me recently, is to take the concept of `security' and turn it into risk management for the entire enterprise. And that means more than just technology, as well as a very different approach to exploiting technology itself.

(And don’t worry if you haven’t got time to look now, I will be re-running that piece here soon).

This is no longer about technology per se, and even less about security technology. It is about the IT service providers of all types accepting and understanding that they are now a core contributor to a much wider issue than providing technology. They are the base on which the reputations and brand values of many businesses stand.

Yes, some of the bigger mega-corporates in IT will say they understand this, but they come from a time when they had several months in which to plan out a recovery process for customers, or at least work out a damned good excuse. With the cloud it can all go belly up faster than it takes to read this sentence.

Posted in Business.


Getting back to it

It’s been a while since something last appeared here, and there is a reason. But I realise it is time I got back to posting blogs here, so the flow will start again, very soon.


And the reason nothing appeared for a while? Well, for some time I have been contributing regularly to Business Cloud 9, and towards the end of last year the decision was made to launch a companion, Tech Cloud 9, that fitted squarely with my interests in the cloud and how it develops. So I have been devoting much of my writing time to that, as well as to Business Cloud 9.


But that meant writing for this blog needed to take a back seat, which soon enough became a cupboard with the door shut.


However, there are things that I want to say and points which I feel are relevant, which don’t necessarily fit neatly into either BC9 or TC9. So I now have an arrangement where I can re-publish some of that content, and extend and fill it out as I consider fit.


So after a slightly less brief hiatus than I had planned for, Banks Statement will return to active duty, as of tomorrow.

Posted in Cloud Development, Cloud Technology, Uncategorized.



Collective capitalism is at the heart of the cloud, and will be what users expect

To many people concerned with `mission critical' systems, cloud computing is of only passing interest, not least because it is still not considered suitable, or of industrial strength, for the job. But if you interpret `mission critical system' as the existence of the whole business, rather than any specific applications that might help the business to survive, then the cloud has a great deal to teach.

 Perhaps its greatest lesson is how to exploit Collective Capitalism. As cloud services are increasingly complex and powerful amalgams of different applications, services, components and tools that, only together, can create a workable, flexible and agile service solution to a business problem, so all businesses – including both `traditional’ IT vendors and their customers –  are butting up against the requirement to become the same.

 As a long-time observer of the IT industry I have become aware of a core piece of psychological dogma amongst many of its major players – the fundamental belief that their application (and in some obvious cases their hardware platform as well) is the critical component which keeps the mission of the business afloat. In most cases, of course, it no doubt played a significant part in keeping the admin of the business running – and in some cases only when the business bent its admin processes to fit what the application could provide – but those days are now fast disappearing.

 So what constitutes `mission critical’ has to become the subject of some debate. I do feel that those vendors that continue to insist that their product is at the core of mission criticality – even in the cloud – are reaching the point of doing their existing users and future customers no favours at all. For a start they are putting the cart (their `solution’) before the horse (what the user actually needs to achieve to maintain the business in good health in an often rapidly changing marketplace).

 Cloud service providers are already well aware that they exist in a `Scrabble’ world, where having the right letter to make a word marks the difference between success and failure. It is a world where two men and a dog skilled at `X’ can play a small but vital role in creating `an extremely wonderful solution’. They can be as mission critical as the biggest applications in the world.

And the important point is that the collective approach will not just be what makes the cloud work; it will be the way that user businesses start looking at their business solutions. They may not see it consciously that way, but they will be looking for services that start and end where they understand the process starts and ends for their business, producing the results they want, in the form they want, in the timescale they need and at a cost that makes sense.

They won’t be looking for a `database’ or an `ERP system’, and it won’t come from just one supplier – though in the cloud it might just come from one service aggregator with the right brand name.

Posted in Business.


It’s just a new IT Delivery Model, stooopid.

I was at a Verizon event not so long ago – the company was marking the official opening of a brand new cloud services datacentre for its recent acquisition, Terremark. During a short panel discussion/Q&A session a Verizon exec, Christopher Kimm, made a small observation that is, I feel, of some importance, particularly as he is the company’s VP of Network Field Operations for EMEA and Asia-Pac territories.

“Companies selling technology to business will be in trouble. Selling what the technology does in business value terms is now what is important. And a growing number of users now don’t buy applications as individual pieces. That is now the most expensive option because of the integration costs. The smart ones let other people do that.”

This seems important to me, and not just because it was said by an avowed technologist. To me it sums up exactly the disconnect that is starting to grow between the technology vendors – and especially the large and established ones dedicated to the many mantras of technology – and what users are increasingly now seeking: service.

As the Group President of Terremark Worldwide, Kerry Bailey, observed, the word `cloud' has become a meaningless marketing term that in practice no longer says anything of any value. He now refers to the `new IT delivery model' (no doubt with the option that the word `new' gets dropped in fairly short order).

What is delivered, of course, is a service, and what constitutes that service is entirely at the discretion of the users, and is geared entirely to the needs of their business and how they perceive those needs should be met. Looked at from that end of the telescope, the technology is decidedly secondary.

The new IT delivery model therefore decouples delivered services from the technology stacks on which they run. Yes, the technology is still necessary, but these days a service can be constructed from several different applications and tools, and it may well run on different technology stacks, so technology `mix and match' is now the order of the day. Technology in its own right is no longer the God of Gods.

The trouble is, however, that many of the established technology players, particularly in the software sector, seem unable to cope with being the new bit-part players the cloud now makes them, rather than centre stage idols. It has become a little pastime of mine to track their collective attitude to the cloud, an attitude always driven by their collective fear of losing their hold on customers.

They started by saying that while the cloud was OK for `novelties’ like Google searching, it was of no value to real business (and the customers should stick with us). Next it was acceptance that the cloud was not going away and had a role to play in business, but adding the huge caveat that it was all fearsomely complicated for businesses to understand, so it would be best if they let the technology companies handle it for them (and keep giving them the money).

Things have got a bit more complicated now, because the new IT delivery model is starting to gain real traction, even amongst the not-very-early-at-all adopters. It is even starting to sub-divide as marketing suits try different ways of selling the concepts behind it.

There are, for example, private and public clouds, which for most users are actually just different management strategies applied to the same infrastructure. Then along comes the hybrid cloud.

Now, I notice, a growing number of the established IT vendors are starting to promote `hybrid cloud’ as an identifiable, differentiated product category. So, is it?

The short answer is no: it is just an alternative management approach for using the same infrastructure. By and large, this is the same old marketing model of selling hot air based on the importance of the technology. The hybrid cloud is just an operational concept – building a service, or range of services, where some elements are considered better delivered via a private `cloud' service, while other elements are run on – or sourced from – one or more public services that are available to all.

And just to confuse things, the private services can be running on anything from a public cloud service provider such as Google, Amazon or Microsoft, through to an on-premise legacy application that has been adapted to integrate with a cloud environment.

So, while some of the traditional vendors are now seeking to persuade users that it is sensible to ask for `4 tons of hybrid cloud, please', it does not exist as a definable, measurable, packaged entity. One day, one hopes, the traditional vendors will get their collective heads round what the new IT delivery model is actually all about. Whether they can cope with the fact that it is not about technology is, of course, as yet unanswered.

Posted in Business.


The `virtual bonded country’

During a recent conversation with Adrienne Hall, General Manager of Microsoft’s Trustworthy Computing operation, she told an interesting anecdote about the Japanese earthquake and tsunami that was intended – successfully, I might add – to demonstrate the effectiveness of the company’s Trustworthy Computing capabilities, and in addition show the power of the cloud as a deliverer of salvation.

 It is, however, a source of salvation that seems to run smack into one area of legislation and regulation that could yet be one of the cloud’s major stumbling blocks – governance issues over where data is stored and processed.

This in turn prompted an idea that could combine one of the older tools of conducting international business and the full capabilities of cloud technologies.

First, however, a little recap on the story. Microsoft’s Japanese datacentre is located a goodly distance from the epicentre of the earthquake, and beyond the range of the devastating tsunami that followed. It survived the initial earthquake, but soon started to suffer with the aftershocks.

The decision was taken to temporarily move the contents of the datacentre to a west coast USA location, a task which Hall's team achieved both quickly and cleanly. So, as an example of disaster recovery/prevention and business continuity capabilities, it is certainly one of the best.

Yet by moving not only its own services but customer services as well to the US west coast, it obviously had the potential to put some businesses at legal risk. Any business with a legal or compliance requirement to have data stored – and possibly processed as well – in a specified geographic location could find themselves in a double bind. They either demand that their service is not moved to a safer location – and risk having the business buried, literally – or they allow the movement, and risk finding themselves in court.

Yet there ought to be a way of achieving the ability to move the physical location of data, especially in the face of a natural disaster such as occurred in Japan, while maintaining the integrity of the storage and processes associated with that data as though it had not moved.

Why not, then, something equivalent to the Bonded Warehouses commonly used by any business importing products that are subject to taxation? With these, those products can be here physically while not being here at all, legally.

With the cloud it should not be beyond the wit of man to create a solution where a partitioned, isolated and highly secure corner of country `X' can be inserted into a datacentre located in country `Y'. In that way, a business headquartered in country `X' could establish a new branch office in country `Y' and have the local data stored, managed and processed in that local country, despite facing the rigours of compliance and governance legislation which says otherwise.

Given the fact that the vast majority of datacentres run commoditised hardware and system software, regardless of where they are in the world, all that would be needed is for the specific security and applications environments to be installed in order to have a virtual `anywhere' located `anywhere else'. Add in, as part of the package, sufficient process policy management, monitoring tools and operational auditing, and it should be possible to create an environment with enough belt-and-braces security and management controls to satisfy most lawyers.

The icing on the cake could be that the regulatory authorities of country `X’ could then validate the virtual environment and, once approved, it could be installed anywhere – or at least in a subset of specified and approved geographic locations or even service providers.
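At its core, the `virtual bonded country’ idea boils down to a residency policy gate: a class of data may only live in environments the home regulator has validated, wherever those environments physically sit. A minimal sketch of such a gate, with every name, data class and location purely hypothetical:

```python
# Hypothetical sketch of a data-residency policy gate for a
# `virtual bonded country' environment. All names are illustrative.

# Environments validated by the home regulator for each data class -
# note a bonded enclave abroad can appear alongside a domestic site.
APPROVED_ENVIRONMENTS = {
    "UK-regulated": {"uk-datacentre-1", "bonded-enclave-tokyo"},
    "DE-regulated": {"de-datacentre-1"},
}

def may_store(data_class: str, target_env: str) -> bool:
    """Allow storage only in environments approved for this data class."""
    return target_env in APPROVED_ENVIRONMENTS.get(data_class, set())

# A disaster-recovery move is only permitted into another approved
# environment; an unapproved location is simply refused.
assert may_store("UK-regulated", "bonded-enclave-tokyo")
assert not may_store("UK-regulated", "us-west-datacentre")
```

The interesting part is less the check itself than who maintains the approved list: in the scenario above it would be the regulator of country `X’, not the service provider.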

Let’s face it, just because the data is stored on-premise in the building specified by law, it doesn’t necessarily mean it is secure, or safe from the temptations of `da management’ to take the opportunity to `refine’ some of the data. So the virtual bonded country approach might well be a more visibly secure alternative.

 It would also be a really good service for many of the service providers to offer. Indeed, it would seem to be ready-made for the biggest, `globally’ based providers – or at least those with serious global pretensions.

Posted in Business, Services.



`Rumsfeld’ services and Collective Capitalism

I wrote a piece last week in Business Cloud 9 (http://www.businesscloud9.com/content/ibm-and-collective-capitalism/5770) about IBM, cloud-delivered services, and collective capitalism. It raises, I feel, an important point about the direction in which service vendors need to configure themselves if they are to meet users’ needs.

And half the problem here is that very phrase, `users’ needs’. I remain convinced that a large number of the traditional vendors of IT and services feel that their established business model – they come up with new gizmos, and new packaging for old gizmos, and the users try to work out how to get the best they can out of them – can continue to work in the cloud. But users now need something very different.

Indeed, it is fair to say that the majority of users don’t yet realise quite what it means for them to be operating in a service-delivery model. For example, even the now old adage about utility computing – that it will mean having data and processing on tap just like mains electricity – in fact misses the point. What do the end users really want? They don’t actually want electricity on tap. They want, for example, entertainment, clean clothes, or hot food.

In each case, reliable electricity supply is the essential underpinning (and OK, with food especially it can equally be gas, but walk with me through the analogy). But the service they require is not even the TV, or the washing machine, or the oven; it is the ability to get from point A – bored stiff, in dirty clothes and eating a bit of cold lettuce – to point B, where they are eating a properly cooked steak, while wearing a clean shirt and watching their favourite film.

This is also an excellent example of why services – in their broadest sense – are also driven by the need for collective capitalism. And this is true even when the members of the collective don’t even know that they are members. This little scenario of a user `service’ (let’s call it `A Good Evening In’) involves not just the provision of electricity but also the manufacture of a TV and broadcast medium, a farmer growing beef cattle and a good butcher, the manufacture of a washing machine (plus the essential washing powder… oh, and a water supply system as well), several clothing manufacturers, and the producers of excellent cinematic entertainment.

That, you might say, is all blindingly obvious. But now map it onto the way that the majority of the IT vendors go about their business. “We make the best electricity you can buy” is still the extent of vision for the majority of them. What you do with it is entirely down to you.

So the IBM example of collective capitalism is a rather neat example of a vendor discovering just what is being sold, and a bunch of users discovering what constitutes the actual service they require.

The scenario is fairly straightforward. There is a large construction project, larger and more complex than any one civil engineering company can take on. A consortium of companies comes together to combine their expertise and resources.

So far, so straightforward.

But consortia like this always sink into a morass of political in-fighting over the essential management infrastructures: which company gets to host the IT resources, how much the others pay for this privilege, how they engineer the necessary integration between different IT infrastructures and, probably most difficult, which company owns the data and IP afterwards.

Here the cloud has an obvious advantage. It can be a clear patch of `neutral’ territory where all of them can pitch camp equally; where the resources can be specified in response to project requirements rather than limited to whatever the lead partner can devote, and costs can be shared. Data can be stored neutrally and, if necessary, completely destroyed at the end of a project. Also, any arguments about IP ownership could be resolved by having an audit trail of which company uploaded what to the common pool.
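That audit trail could be as simple as a hash-chained, append-only log: each upload records the contributing company and a digest of the content, chained to the previous entry so old records cannot quietly be rewritten. A minimal sketch, with invented company names and artefact IDs:

```python
import hashlib
import json

def add_entry(log, company, artefact_id, content: bytes):
    """Append a tamper-evident record of who uploaded what to the pool."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "company": company,
        "artefact": artefact_id,
        "content_hash": hashlib.sha256(content).hexdigest(),
        "prev": prev_hash,
    }
    # The entry's own hash seals it and, via `prev`, everything before it.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

log = []
add_entry(log, "ACME Civil", "bridge-design-v1", b"...drawings...")
add_entry(log, "Bloggs Engineering", "soil-survey-v2", b"...report...")
# Altering an old record breaks the chain, so any tampering is
# visible to every partner in the consortium.
```

Because each entry points at the hash of its predecessor, no single consortium member – not even the one hosting the log – can rewrite history without the others noticing.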

And that last element gives a hint of where this then heads. The `service’ required obviously starts to involve more than just the provision of flexible, scalable IT resources. What, then, if the service provider – which is itself an amalgam of individual tools, utilities and applications from a broad spread of vendors (as well as its own resources and contributions) – has the expertise to predict and provision the `Rumsfeld’ services: the ones the users don’t yet know they should know they need?

IBM, like several others, does have an important component in the cloud mix here: extensive experience in running large, complex projects. From this can be drawn expertise that can be packaged up and offered as services to those businesses that don’t know they need them.

And to extend the Rumsfeld model to the full, it can also be of great benefit where smaller, less experienced businesses don’t know that they don’t know this stuff and are consequently scared of getting into partnerships with the big players for fear of being stripped of their IP etc.

Yet users are going to need the right mix of services, from a wide range of different providers, available to them as a unified collective, which then has the entirely honourable aim of turning an honest penny or two from the transaction. From that the users can then build their own collectives to deliver to their customers the projects or services they have contracted for.

In a world of increasing collective capitalism, therefore, cloud infrastructures can provide the neutral territory where the partners in the collective can operate clearly and openly, with scalable resources and good cost management on tap. But they will also need access to the resources of expertise and tools that help them through the `Rumsfeld Quagmire’ – that is, avoid getting permanently sucked down into solving operational and management questions they didn’t know they didn’t know, and probably won’t learn to understand without a good deal of pain.

Posted in Business.


Cloud – a time for practicality

Cloud is not new technology, just re-packaged old technology – and that is why it is a commodity: it works and does a good job. That is why the `package’ is the important bit – it is about what constitutes a service, the way it is delivered to end users, and how they perceive and consume it.

So the most important development for this year has little or nothing to do with what any IT industry watcher would consider remotely sexy. It is about making the cloud practical.

But then again, the cloud has very little of great technological excitement – even Apple’s iCloud is in essence a reworking of what has gone before, usually labelled `Google’, but with the increasingly iconic `i’ branding. All the important elements of the cloud depend upon one thing, the use of technology standards, which is the biggest stumbling block to the IT industry’s most favoured mantra – innovation and change (often, it has to be said, primarily for the benefit of the on-going revenue stream).

But it is the standardisation and commoditisation of the technology which has allowed the cloud to emerge and offer users the greatest service – the emergence of an ever greater diversity and granularity of service offerings to suit their business requirements and budgets.

Most user businesses understand where they currently stand in terms of their knowledge and experience in using IT. Most now understand the messages associated with the cloud and how wonderful it can all be. Nearly all of them, however, still have no great idea, and often considerable trepidation, about the process of transition. There are no convenient chrysalis stages into which they can pop, emerging over a weekend break as a fully formed cloud-based business.

So a growing number of vendors are at last cottoning on to the fact that they need to take business users through a series of steps to help them transform themselves. They need to provide tools that help build some of the underlying infrastructure without too much thought. And they need service demonstrations – or areas of commonality with which users are already familiar – to illuminate the road to understanding, right up to the point where users suddenly utter the fateful words: `ahh, I get it’.

That has to be one of the underlying reasons why IBM is launching a cloud-based Disaster Recovery service at the Cloud Computing World Forum this week.

DR? Isn’t that old stuff? Yes, and it’s not even new as a cloud service. But that does not take away from the fact that it is an excellent demonstration of cloud capabilities. DR is traditionally difficult, time-consuming and expensive, which is why many businesses still don’t do it. Putting it out into the cloud as a service makes it much easier to work with and, dammit, attaching it to the `IBM’ brand probably still sounds more reassuring than `Arnold Scroggins Cloud Services’ to the average enterprise.

What is far more important to IBM, of course, is the fact that it has a far richer, deeper and more comprehensive package of services it can offer once a user business has sniffed success with one, albeit silo’d, project.

It is why Microsoft is increasingly attaching its cloud-based CRM and ERP services to its tried, tested (and yes, often cursed) Windows user interface. Just about everyone knows how that works, so if it can be used to build and run an ERP system for a reasonably-sized business, the scary old problem of implementing an ERP system (how many noughts after the `1’ would sir like to spend?) just may become another legend of the `old days’.

It even raises the outlandish suggestion that I might understand how to set one up without the aid of complex brain surgery.

It is also why tools that can short circuit setting up cloud services, pushing most of it into the background, are becoming more important. Most enterprises want to end up with a hybrid environment – a mix of public and private cloud services, plus (in most cases) some old, critical applications still run on-premise.

Latency issues are a common cause of the continued need for on-premise services, and they are going to be a major consideration for many users well into the future – unless someone, somewhere does something clever with the laws of physics. There are tools emerging that help reduce latency, such as the new one-box optimisation solution from Blue Coat Systems, but sadly, they can’t make it go away.

But on a wider front it is tools such as those from OnApp – currently targeted at helping traditional outsource/hosting service providers move into the cloud – which can play an important part here. They are also ideally suited to the needs of enterprises – and more specifically the big vendors and service providers that enterprises look to for help and guidance across the divide between the on-premise of today and the cloud of tomorrow. Those enterprises nearly all want to start their cloud transition in a small and controlled manner, building private clouds at first and then, as experience grows, starting to dabble with integrating public services as part of a growing service mix. But they always face the vexed question: `how on earth do I set that up?’ A packaged toolset could be just what is required.

The switch to a service culture is also changing some fundamentals that are important to building practical clouds. Take security as an example. There is a switch from just defence mechanisms to a more subtle use of complex policy management and monitoring services to identify unauthorised activities. But even that may not be enough.

I did notice that the recent CIA Denial of Service attack prompted at least one company in the monitoring business to suggest it could have spotted the transgression in real time. My knee-jerk response is `good, but not great’.

Stopping such events as they start to happen – traffic management on steroids, if you like – is now an important part of every cloud service. And that has to include not just stopping a particular, malicious event but also allowing all the other positive business activities to continue. Something like NetPrecept’s cPEP technology lays claim to just such a capability. It can, for example, filter traffic so that a business can set differing priority access levels to different types of customer or partner – Gold, Silver and Bronze, for example. And as part of that capability it can also spot, and filter out, a DoS attack while leaving the good stuff to keep flowing. The service level to customers might slow a bit, but it won’t lie down and die.
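The kind of tiered filtering described can be thought of as per-source rate limiting with per-tier budgets: a flooding source blows its budget and gets shed, while everyone else’s traffic keeps flowing. A rough sketch of the principle – not NetPrecept’s actual implementation; the tiers, limits and addresses are all illustrative:

```python
import time
from collections import defaultdict

# Illustrative per-tier request budgets within a time window.
TIER_LIMITS = {"gold": 100, "silver": 50, "bronze": 10}

class TierFilter:
    """Shed traffic from any source that exceeds its tier's budget."""

    def __init__(self, window: float = 1.0):
        self.window = window
        self.counts = defaultdict(int)
        self.window_start = time.monotonic()

    def allow(self, source: str, tier: str) -> bool:
        now = time.monotonic()
        if now - self.window_start >= self.window:
            # New window: everyone's budget is refreshed.
            self.counts.clear()
            self.window_start = now
        self.counts[source] += 1
        return self.counts[source] <= TIER_LIMITS.get(tier, 0)

f = TierFilter()
# A bronze source's 11th request in the window is shed...
results = [f.allow("203.0.113.9", "bronze") for _ in range(11)]
assert results[:10] == [True] * 10 and results[10] is False
# ...while a gold customer's traffic continues unaffected.
assert f.allow("198.51.100.1", "gold")
```

The key property is that shedding is per-source and per-tier: a DoS flood from one address exhausts only that address’s budget, so legitimate Gold-tier traffic never sees the blockage.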

Not the greatest level of excitement to be had amongst this lot, it is true. But it does represent the real world for business users a great deal more than the latest gizmos from the fevered brains of technologists. And if the cloud is really going to take off, this has to be the year of boring and sensible.

Posted in Business.


Be assured, the cloud is about business

A short blog to start with, to introduce myself and get the ball rolling.

Fundamentally, having been writing about the technologies of IT for over 40 years, I just get the feeling that everything has been a precursor to, and a practice ground for, the cloud: and that the exploitation of cloud-based infrastructures is the basis on which all future businesses will operate.

One of the key drivers behind the cloud is the way in which it switches the balance between technology and business. We still live, of course, in the days of on-premise systems, where the technology and what it can achieve for business users has driven and continues to drive the relationship between the two. But this is fast changing, and technology is no longer king over business and how it operates, but its fully fledged servant.

Historically, businesses have had to fit themselves around what the technology vendors have been able to provide. And in order to meet the growing needs of business users, the technology vendors have had no other recourse than to make their applications ever bigger, ever more complex, if they were to cover the bases business wanted covering. Implementing some of these applications has become a full-time job, not only for skilled individuals, but also for complete businesses. Some of these applications have generated whole branches of industry in their own right, dedicated to the task of implementing working solutions for business users.

Then, of course, came the issue of integration and collaboration. These are two excellent objectives that fit the needs of business – and consumers even more so. Getting different applications to communicate effectively is only the technological equivalent (and business necessity) of getting different departments in a business to work together. But when they all spoke different `languages’ the problems that followed were huge – file format converters, references to look-up tables and the rest kept getting in the way, and each vendor would insist the problem lay with one of the others. `They can easily integrate with us’, was always their standard battle cry.

All of this, and more, conspired to create an environment where the tech vendors – often on the grounds of maintaining their self-referential perception of their `differentiation’ in the marketplace – determined what businesses could do and, more importantly, the speed with which they could change what they could do.

The coming of the concept of the cloud has changed all that. Standardisation of the protocols of intercommunication between applications and services has made integration an infinitely simpler task (if XML were human, someone would probably make it a saint for, yea verily, it hath wrought many miracles, most of which we all now blithely take for granted).

In turn, business users can now create areas of collaboration undreamed of before; not only between different departments of the same business, but also between different businesses.

Commoditisation of the technology utilised to provide the resources of the cloud has opened up those resources for all (or an ever-increasing amount of all) to exploit. There is still a way to go on this front, of course. Mobile vendors are still committed to playing the death-game of market differentiation, which most will lose. They all want to lock users into their technology, forgetting the fundamental law of that particular game:

Most attempts at user lock-in will fail, and kill the company as a consequence. Those that do succeed will also lock in the vendor and open the technology to all others.

That is why the issues of standards and interoperability are high on my list of areas of study, for they are the bedrock on which everything else is built. And the fun part is going to be observing and identifying the balance between inevitable change (without which we’d still all be running IBM 360 mainframes) and the need for stability and openness.

From that can come – indeed, I believe will come – a complete change in the relationship between technology, for so long the effective `master’, and its long-time supplicant, business.

Collective capitalism, the dynamic coming together of co-operating businesses to meet a customer requirement quickly and effectively, will be the order of the day. Indeed, it will become an everyday occurrence, often automated, so that even the `brand’ business leading the process (the brand name that the customer identifies with and trusts most – or perhaps distrusts least) does not always know all the vendors making a contribution to a project.

And the result that all businesses aim for is better business assurance – not just the assurance that the business can ride out the accidents and vicissitudes of business life, but also the ultimate assurance of being able to transform the business to meet the needs of the marketplace. And that does not mean transforming once from on-premise to `the cloud’ (in fact, that is probably one of the most dangerous mindsets to adopt in terms of business assurance), but continually transforming the business, using the flexibility, scalability and economic advantages of the cloud to create the business agility needed to hit the market’s needs, when they need them.

Posted in Business.




