It's the idea that Hilton International or Marriott would be worrying about Airbnb. They weren't thinking like that. Or transport companies around the world asking what the impact of Uber is. We've all heard that software is eating the world, and what that basically says is that the threats are real. We used to be in an environment where, if you were a bank, you just looked at your four peer banks and thought that, as long as they didn't have too much of an advantage, you were okay. Now banks are saying that they're competing with Google and Facebook. Actually, the tolerance for stability is a little bit lower than it was. I had a very interesting conversation with a retailer recently, and we talked about the different goals that organizations have.
The need to manage and protect both business and personal data (as clearly differentiated from the software) has never been more important. A disaster recovery/business continuity plan that does not account for our dependence on data puts the enterprise, its employees and customers at risk. ...
A good disaster recovery/business continuity (DR/BC) plan is not an IT plan; it is a business plan that has significant IT components. As discussed above, more and more focus needs to be placed on data recovery, beyond ensuring that programs and processes are returned to operational status. The plan should be scenario-based and aligned to the likelihood of varying levels and types of risks, as specified by documented business impact analyses and business risk assessments.
There are two fatal flaws in this model, both having to do with managing expectations. First, clients need to understand that they are unlikely to get every deliverable without some compromise – particularly in custom software, where nobody knows exactly what’s involved until the project is more than half done. Second, the project lead on the consultant side must actively manage expectations during every client meeting. If the project lead on the client side is weak – technically or politically – s/he will not successfully propagate the realities of prioritization and negotiation to executives in the client organization. This means the project is in trouble before it starts … and, worse, the trouble can be totally invisible to the client until it’s way too late.
Programming languages and technologies that were developed by industry and Internet giants – specifically to meet the unique challenges they faced operating at massive scale – have been open sourced and are now being adopted by regular-sized enterprises for everyday use. Part of the reason for this is a natural technology trickle-down effect, according to Mark Driver, a research director at Gartner. "Today's leading edge super high tech is tomorrow’s standard product," he says. "Also, large companies (like Google and Facebook) understand the collaborative nature of open computing and the dynamics that drive the Internet. So it's natural that they share these technologies and strengthen the industry around them."
Looking at the list of finalists for the Crunchies, you could get the impression that the greatest advances of 2015 were sharing and delivery apps, software platforms, and pencils. Yes, these are cool. But much bigger things happened last year. A broad range of technologies reached a tipping point, from science projects or objects of convenience for the rich, to inventions that will transform humanity. We haven’t seen anything of this magnitude since the invention of the printing press in the 1400s. And this is just the beginning. Starting in 2016, a wider range of technologies will begin to reach their tipping points. Here are the six amazing transformations we just saw.
This shortage will boil over in the coming years as a generation of IT workers, who built the systems and databases that still power critical functions, begins to retire. This is especially worrying in finance, where large institutions, which have repeatedly merged and sold off parts of their businesses, have back-end systems that were hastily thrown together. As those who created them leave the workforce, disasters will become harder, and take longer, to recover from. Companies have responded to the difficulty of hiring IT workers by outsourcing more work. But having done this, says Tate, many have made poor decisions, found contractors to be inadequate, and moved operations back inside. The alternative is simply to pay more for the best talent, but a swell in demand across the board is making this increasingly expensive.
We simplify what has become complicated, we create dashboards of the automation and single-pane-of-glass displays of the coordinators, and we start the cycle over again. It sure seems a little reversed to me. Am I issuing a wake-up call to our industry? Absolutely! I have begun holding brainstorming sessions with colleagues that challenge the status quo. Our technology now uses Fully Automated Storage Tiering, multiple alerting consolidation engines, automatic load balancing, pooled resource rebalancing, and the list goes on and on. It is fantastic and exciting beyond belief to talk about, explore, and work with these technologies. However, I am involved in services. We are the pilots of the automation, and we must aviate, navigate and communicate our way through the technology hierarchy.
As wearable devices make their way into the workplace and corporate networks, they bring a host of security and privacy challenges for IT departments and increase the amount of data that data brokers have to sell about an individual. Jeff Jenkins, chief operating officer and co-founder of APX Labs, talked about the security and privacy of wearables during a panel interview with Tech Pro Research at CES 2015. Because wearable devices are designed to be small and portable, Jenkins said, "you have to make sure you're thinking security first and you're thinking about the information that's being generated by them. You have situations where it's no longer just personal data that may be exposed or compromised, but also potentially operational data, that could be sensitive in nature."
Simply defined, bitemporal data means storing current and historical data, along with corrected and adjusted data, all together in the same place. Bitemporal means you are using two time dimensions simultaneously – one to represent business versions and one for corrections. For example, let's say you have a database table of customers; in a bitemporal world, you would store changes (versions) of the customer's data over time, as well as any corrections, as new rows in the same table. Customer data changes include attributes like the customer's name, address or buying preferences. Corrections (some people prefer to call them adjustments) represent restatements of data that people or systems make to record the right value. Human typing errors or software errors create data that may get corrected.
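The two-dimensions idea can be sketched in a few lines of Python. This is a minimal illustration, not a production schema: the names (`CustomerRow`, `as_of`) and the sample data are invented here, and real systems would typically implement this with period columns in a SQL table. Each row carries a valid-time interval (when the fact was true in the business world) and a recorded-time interval (when the database believed it):

```python
from dataclasses import dataclass
from datetime import date

INF = date.max  # marker for an open-ended interval

@dataclass
class CustomerRow:
    name: str
    address: str
    valid_from: date      # business (valid) time: when the fact was true
    valid_to: date
    recorded_from: date   # system (transaction) time: when the database knew it
    recorded_to: date

rows = [
    # Original entry, recorded Jan 2014: customer's address captured as "1 Elm St".
    # Its recorded interval was closed when the correction below superseded it.
    CustomerRow("Ada", "1 Elm St", date(2014, 1, 1), INF,
                date(2014, 1, 1), date(2015, 3, 1)),
    # Correction recorded Mar 2015: the address was actually "1 Elm Ave" all along,
    # so the new row keeps the same valid-time interval.
    CustomerRow("Ada", "1 Elm Ave", date(2014, 1, 1), INF,
                date(2015, 3, 1), INF),
]

def as_of(rows, valid_at, recorded_at):
    """Rows that were true at `valid_at`, as the database knew them at `recorded_at`."""
    return [r for r in rows
            if r.valid_from <= valid_at < r.valid_to
            and r.recorded_from <= recorded_at < r.recorded_to]

# What did we believe in mid-2014 about mid-2014?
print(as_of(rows, date(2014, 6, 1), date(2014, 6, 1))[0].address)  # 1 Elm St
# What do we know today about mid-2014, after the correction?
print(as_of(rows, date(2014, 6, 1), date(2016, 1, 1))[0].address)  # 1 Elm Ave
```

The key point the sketch shows: a correction never overwrites the old row. Both queries ask about the same business moment, but they return different answers depending on the recorded-time coordinate, which is exactly the audit trail bitemporal storage is meant to preserve.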
Most businesses are ill-prepared for DDoS attacks, which is why it costs them so much to recover, Meyerrose says. The cost of recovering from a DDoS attack can be more than $50,000 for small businesses, he notes, quoting data from security firm Kaspersky Labs. That cost includes business lost to downtime and technology expenses and investments associated with site recovery. So what can be done to defend against the growing DDoS threat? "My main strategy for defense would be making sure I could quickly detect and block all types of DDoS attacks, e.g. application or network layer, and be able to quickly redirect my users to a backup duplicate, albeit streamlined, site to keep my business running without interruption," Litan says.
Quote for the day:
"Once we rid ourselves of traditional thinking we can get on with creating the future." -- James Bertrand