Today’s enterprise is a federation of companies with vast collections of dynamic services that are enabled and disabled frequently, under ever-changing authentication and access-control requirements. To survive in this environment, a modern enterprise needs to develop an intimate yet secure ecosystem of partners, suppliers and customers. So unlike the rudimentary connectivity case, the typical production application is composed of many dozens, perhaps hundreds, of services, some internal to the enterprise and some residing in external cloud infrastructures or data centers. For example, Amazon’s enormously successful ecommerce website performs 100-150 internal service calls just to gather the data to build a single personalized page.
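To make the fan-out concrete, here is a minimal sketch of how a page might be assembled from many internal service calls. The service names and payloads are invented for illustration; in production each stub would be a network call to a separate service, and there would be far more of them.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical internal services; each stands in for a network call
# to a separate internal service.
def recommendations(user_id): return ["book-123", "lamp-9"]
def order_history(user_id):   return ["order-42"]
def pricing(user_id):         return {"book-123": 12.99, "lamp-9": 24.50}

def build_page(user_id):
    # Fan the calls out concurrently so page latency tracks the slowest
    # service rather than the sum of all of them.
    with ThreadPoolExecutor() as pool:
        futures = {
            "recs":    pool.submit(recommendations, user_id),
            "history": pool.submit(order_history, user_id),
            "prices":  pool.submit(pricing, user_id),
        }
        return {name: f.result() for name, f in futures.items()}

page = build_page("user-1")
```

With 100+ real calls, the same pattern also needs timeouts and fallbacks per service, since any one dependency can fail without taking the whole page down.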
Scrum teams deliver a working, tested result every Sprint. They don't deliver the end result only after a year; they deliver a small increment every month or less. But remember, they only deliver results that are truly finished! This increases your ability to steer so much that formal process control loses much of its importance: for the most part, the Scrum teams can manage themselves. At the same time, it's essential that teams continuously improve. This is where the manager plays a critically important role: helping the team improve by removing obstacles for them. You help create an environment the Scrum team can work in, which means holding yourself back from intervening too much.
A trap many well-meaning but less experienced navigators fall into is offering advice the moment they spot a problem. Good navigators know to wait a little before pointing out a missing semicolon somewhere, and will do it when there’s a natural pause in the driving. A large number of interruptions arising from the driver’s unfamiliarity is a good indication that it’s time to swap roles, even if only briefly. For the more interesting and more abstract issues, though, an experienced navigator is good at communicating intent – the what, not the how – and uses inclusive language (“us” and “we”, rather than “I” or “you”) as much as possible, so the driver is invited to revisit the motivations behind intents they might not necessarily agree with.
Again, the majority of organisations cannot say for sure that adopting cloud services will result in savings, because they don't know how much those services cost to run in-house. At best, the organisation might know the total cost of IT infrastructure, software and skills, and be able to split that roughly across the services provided. The decision to move to the cloud will therefore be based on this estimate; yet this ignores both that the real costs are far more complex and that moving services to the cloud does not remove costs, but changes them. For instance, an organisation whose CRM service is presently hosted in a data centre may decide to move CRM to the cloud.
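The "changing costs, not removing them" point can be shown with a back-of-the-envelope comparison. Every figure below is hypothetical, purely to illustrate that the cost categories shift shape when a CRM service moves to the cloud:

```python
# Illustrative-only monthly figures for a CRM service. Moving to the
# cloud removes some line items but introduces others.
in_house = {
    "hardware_amortised": 2000,
    "software_licences": 1500,
    "staff_time": 3000,
}
cloud = {
    "subscription": 4000,
    "integration_and_egress": 800,  # new cost category after migration
    "staff_time": 1200,             # admin effort shrinks but rarely vanishes
}

monthly_delta = sum(cloud.values()) - sum(in_house.values())
```

The headline delta is small either way; the real decision rests on how confident the organisation is in the in-house baseline, which is exactly the number most organisations don't have.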
While there isn’t much difference in how you develop the application itself, supporting IoT devices does require software engineers to become proficient with device-level application programming interfaces (APIs). IoT integration is all about APIs, the logical connectors that allow applications to communicate with each manufacturer’s IoT devices. An API can act as a data interface, exposing the readings a device transmits to your applications, or as a function interface, allowing your application to control the device. While device manufacturers are taking steps to ensure that their APIs are well defined, developers must still learn how to use IoT device interfaces effectively. Fortunately, third-party providers are producing tools that make each IoT device manufacturer’s APIs easier for developers to use.
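The data-interface/function-interface distinction can be sketched with a toy wrapper. The class, method names, and payloads below are invented for illustration and do not correspond to any real vendor SDK:

```python
# Hypothetical wrapper around one manufacturer's thermostat API.
class ThermostatAPI:
    def __init__(self):
        self._target_c = 20.0

    # Data interface: expose readings the device transmits.
    def read_telemetry(self):
        return {"temperature_c": 21.5, "humidity_pct": 40}

    # Function interface: let the application control the device.
    def set_target_temperature(self, celsius):
        self._target_c = celsius
        return {"status": "ok", "target_c": self._target_c}

device = ThermostatAPI()
reading = device.read_telemetry()
ack = device.set_target_temperature(19.0)
```

A third-party integration tool typically hides many such vendor-specific wrappers behind one common interface, which is why those tools save developers so much effort.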
Ongoing innovation and continuous feature updates are hallmarks of the platform business model. In The Cookie Dining case, the platform is expanding on a number of fronts. A feedback manager, which will let customers rate their food and delivery experience, is scheduled for release by the end of September. Integration with Yelp, which posts customer reviews of restaurants and other businesses, is also slated for September. Cookie is also at work on a point-of-sale (POS) system for in-store sales, Manojlovic said. Cookie POS v1.0 should be available for beta testing in December, he noted, adding that the idea is to unify "the whole sales experience for the restaurant."
The issue is that many makers of "things" still apply a traditional "box" mentality to products and do not consider the extra revenue opportunities of licensing-controlled embedded software and applications. Most of these companies are first-time software providers, mainly device manufacturers and OEMs that can now monetise their software as well as the devices via the IoT. For these companies, the IoT represents a significant market opportunity. “By monetising the software on their devices, these vendors will be able to increase and drive recurring revenue streams, creating billions of dollars of additional value,” Wurster adds. ... For the foreseeable future, Wurster believes the IoT will drive business transformation for many device manufacturers, enabling them to use software on the device to differentiate product and solution offerings.
The point about data algebra is that it genuinely represents data, any data, in a software-compatible manner. There is a back story to why this algebra was created. It was not a small effort, and it was years in gestation. Algebraix Data Corporation, founded by software engineers who believed a mathematical approach to data was possible, spent over six years creating, enriching and proving data algebra’s applicability. This extensive research primarily involved applying data algebra directly to core data management activities: defining data, organizing data, querying data and optimizing those queries for performance. The research focused there partly because the best way to prove data algebra was judged to be using it to manipulate and transform data in applications that did little else: database optimizers, for data in both tables and graphs.
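The core idea of representing "any data" algebraically can be illustrated with a toy model: treat each record as a set of (attribute, value) pairs, a table as a set of such records, and a query as a set operation. This is a sketch of the concept only, not Algebraix's actual implementation:

```python
# Model records as sets of (attribute, value) pairs and a table as a
# set of records; queries then become ordinary set operations.
row1 = frozenset({("id", 1), ("city", "Austin")})
row2 = frozenset({("id", 2), ("city", "Boston")})
table = {row1, row2}

def select(table, attribute, value):
    """Restriction: keep the records containing (attribute, value)."""
    return {row for row in table if (attribute, value) in row}

austin = select(table, "city", "Austin")
```

Because everything is sets, the same machinery applies unchanged to graph edges, documents, or key-value data, which is what makes a set-based algebra attractive for query optimizers.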
When the first system of record’s data meets the organization’s quality standards for that data type, the organization should build a real-time data quality firewall around it. With a data quality firewall, no matter where the data is coming from (online customers, a merchant, etc.), the firewall intercepts the data, cleanses it, and only then allows it to enter the system of record. ... Profiling is both a technical and a management challenge. Questions will remain: How much more money should we dedicate to cleansing data? When is it clean enough? What return on investment do we need to make a particular cleansing process worthwhile? Again, these are strategic questions for the organization to evaluate as it weighs the importance of its data sets.
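The intercept-cleanse-admit flow can be sketched as a small pipeline. The rules and field names here are assumptions for illustration; a real firewall would apply many more validations and route rejects to a remediation queue:

```python
# Minimal data-quality firewall: every inbound record is cleansed and
# validated before it may enter the system of record.
system_of_record = []
rejected = []

def cleanse(record):
    # Normalize fields; real firewalls apply many more rules.
    record = dict(record)
    record["email"] = record.get("email", "").strip().lower()
    return record

def is_valid(record):
    return "@" in record.get("email", "")

def firewall_ingest(record):
    record = cleanse(record)
    if is_valid(record):
        system_of_record.append(record)
    else:
        rejected.append(record)   # held for remediation, never admitted

firewall_ingest({"email": "  Alice@Example.COM "})
firewall_ingest({"email": "not-an-email"})
```

The key property is that the system of record only ever sees cleansed, validated records, regardless of which source produced them.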
"We have always expected business continuity and disaster recovery considerations to be incorporated in an institution's business model," the report states. "However, in addition to preparing for natural disasters and other physical threats, continuity now also means preserving access to customer data and the integrity and security of that data in the face of cyber-attacks." That's why FDIC says it "encourages banks to practice responses to cyber-risk as part of their regular disaster-planning and business-continuity exercises." The FDIC suggests that community bank directors use the cyber challenge program to openly discuss operational risks with their peers and employees and review the potential impact of cyber-attacks and other technology disruptions on their customers and operations.
Quote for the day: "Reduce the layers of management. They put distance between the top of an organization and the customers." -- Donald Rumsfeld