Legitimizing the idea of internal customers puts IT in a subservient position, where everyone in IT has to make their colleagues happy, whether doing so makes sense for the business or not, let alone whether it encourages the company’s actual customers to buy more products and services. ... Want to do some damage? Establish formal service level agreements, insist your “internal customers” sign them, and treat these SLAs like contracts. And if you really want IT to fail, argue about whether you’ve satisfied your SLAs every time an “internal customer” (there’s that word again) suggests IT isn’t doing what they need it to do. It’s a great way to keep relationships at arm’s length.
While we’ve had elements of a digital supply chain for quite some time, in this more holistic sense of a digital nervous system, we are only beginning to scratch the surface. A nervous system takes in our sensory inputs – sight, sound, touch, taste, and smell – and lets a person react either instantly or more thoughtfully to what is happening around them. While a WMS (warehouse management system) is a digital supply chain application, it has a limited scope in how it uses sensor data. It certainly does not react in the holistic way that a nervous system does. There has been an explosion of new sensor data available for creating digital supply chains. We are using, or learning to use, SNEW data – social media, news, event, and weather data.
The term "IoT system" generally refers to a set of IoT devices and the middleware infrastructure that manages their networking and interaction. Specific software can be deployed logically above an IoT system to orchestrate system activities to provide both specific services and general-purpose applications (or suites of applications). Providing specific services means enabling stakeholders and users to access and exploit things and direct their sensing or actuating capabilities. This includes coordinated services that access groups of things and coordinate their capabilities. For instance, in a hotel conference room, besides providing access to and control of individual appliances, a coordinated service could, by accessing and directing the lighting system, the light sensors, and the curtains, change the room from a presentation configuration to a discussion configuration.
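The idea of a coordinated service directing a group of things can be sketched in a few lines. The device classes and method names below are purely illustrative stand-ins, not a real IoT API; the point is that the service layer composes individual sensing and actuating capabilities into one room-level operation.

```python
# Hypothetical device interfaces -- names are illustrative, not a real IoT API.
class Lights:
    def set_level(self, pct):
        self.level = pct          # brightness as a percentage

class Curtains:
    def set_position(self, pos):
        self.position = pos       # "open" or "closed"

class LightSensor:
    def read_lux(self):
        return 420                # stubbed ambient-light reading

# A coordinated service accesses a *group* of things and directs their
# capabilities together, instead of exposing each appliance in isolation.
class ConferenceRoom:
    def __init__(self, lights, curtains, sensor):
        self.lights, self.curtains, self.sensor = lights, curtains, sensor

    def presentation_mode(self):
        self.curtains.set_position("closed")   # darken for the projector
        self.lights.set_level(20)

    def discussion_mode(self):
        self.curtains.set_position("open")
        # Use the ambient-light reading to decide how much light to add.
        self.lights.set_level(30 if self.sensor.read_lux() > 300 else 80)

room = ConferenceRoom(Lights(), Curtains(), LightSensor())
room.discussion_mode()
print(room.lights.level, room.curtains.position)
```

A single call such as `presentation_mode()` is what distinguishes the coordinated service from plain per-appliance access: the caller never touches the individual devices.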
Today, cybersecurity is high on everyone’s radar, a powerful new reality penetrating all facets of cyberspace. On a near-daily basis we read of damage to hardware, software, content, products, and processes. No one is immune. No one is safe. This new reality — with the variety of threats, exploits, and damages that seemingly multiply day by day — creates new markets, new business opportunities, new strategic concerns, and threats to our collective views of law and order. These elements are shaping a new normal which is not yet fully understood. But they are clearly anchored in the nature of the hardware, the ever-changing uses and functions enabled by evolving software, and the power of human ingenuity. When the Internet was designed, threats to security were not central to the basic architecture nor to the core design principles.
The survey also found that massive amounts of time and money are wasted on ineffective endpoint security solutions, and that lack of endpoint visibility and control is a major issue. Ineffective endpoint security protection costs an average of $6 million in detection, response, and wasted time. Only 27% of survey respondents are confident that their company can identify, in a highly effective fashion, the endpoint devices that pose the greatest risk. Worse, 20% reported having no endpoint security strategy at all. On average, according to the report, companies spend over 1,150 hours per week attempting to detect and contain insecure endpoints, which translates into $6 million spent detecting and containing insecure endpoints or suffering unplanned downtime. Nearly half of those hours are spent chasing false positives, which equates to $1.37 million of annual wasted expenditure.
First, hold virtualization implementers to high standards. We have learned a lot in the last few decades about development methodologies that reduce defects and quickly detect and remediate defects that make it through development and into production. When consistently practiced, DevOps, the methodology that removes the traditional boundaries between development, deployment, and production, and embraces continual improvement, has greatly increased system reliability. Hypervisor implementations have fared well. Although potential exploits have been found, the hypervisor developers have also been diligent about fixing problems. This has kept the number of actual malicious exploits low. However, developers make mistakes and diligence is not absolute protection. Some flaws always creep in.
With microservices, your code is broken into independent services that run as separate processes. Output from one service is used as an input to another in an orchestration of independent, communicating services. Microservices are especially useful for businesses that do not have a pre-set idea of the array of devices their applications will need to support. By being device- and platform-agnostic, microservices enable businesses to develop applications that provide consistent user experiences across a range of platforms, spanning web, mobile, IoT, wearable, and fitness-tracker environments. Netflix, PayPal, Amazon, eBay, and Twitter are just a few enterprises currently using microservices.
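The "output of one service is input to another" pattern can be sketched with the Python standard library. This is a toy, assuming a hypothetical inventory service and a pricing consumer, with a thread standing in for a separate process; in a real deployment each service would run and scale independently.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "inventory" service: answers stock queries over HTTP.
class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        product_id = self.path.strip("/")
        body = json.dumps({"product": product_id, "in_stock": 3}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example's output quiet

# Bind to an ephemeral port and serve in a background thread
# (a stand-in for the service running as its own process).
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Hypothetical "pricing" service: consumes the inventory service's
# output -- one service's output becomes another service's input.
def quote(product_id):
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/{product_id}") as resp:
        stock = json.load(resp)
    price = 9.99 if stock["in_stock"] > 0 else None
    return {"product": product_id, "price": price}

print(quote("widget-42"))
```

Because the services only share an HTTP contract, either side could be rewritten in another language or redeployed to another platform without touching the other, which is where the device- and platform-agnostic benefit comes from.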
Every company has to decide where to make its investments. Some BI company might come along and say “we are the best for the Hortonworks distribution of Hadoop”, and that might fly for a while. But I have to say I have been in this business for 27 years and every three years there is a new data technology which is the rage. I remember one that was billed as the world’s fastest database, and I asked one of their sales people what was in the next release, and he said “joins”. That’s a colossal joke because there is no serious problem that you can solve without doing table joins. So, yes, as long as you don’t need to ask the next question or need mathematics or need more than two users to run a query, it’s super-fast and great.
Bounded Contexts can be found by grouping user stories together. For example, searching for products by full-text search, by category, or by recommendation might be part of the same Bounded Context. Of course the split is not clear-cut: depending on the complexity, search might itself be split into multiple Bounded Contexts. A user journey can also provide ideas for a split into self-contained systems (SCSs). The customer journey describes the steps a customer takes while interacting with the system, e.g. searching for products, check-out, or registration. Each of these steps could be a candidate for an SCS. Usually these steps have few dependencies on one another. Oftentimes there is a hand-over between these steps: the shopping cart is handed over to the checkout, where it becomes an order, and is then handed over to fulfillment.
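The hand-over between the steps of the journey can be made concrete with a small sketch. The domain types and functions below are hypothetical; the point is that each Bounded Context keeps its own model, and the only coupling between contexts is the object handed across the boundary.

```python
from dataclasses import dataclass

# Each Bounded Context keeps its own model of the same business flow.

@dataclass
class Cart:              # "shopping" context
    items: list

@dataclass
class Order:             # "checkout" context
    lines: list
    total: float

@dataclass
class Shipment:          # "fulfillment" context
    order_lines: list
    address: str

# Hand-over points between contexts: the cart becomes an order, the
# order becomes a shipment. Each step needs only the previous step's
# output, which keeps the dependencies between the SCSs small.
def checkout(cart: Cart, prices: dict) -> Order:
    total = sum(prices[item] for item in cart.items)
    return Order(lines=list(cart.items), total=total)

def fulfill(order: Order, address: str) -> Shipment:
    return Shipment(order_lines=order.lines, address=address)

cart = Cart(items=["book", "pen"])
order = checkout(cart, {"book": 12.0, "pen": 2.5})
shipment = fulfill(order, "1 Main St")
print(order.total)   # 14.5
```

Note that `fulfill` never sees the `Cart`: once the hand-over has happened, the upstream context's model is irrelevant, which is exactly the low coupling the split into SCSs is after.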
This enables the team, led by Dr. David Matthews, Senior Lecturer in Virology at the University, to examine how the virus had evolved over the previous year, informing public health policy in key areas such as diagnostic testing, vaccine deployment, and experimental treatment options. This complex data analysis took around 560 days of supercomputer processing time, generating nine thousand billion letters of genetic data before arriving at the virus’s 18,000-letter genetic sequence for all 179 blood samples. This is just one of many examples of how HPC at the University is contributing to significant research projects. Now in its 10th year of using HPC, the University has seen each phase, from the first supercomputer through to BC4, grow bigger and better than the last, and in years to come that trend looks set to continue.
Quote for the day:
"Once you've accepted your flaws no one can use them against you." -- George R.R. Martin