Containers allow applications to be abstracted from the underlying infrastructure on which they run. They give developers a way to package applications into smaller chunks that can run on different servers, thereby making them easier to deploy, maintain, and update. But securing containerized applications requires a somewhat different approach compared with securing traditional application environments. That's because they are a bit harder to scan for security vulnerabilities, the images on which they are built are often unverified, and standardization in the space is still evolving. Importantly, containers also can be spun up and down quickly, making them somewhat ephemeral in nature from a security standpoint. "Even though container technology may be a new concept to companies deploying them, the idea behind them should be familiar," says Kirsten Newcomer, senior principal product manager, security at Red Hat. Organizations need to think about security through the application stack both before deploying a container and throughout its life cycle.
Analysts said opening DNA Center to the world is potentially a good move and could help customers more easily build strategic applications, but it will take a big effort to make it a successful venture. “DNA Center is Cisco's strategic management platform going forward, and we believe it will consume functionality that is currently distributed across several products. This should help as Cisco customers have cited multiple management tools as an ongoing challenge,” said Andrew Lerner, research vice president with Gartner. “So, this is a move in the right direction, but much work remains. For example, much of the data-center-networking portfolio including ACI and Nexus 9000 switches are not well integrated into DNA Center at this point,” Lerner said. “This announcement is about opening up DNA Center’s capabilities via API to do things such as orchestrating with other vendors and platforms – like Infoblox or ServiceNow. This can add to the value of the DNA Center platform if third parties and customers use the APIs and/or SDK to develop integrations. However, that potential is largely aspirational at this point, as the depth and breadth of integrations that will be created are undetermined,” Lerner said.
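To make the "opening up via API" point concrete, here is a minimal sketch of how a third-party integration typically talks to DNA Center's Intent API: exchange credentials for a token, then call REST endpoints with that token. The endpoint paths follow Cisco's published DevNet documentation, but the hostname and credentials are placeholders, and the sketch only builds the requests rather than sending them.

```python
# Sketch of a DNA Center Intent API client (paths per Cisco DevNet docs;
# host and credentials below are placeholders, not real systems).
import base64
import urllib.request


def auth_request(host, user, password):
    """Build the POST that exchanges basic-auth credentials for an API token."""
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"https://{host}/dna/system/api/v1/auth/token",
        headers={"Authorization": f"Basic {creds}"},
        method="POST",
    )


def device_list_request(host, token):
    """Build the GET that lists devices DNA Center manages."""
    return urllib.request.Request(
        f"https://{host}/dna/intent/api/v1/network-device",
        headers={"X-Auth-Token": token},
    )


# Actually sending would be e.g. urllib.request.urlopen(auth_request(...))
# and reading the "Token" field from the JSON response.
```

An integration like the Infoblox or ServiceNow examples Lerner mentions would sit on top of calls like these, mapping DNA Center's inventory and events into the other platform's data model.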
Unfortunately, encryption is currently under attack from not one, but two sources – governments seeking backdoor access to encryption algorithms, and criminals wanting to breach encryption to gain access to sensitive data. Although she later backed down, former UK home secretary Amber Rudd demanded last year that technology companies create backdoors in messaging apps to give the security services access to encrypted communications. More recently, FBI director Christopher Wray renewed his call for backdoors in encryption, exclusively for the use of law enforcement agencies, and US senator Dianne Feinstein is spearheading a campaign for law enforcement to have access to any information sent or stored electronically. “I think there is a naivety about the cyber world and how to secure it,” said Hudson. “People tend to run off and make proclamations, like installing a backdoor is a really good idea.” Governments want encryption to work, but they also want to be able to access encrypted information in order to pursue criminals. However, installing a backdoor in an encryption system would create a fundamental vulnerability in the protection that would inevitably be exploited.
Many GPU-based solutions are based on direct-attached storage (DAS) deployment models, which makes AI's distributed training and inferencing very difficult to do. As a result, staging and management of these deep learning data pipelines can become complex, time-consuming tasks. This bottleneck is being addressed with non-volatile memory express, or NVMe, which was originally designed to provide better connectivity between solid-state drives (SSDs) and traditional enterprise servers. Now, it is being baked into new I/O fabrics to improve AI workloads. The thinking is that NVMe over Fabrics (NVMeF), as these interfaces are called, will help reduce the overhead in converting between network protocols and in managing the idiosyncrasies of each type of SSD. This could allow CIOs to justify the cost of AI apps that use larger data sets. There are risks with NVMeF, starting with the high cost of investing in the bleeding edge. Plus, the industry has not settled on a vendor-neutral approach to NVMeF yet, which means CIOs also need to be wary of vendor lock-in as they choose a product.
Gartner predicts that by 2020, more enterprises will use CASBs than not, which represents a big jump from the 10 percent that used them at the end of 2017. Several years ago, many enterprises purchased CASBs to stem the tide of what was then called shadow IT and is now considered standard operating procedure in many businesses. IT managers would get a call from their commercial Dropbox sales rep and be told that hundreds of their users were using personal Dropbox accounts, which was often news that they didn’t want to hear. That was the initial sales pitch by the CASB vendors: we can discover where all your cloud data lies and help to protect it. Traditional security tools didn’t provide this visibility, especially when the network traffic was never seen by the corporate data center. “I want to have control over my data, even when it isn’t residing in my own machines,” said Steve Riley of Gartner. The first attempts at using CASBs were eye-opening for many corporate IT managers. When they were first deployed, IT would find ten times the number of cloud services in use than they had estimated, according to Riley. That turned into a big selling point.
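The discovery capability described above boils down to mining outbound traffic records for SaaS domains employees are already using. The following is an illustrative sketch, not a real CASB: the log format and the catalogue of known cloud services are invented for the example.

```python
# Toy version of CASB cloud discovery: scan "user URL" proxy-log lines
# for hits against a catalogue of known SaaS domains (catalogue invented).
from collections import Counter
from urllib.parse import urlparse

KNOWN_SAAS = {"dropbox.com", "box.com", "drive.google.com", "slack.com"}


def discover_saas(log_lines):
    """Count accesses to known SaaS domains in simple 'user URL' log lines."""
    hits = Counter()
    for line in log_lines:
        _user, url = line.split(maxsplit=1)
        host = urlparse(url).hostname or ""
        for domain in KNOWN_SAAS:
            # Match the domain itself or any subdomain of it.
            if host == domain or host.endswith("." + domain):
                hits[domain] += 1
    return hits


logs = [
    "alice https://www.dropbox.com/upload",
    "bob https://drive.google.com/file/d/x",
    "alice https://www.dropbox.com/home",
    "carol https://intranet.corp.example/wiki",
]
```

Commercial products do this against catalogues of tens of thousands of services and score each one for risk, which is what produced the "ten times more than estimated" surprises Riley describes.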
The single largest effect we observed involved office politics, which can be a serious problem for data scientists because many feel poorly equipped to handle it. And companies that are building data science teams may struggle to provide the support and direction they need—especially if they’re new to the game. Data scientists in a strife-ridden work environment—compared with one free of infighting, and with all other factors being equal—had job satisfaction that was 1.3 points lower, making it the biggest move we saw in the entire data set. But it would be a mistake to think that allowing them to work remotely would be an effective bandage for a difficult office environment. We discovered that the more people work off-site, the more affected they are by political issues: Remote workers in politicized work environments experienced a job satisfaction decline of 1.5 points, compared with a decline of 1.2 points for employees always in the office. Clearly, a strong corporate culture gives you the flexibility to allow more remote work.
Predictive analytics has been used in a variety of fields to minimize waste, increase effective utilization of resources and help all stakeholders find better parity with organizational goals. It hasn’t been put to the test in the field of innovation to the same extent as the fields of finance, public polling, law enforcement and a number of others. However, that is primarily because it takes time to develop predictive analytics models, and the forecasts are often made years into the future. ... Demographics are major predictors of demand for various products and services. One of the main reasons that companies have difficulty introducing successful products to the market is that they have a hard time forecasting changes in demographics. They often assume that the composition of income, ethnicities, gender, and other factors among the general population will remain static. They are often blindsided when the representation of some groups grows faster than expected, which changes the level of demand for their product. ... Studying other products can be a great way to determine the likelihood of success for a similar one. If a product developed around a particular market failed in the past, there is a good chance that it will fail again, unless there has been a major shift in the market.
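The static-composition mistake can be shown in a few lines: instead of assuming a group's share of the population stays fixed, fit a simple trend to past census points and project it forward. The data values below are invented purely for illustration.

```python
# Toy demographic forecast: ordinary least squares on (year, population-share)
# points, then a projection. Share figures are invented for the example.
def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return my - b * mx, b


years = [2000, 2005, 2010, 2015]
share = [0.10, 0.12, 0.14, 0.16]  # hypothetical segment's population share

a, b = linear_fit(years, share)

# The static assumption holds the 2015 share (0.16) constant; the trend
# projection instead carries the observed growth forward to 2025.
projected_2025 = a + b * 2025
```

Even this crude linear model puts the segment at a fifth of the population by 2025 rather than the 16 percent a static assumption would plan for, which is exactly the kind of gap that blindsides product planning.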
Mobile tech, and especially mobile brought into companies through BYOD, has unique challenges for companies that need to comply with the General Data Protection Regulation (GDPR) — and that’s virtually all companies, not just the ones in Europe. The regulation compels companies to manage personal data and protect privacy, and it gives individuals a say in what data about them is used and how. GDPR has several disclosure and control requirements, such as providing notice of any personally identifiable data collection, notifying of any data breaches, obtaining consent of any person for whom data is being collected, recording what and how data is being used, and providing a right for people whose data is being collected to see, modify, and/or delete any information about them from corporate systems. The problem is many corporate systems now extend into mobile branches that include smartphones and, in some cases, tablets. Analysts at J.Gold Associates, LLC. estimate that in about 35 to 50 percent of cases, these devices are not actually corporate devices, but personal devices being used by employees of the company in their daily work.
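The see/modify/delete rights listed above map directly onto operations a corporate system has to support. The following minimal sketch models them over an invented in-memory store; a real implementation would also have to reach every mobile and BYOD copy of the data, which is precisely the hard part the paragraph describes.

```python
# Minimal model of GDPR data-subject rights over a single (invented) store.
# Real systems must apply these across every system and device holding copies.
def subject_access(store, subject):
    """Right of access: return a copy of everything held on the subject."""
    return dict(store.get(subject, {}))


def subject_rectify(store, subject, field, value):
    """Right to rectification: correct a stored field."""
    store.setdefault(subject, {})[field] = value


def subject_erase(store, subject):
    """Right to erasure: remove the subject's records entirely."""
    store.pop(subject, None)
```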
What technology has done is break up this glue between value chains as it has made transaction costs low by lowering the cost of communication and information exchange. As the cost of exchanging information goes down, at some point the balance between bottom-up information flow and top-down decision-making goes out of kilter. There is too much information for the top to process and still make efficient decisions. To use a systems term, the throughput of information renders centralized decision-making an inefficient value chain mechanism. The system - in this case a company or an organization - therefore adapts by fragmenting its structure and changing the balance between responsibility and accountability of its agents. A new structure begins to emerge with new job titles, new leadership positions, new jobs, and new requirements, and it even has cultural impacts. Capitalism after all is as much a cultural artefact as it is an economic one. It is this adaptive phenomenon that I call the Theory of Fragmentation: this continuous breaking of existing rigid structures to create a flatter, more spread out structure that stems from the combinatorial evolution of technology.
“A cloud native app is architected specifically to run in the elastic and distributed nature required by modern cloud computing platforms,” says Mike Kavis, a managing director with consulting firm Deloitte. “These apps are loosely coupled, meaning the code is not hard-wired to any of the infrastructure components, so that the app can scale up and down on demand and embrace the concepts of immutable infrastructure. Typically, these architectures are built using microservices, but that is not a mandatory requirement.” For cloud-native applications, the big difference then is really how the application is built, delivered, and operated, says Andi Mann, chief technology advocate at Splunk, a cloud services provider. “Taking advantage of cloud services means using agile and scalable components like containers to deliver discrete and reusable features that integrate in well-described ways, even across technology boundaries like multicloud, which allows delivery teams to rapidly iterate using repeatable automation and orchestration.” Cloud-native app development typically includes devops, agile methodology, microservices, cloud platforms, container technologies like Docker and orchestrators like Kubernetes, and continuous delivery—in short, every new and modern method of application deployment.
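Kavis's "not hard-wired to any of the infrastructure components" point is the twelve-factor config idea: the app reads its wiring from the environment, so the same artifact runs unchanged on a laptop, in a container, or across clouds. A minimal sketch, with invented variable names and defaults:

```python
# Twelve-factor-style configuration: infrastructure details come from the
# environment, not the code. Names and defaults here are illustrative.
import os


def load_config(env=os.environ):
    """Read service wiring from environment variables, with local defaults."""
    return {
        "port": int(env.get("PORT", "8080")),
        "db_url": env.get("DATABASE_URL", "sqlite:///local.db"),
    }
```

In a Kubernetes or Docker deployment those variables are injected by the platform, which is what lets the container scale up and down, or move between clouds, without a rebuild.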
Quote for the day:
"If no good can come from a decision, then no decision should be made." -- Simon Sinek