Lack of ongoing training and recertification. Such training helps reduce the number and severity of hybrid cloud misconfigurations. Given that misconfigurations are the leading cause of hybrid cloud breaches today, it’s surprising more CIOs aren’t defending against them by paying for their entire teams to get certified. Each public cloud platform provider has a thriving sub-industry of partners that automate configuration options and audits. Many can catch incorrect configurations by constantly scanning hybrid cloud configurations for errors and inconsistencies. Automated configuration checking is a start, but a CIO needs a team to keep these scanning and audit tools current and to oversee them for accuracy; automated checkers aren’t strong at validating unprotected endpoints, for example, and automation efforts often overlook key factors. Inconsistent, often incomplete controls and monitoring across legacy IT systems must also be addressed, along with inconsistency in monitoring and securing public, private, and community cloud platforms. Lack of clarity over who owns which part of a multicloud configuration persists because IT and the line of business debate who will pay for it.
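The kind of automated configuration scanning described above can be sketched as a rules engine run over a resource inventory. This is a minimal illustration only: the resource records, rule names, and fields are hypothetical, whereas real scanners pull live inventory from each cloud provider's API.

```python
# Minimal sketch of an automated configuration audit.
# Resource records and rule names here are hypothetical examples.

RULES = {
    "public_storage": lambda r: r.get("type") == "bucket" and r.get("public", False),
    "open_admin_port": lambda r: r.get("type") == "vm" and 22 in r.get("open_ports", []),
    "unprotected_endpoint": lambda r: r.get("type") == "endpoint" and not r.get("auth_required", True),
}

def audit(resources):
    """Return a list of (resource_id, rule_name) findings."""
    findings = []
    for resource in resources:
        for name, check in RULES.items():
            if check(resource):
                findings.append((resource["id"], name))
    return findings

inventory = [
    {"id": "bucket-1", "type": "bucket", "public": True},
    {"id": "vm-1", "type": "vm", "open_ports": [443]},
    {"id": "api-1", "type": "endpoint", "auth_required": False},
]
print(audit(inventory))  # [('bucket-1', 'public_storage'), ('api-1', 'unprotected_endpoint')]
```

Note the limitation the article points out: a rule like `unprotected_endpoint` only fires for endpoints the inventory already knows about, which is why a team still has to keep the tooling and its coverage current.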
Although it is common to have the cyber risk oversight function fall to the audit committee, this should be carefully considered given the burden on audit committees. An alternative to consider, depending on the magnitude of the oversight responsibility, is the formation of a dedicated, cyber-specific board-level committee or sub-committee. At the same time, because cybersecurity considerations increasingly affect all operational decisions, they should be a recurring agenda item for full board meetings. Companies that already have standalone risk or technology committees should also consider where and how to situate cybersecurity oversight. The appointment of directors with experience in technology should be evaluated alongside board tutorials and ongoing director education on these matters. Robust management-level systems and reporting structures support effective board-level oversight, and enterprise-wide cybersecurity programs should be re-assessed periodically, including to ensure they flow through to individual business units and legacy assets as well as newly acquired or developed businesses.
This is not, of course, a problem unique to open-source software. With open-source software you can actually see the code, so it's easier to make an SBOM. Proprietary programs, like the recently and massively exploited Microsoft Exchange, are black boxes; there's no way to really know what's in Apple or Microsoft software. Indeed, the biggest supply-chain security disaster so far, SolarWinds' catastrophic failure to secure its software supply chain, happened in a proprietary software chain. Besides SPDX, the Linux Foundation recently announced a new open-source software signing service: the sigstore project. Sigstore seeks to improve software supply chain security by enabling the easy adoption of cryptographic software signing backed by transparency log technologies. Developers are empowered to securely sign software artifacts such as release files, container images, and binaries, and these signing records are then kept in a tamper-proof public log. The service will be free for all developers and software providers to use. The sigstore code and the operational tooling that will make this work are still being developed.
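The "tamper-proof public log" idea can be illustrated with a simple hash chain: each entry's hash covers the previous entry, so altering any record invalidates every hash after it. This is a simplified sketch of the tamper-evidence principle only; sigstore's actual log uses Merkle-tree transparency log technology, and the record strings below are made up.

```python
import hashlib

def append(log, record):
    """Append a record whose hash chains to the previous entry."""
    prev = log[-1]["hash"] if log else "0" * 64
    digest = hashlib.sha256((prev + record).encode()).hexdigest()
    log.append({"record": record, "hash": digest})

def verify(log):
    """Recompute the chain; any tampered record breaks verification."""
    prev = "0" * 64
    for entry in log:
        expected = hashlib.sha256((prev + entry["record"]).encode()).hexdigest()
        if entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append(log, "release-1.0.tar.gz signed-by=dev@example.com")
append(log, "container-image:v2 signed-by=dev@example.com")
print(verify(log))            # True
log[0]["record"] = "tampered"
print(verify(log))            # False
```

Because each hash depends on everything before it, an attacker cannot quietly rewrite an old signing record without being detected by anyone who re-verifies the chain.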
WAFs are specific to each application and, therefore, require different protections. The filtering, monitoring, and policy enforcement (such as blocking malicious traffic) provide valuable protections but carry cost implications and consume computing resources. In a DevOps-fed cloud environment, it’s challenging to keep WAFs current with the constant flow of updates and changes. Introducing security into the CI/CD pipeline can solve that problem, but only for those apps being developed that way. It’s impossible to build security sprints into old third-party apps or applications deployed by different departments. The mere existence of those apps presents risk to the enterprise. They still need to be secured, and WAFs are likely still the best option. It’s also important to remember that no approach to cybersecurity will be perfect and that an agile DevOps methodology won’t be enough on its own. Even in an environment believed to be devoid of outdated or third-party apps, you can never be sure what other groups are doing or deploying—shadow IT is a persistent problem for enterprises.
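The filtering and policy enforcement a WAF performs can be sketched as matching request fields against a blocklist of attack signatures. This is a toy illustration: the patterns and the flat request format are simplified assumptions, not production WAF rules, which are far richer and tuned per application.

```python
import re

# Naive example signatures; real WAF rulesets are per-application.
SIGNATURES = [
    re.compile(r"(?i)\bunion\s+select\b"),  # crude SQL injection check
    re.compile(r"(?i)<script\b"),           # crude XSS check
    re.compile(r"\.\./"),                   # crude path traversal check
]

def inspect(request):
    """Return 'block' if any request field matches a signature, else 'allow'."""
    for value in request.values():
        for pattern in SIGNATURES:
            if pattern.search(value):
                return "block"
    return "allow"

print(inspect({"path": "/search", "q": "cloud security"}))           # allow
print(inspect({"path": "/search", "q": "1 UNION SELECT password"}))  # block
```

The per-application cost the article mentions follows directly from this shape: every app needs its own signature set, and every DevOps-driven change to the app can invalidate or bypass existing rules.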
The brain-machine interface is a field of study that captures this neural process to control external software and hardware. Though the technology is at an early stage, these are the current possibilities of the brain-machine interface. Brain-controlled wheelchair: a technique to ease the lives of disabled people; with concentration, users can navigate the wheelchair through familiar indoor environments. Brain-controlled robotic arm: a brainwave sensor captures brain signals whenever the user blinks, concentrates, or meditates, and the robotic arm is moved, via an EEG sensor, based on the brain data collected. Brain keyboard: paralyzed people often cannot communicate with their surroundings, but a brain keyboard can solve that; EEG sensors read eye blinks and the system translates them into text on a display. Brain-controlled helicopter: can you imagine flying a helicopter with your brain? It's possible; the helicopter flies according to the pilot's concentration and meditation, which navigate it up and down. Brain-controlled password authentication: EEG can be applied to biometric identification because brain signals and patterns are unique to every individual.
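Several of these applications boil down to mapping a processed EEG signal, such as a concentration score, to a discrete command. The sketch below is hypothetical: real BMI pipelines filter and classify raw multi-channel EEG, and the score range and thresholds here are illustrative only.

```python
# Hypothetical thresholding of a 0-100 EEG concentration score into
# movement commands, as a wheelchair or helicopter controller might.

def to_command(concentration):
    """Map a concentration score to a movement command."""
    if concentration >= 70:
        return "forward"  # sustained focus drives motion
    if concentration >= 40:
        return "hover"    # moderate focus holds position
    return "stop"         # low focus halts for safety

readings = [82, 55, 20]
print([to_command(r) for r in readings])  # ['forward', 'hover', 'stop']
```

Defaulting to "stop" at low concentration reflects the safety-first design such assistive devices need: losing focus should never keep the device moving.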
Security and compliance are the two biggest barriers to adoption. However, for the majority of business leaders, the cloud is more secure, and compliance easier to maintain, than on-premises infrastructure. Only a tiny minority of decision-makers find the public cloud less capable than on-premises in terms of both security and data compliance. Although the cloud is superior in capability, switching to cloud-native security and compliance models is a struggle for some enterprises. Still, almost everyone is planning to grow their cloud program, despite the concerns some have expressed about vendor lock-in. The vast majority of enterprises will continue their cloud journey, although only around a third are predicted to go full-steam ahead, migrating “as quickly as is feasible”. This is by no means the case for all enterprises, though: around half wish to migrate more cautiously. Vendor lock-in also appears to be a major issue. The majority of enterprises say they are significantly concerned by the consequences of putting all their eggs in one cloud provider’s basket; only a fearless few do not see this as a concern.
In the next 3-5 years, the digital insurance consumer will likely remain the millennial, with higher levels of income and education. It is important, though, not to assume homogeneity or to develop solutions based on lazily assessed group characteristics. Personalisation is more important now than it has ever been. Beyond functionality and ease of access, emotions and personal growth are key drivers of consumption behaviour, and, as in any other group, there is a diverse set of expectations and desires among millennials. Tailoring services and online buying journeys to the individual rather than the group is paramount; in the same way that offering life insurance immediately following a bereavement could be viewed as inappropriate, so too could an offer of social insurance be offensive to a staunch individualist. Certain benefits, although appealing on the surface to members of 'the group', may not work at a more nuanced level: a donation to an environmental charity with every policy bought will not appeal to every millennial.
Although Phil Reitinger, a former director of the National Cyber Security Center within the Department of Homeland Security, doesn’t expect the pipeline company's apparent ransom payment to serve as a catalyst for other ransomware gangs, he acknowledges that the impact the attack had on pipeline operations could encourage those interested in causing similar mayhem. "I don't see paying this particular ransom as that different from others, in the sense of opening up critical infrastructure as a target," he says. "Indeed, I expect there to be a reduction in criminal attacks on critical infrastructure as this ransomware gang now has a big target on its back," says Reitinger, who's now president and CEO of the Global Cyber Alliance. "However, the effectiveness of the attack may well increase the incentive for other actors who want to disrupt rather than cash a check." The ransomware-as-a-service gang behind DarkSide announced Thursday it was shutting down its operation after losing access to part of its infrastructure. A ransomware attack by a nation-state or highly competent gang, such as DarkSide, is almost impossible to stop, Maor says. But he points out that such attacks aren't easy to pull off.
Today’s world is increasingly data-driven, and companies are amassing unique data assets with numerous and valuable implications for analytics, modeling, insights, personalization, and targeting. Most companies don’t know how to turn their mountains of data into real value for their business or their customers, but the companies that do are rewarded with market valuations that far exceed their peers’. Amazon, Nike, Progressive, Hitachi, and others recognize that winning in a digitally driven world is about using data as currency, and the CIO and CTO are key to making that happen. But what does “data as currency” mean? For a while now, we have heard a number of leaders claim that “data is the new oil”. ... Data’s flexibility arguably gives it even more value than oil and other currencies, assuming companies can leverage it properly. For instance, many product companies sit on customer interaction data that could better predict demand and so optimize their manufacturing output and supply chains. Internal data on employee job assignments, self-driven trainings, and micro-experiences could help match talent to upcoming opportunities.
In a microservice architecture, we should develop with failure in mind, especially when communicating with other services. A monolithic application is, as a whole, either up or down. But when that application is broken down into a microservice architecture, it is composed of several services, all interconnected by the network, which implies that some parts of the application might be running while others fail. It is important to contain the failure to avoid propagating the error through the other services. Resiliency (or application resiliency) is the ability of an application or service to react to problems and still provide the best possible result. ... Elasticity (or scaling) is something Kubernetes has had in mind since the very beginning: for example, running the command kubectl scale deployment myservice --replicas=5 scales the myservice deployment to five replicas, or instances. The Kubernetes platform takes care of finding the proper nodes, deploying the service, and keeping the desired number of replicas up and running at all times.
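Containing a downstream failure so it does not propagate can be sketched as a retry-then-fallback wrapper around a remote call: the dependency is retried a few times, and if it stays down the caller returns a degraded but valid answer instead of raising. The service name and responses below are made-up illustrations, and real implementations usually add timeouts, backoff, and circuit breakers.

```python
def call_with_fallback(remote_call, fallback, retries=2):
    """Try remote_call up to retries+1 times; on persistent failure return fallback."""
    for _ in range(retries + 1):
        try:
            return remote_call()
        except ConnectionError:
            continue  # transient network failure: try again
    return fallback   # contain the failure with a degraded answer

def flaky_recommendations():
    # Stand-in for a downstream service that is currently unreachable.
    raise ConnectionError("recommendation-service unreachable")

result = call_with_fallback(flaky_recommendations, fallback=["bestsellers"])
print(result)  # ['bestsellers']
```

The calling service stays up and returns a generic answer, which is the resiliency property described above: reacting to a problem while still providing the best possible result.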
Quote for the day:
"To get a feel for the true essence of leadership, assume everyone who works for you is a volunteer." -- Kouzes and Posner