Connected cars are one of the most significant factors driving these risks. Because these vehicles are network-connected and include autonomous features, attackers have more potential entry points and can do more damage once inside. Self-driving vehicle sales could reach 1 million units by 2025 and skyrocket thereafter, so these risks will grow quickly. Automakers also face risks from connected manufacturing processes. This trend has emerged in other sectors that have embraced IT/OT convergence: one-quarter of energy companies reported weekly DDoS attacks after implementing Industry 4.0 technologies. As car manufacturers likewise adopt these systems, their attack surfaces will increase. ... One of the most important changes to make is segmenting networks. All IoT devices should run on systems separate from more sensitive endpoints and data to prevent lateral movement. Encrypting IoT communications and changing default passwords are also crucial. Manufacturers should keep these systems patched and run up-to-date anti-malware software.
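The hygiene steps above (segmentation, encryption, no default passwords) can be expressed as an automated policy check. The sketch below is illustrative only: the device records, field names and credential list are invented for the example, not drawn from any real inventory system.

```python
# Hypothetical sketch: audit an IoT device inventory for the basics the
# article lists -- default passwords, unencrypted traffic, missing
# network segmentation. All record fields here are invented examples.

DEFAULT_CREDENTIALS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def audit_device(device: dict) -> list[str]:
    """Return a list of policy violations for one device record."""
    findings = []
    if (device["username"], device["password"]) in DEFAULT_CREDENTIALS:
        findings.append("default credentials still in use")
    if not device.get("tls_enabled", False):
        findings.append("communications not encrypted")
    if device.get("vlan") == "corporate":
        findings.append("IoT device on the corporate segment (no segmentation)")
    return findings

devices = [
    {"name": "camera-01", "username": "admin", "password": "admin",
     "tls_enabled": False, "vlan": "corporate"},
    {"name": "sensor-07", "username": "ops", "password": "s3cr3t!",
     "tls_enabled": True, "vlan": "iot"},
]

for d in devices:
    for finding in audit_device(d):
        print(f"{d['name']}: {finding}")
```

A check like this would run against whatever asset inventory a manufacturer already maintains; the point is that each of the article's recommendations is mechanically verifiable.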
A management plane empowers line developers to accomplish all of this without a deep understanding of how to work with native data plane configuration files and policies for firewalls, networking, API management and application performance management. With the management plane, platform ops teams can reduce the need for developers to build domain-specific knowledge outside the normal realm of developer expertise. For example, a management plane can offer a menu of options or decision trees to determine what degree of availability and resilience an application requires, what volume of API calls can be issued against an app or service, or where an app should be located in the cloud for data privacy or regulatory reasons. Equally important, the management plane can improve security by giving developers smart recommendations on good security practices, or by putting specific limits on key resources or infrastructure to ensure that developers shifting left don’t inadvertently expose their organization to serious risk.
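The "menu of options" idea can be sketched as a small mapping from high-level developer choices to concrete platform policy. Everything below is a hypothetical illustration: the tier names, regions, replica counts and rate limits are invented, not part of any real management-plane product.

```python
# Illustrative sketch of a management-plane decision tree: a developer
# answers two high-level questions (availability tier, data residency)
# and the platform derives the data-plane policy, so the developer never
# edits firewall or gateway config directly. All values are invented.

def derive_policy(tier: str, data_residency: str) -> dict:
    """Map high-level menu choices to concrete (hypothetical) settings."""
    availability = {"critical": {"replicas": 3, "multi_region": True},
                    "standard": {"replicas": 2, "multi_region": False},
                    "dev":      {"replicas": 1, "multi_region": False}}
    if tier not in availability:
        raise ValueError(f"unknown tier: {tier}")
    policy = dict(availability[tier])
    # The data-privacy menu choice pins the workload to a region.
    policy["region"] = {"eu": "eu-west", "us": "us-east"}.get(data_residency, "us-east")
    # Guardrail the article mentions: hard limits so developers shifting
    # left cannot silently exhaust shared resources.
    policy["rate_limit_rps"] = 100 if tier == "dev" else 1000
    return policy

print(derive_policy("critical", "eu"))
```

The design point is that domain expertise (regions, limits, replication) lives in the table, maintained by platform ops, while developers only pick from the menu.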
Google and Microsoft are not the only tech companies that have started to take a more cautious approach to hiring. Earlier this year, Twitter issued a hiring freeze; this month it laid off 30% of its talent acquisition team. At the end of June, Meta CEO Mark Zuckerberg was hostile on a call with employees, saying that “realistically, there are probably a bunch of people at the company who shouldn’t be here.” A month later, the company’s Q2 2022 financial results showed its first-ever decline in revenue, with Zuckerberg telling investors that the economic climate looked even graver than it did the previous quarter. Around the same time, Apple also announced that, while the company will continue to invest in product development, it will no longer increase headcount in some departments next year. ... Research shows that employees want to be regularly offered training and the chance to develop new skills, and are more likely to stay at a company if given those opportunities. The Great Resignation was a major topic of conversation in the first half of this year, and for companies that are no longer hiring, losing more employees is not an option.
It seems that the sheer number of people needed in cybersecurity in the coming years could represent a way for historically underrepresented groups to find their way into tech. CJ Moses, CISO at AWS, spoke at the company keynote about the importance of diverse ways of thinking when it comes to keeping companies secure. “Another key part of our culture is having multiple people in the room with different outlooks. This could be introversion or extroversion, coming from different backgrounds or cultures, whatever enables your culture to be looking at things differently and challenging one another,” he said. He added that new ways of thinking can be transformative to cybersecurity teams. “I also think new hires can offer a team high levels of clarity because they don’t have years of bias or a group think baked into their mechanisms. So when you’re hiring, our best practices encourage being sensitive to the makeup of the interview panels, having multiple viewpoints and backgrounds, because diversity brings diversity.”
Ultimately, the massive increases in the three Vs have, by and large, resulted in inconsistent data management and protection policies in companies across the globe, and traditional approaches to data management and protection are no longer sufficient. You need to empower your IT department to meet today’s challenges. Consider solutions like autonomous data management, which uses AI-driven technology to fully automate self-provisioning, self-optimizing and self-healing data management services for the vast amounts of data in the multi-cloud environments enterprises are migrating toward. ... The cloud makes a lot of sense for a lot of reasons. It’s flexible, offering scalability and mobility; efficient, with accessibility and speed to market; and cost-effective, with pay-as-you-go models that help eliminate hardware expenses. But it can be a fickle beast, especially in an increasingly multi-cloud world, meaning one in which enterprise data is dispersed across on-premises data centers and many private and public cloud service providers.
What we have seen is that this has rapidly changed over the last couple of years: calling is still obviously very important, but other collaboration technologies have entered the landscape and have become equally, if not more, important. And the first one of those is video. As for the challenges of securing video, obviously a lot of folks have heard about unauthorized people [discovering] a meeting and [joining] it with an eye toward potentially disrupting the meeting or toward snooping on the meeting and listening in. And that has, fortunately, been addressed by most of the vendors. But the other real concern that we have seen arise from a security and especially a compliance perspective is that meetings are generating a lot of content. ... If you are a CSO, obviously you have ultimate responsibility for collaboration security. But you also want to work with the collaboration teams to either delegate ownership of day-to-day security operations to those folks or work with them to get input into what the risks are and what the possible mitigation techniques are.
Developing with StereoKit shouldn’t be too hard for anyone who’s built .NET UI code. It’s probably best to work with Visual Studio, though there’s no reason you can’t use any other .NET development environment that supports NuGet. Visual Studio users will need to ensure that they’ve enabled desktop .NET development for Windows OpenXR apps, UWP for apps targeting HoloLens, and mobile .NET development for Oculus and other Android-based hardware. You’ll need an OpenXR runtime to test code against, with the option of using a desktop simulator if you don’t have a headset. One advantage of working with Visual Studio is that the StereoKit development team has provided a set of Visual Studio templates that can speed up getting started by loading prerequisites and filling out some boilerplate code. Most developers are likely to want the .NET Core template, as this works with modern .NET implementations on Windows and Linux and gets you ready for the cross-platform template under development.
The most important thing for aspirants is to get the fundamentals right before diving into data science and AI. A basic but intuitive understanding of linear algebra, calculus and information theory helps you get up to speed faster. Aspiring data scientists should not ignore the fundamental principles of software engineering in general, because the market now looks for full-stack data scientists who can build an end-to-end pipeline, rather than just data science algorithm experts. ... My biggest challenge, which ultimately turned into my biggest achievement, was to start from scratch and build a world-class center of excellence in data science at HP India along with Niranjan Damera Venkata, Madhusoodhana Rao and Shameed Sait. This challenge was turned into an achievement by going into start-up mode within HP. Though we were part of a large organisation, we made sure that the center of excellence operated the way a successful startup works: by inculcating a culture of mutual respect and healthy competition, attracting and hiring the best talent, and providing freedom and flexibility.
Confidential computing is of particular use to organizations that deal in sensitive, high-value data, such as financial institutions, but also to a wide variety of other organizations. “We felt that confidential computing was going to be a very big thing, and that it should be easy to use,” said Bursell, who was then chief security architect in the office of Red Hat’s chief technology officer. “And rather than having to rewrite all the applications and learn how to use confidential computing, it should be simple.” But it wasn’t simple. Among the biggest puzzles: attestation, the mechanism by which a host measures a workload cryptographically and communicates that measurement to a third party. “One of the significant challenges that we have is that all the attestation processes are different,” said McCallum, who led Red Hat’s confidential computing strategy as a virtualization security architect. “And all of the technologies within confidential computing are different. And so they’re all going to produce different cryptographic hashes, even if it’s the same underlying code that’s running on all of them.”
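The measure-and-compare core of attestation described above can be sketched in a few lines. This is a deliberately minimal illustration: real TEE attestation involves hardware roots of trust, signed quotes and nonces against replay, none of which appear here.

```python
# Minimal sketch of attestation as described in the article: the host
# measures a workload (here, simply a SHA-256 over its bytes) and a
# relying party compares that measurement against known-good reference
# values. Real confidential-computing attestation adds signed quotes,
# nonces and a hardware root of trust; this shows only the core idea.
import hashlib

def measure(workload: bytes) -> str:
    """Host side: produce a cryptographic measurement of the workload."""
    return hashlib.sha256(workload).hexdigest()

def verify(measurement: str, reference_values: set[str]) -> bool:
    """Relying party: accept only if the measurement is known-good."""
    return measurement in reference_values

reference = {measure(b"example confidential workload")}

print(verify(measure(b"example confidential workload"), reference))  # True
print(verify(measure(b"tampered workload"), reference))              # False
```

McCallum's complaint maps directly onto this sketch: each vendor's technology computes and transports the measurement differently, so the same workload yields incompatible attestation evidence across platforms.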
The most successful method, called replay, stores past experiences and then replays them during training with new examples, so they are not lost. It’s inspired by memory consolidation in our brain, where during sleep the high-level encodings of the day’s activities are “replayed” as the neurons reactivate. In other words, for the algorithms, new learning can’t completely eradicate past learning since we are mixing in stored past experiences. There are three styles for doing this. The most common style is “veridical replay,” where researchers store a subset of the raw inputs — for example, the original images for an object recognition task — and then mix those stored images from the past in with new images to be learned. The second approach replays compressed representations of the images. A third far less common method is “generative replay.” Here, an artificial neural network actually generates a synthetic version of a past experience and then mixes that synthetic example with new examples. My lab has focused on the latter two methods.
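The most common of these, veridical replay, can be sketched concretely: keep a buffer of raw past examples and mix a sample of them into each new training batch. The buffer policy (reservoir sampling) and all sizes below are illustrative choices, not details from the article.

```python
# Sketch of "veridical replay": store a bounded sample of raw past
# examples and mix them into each new batch so earlier learning is not
# overwritten. Reservoir sampling and the sizes used are illustrative.
import random

class ReplayBuffer:
    def __init__(self, capacity: int, seed: int = 0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        """Reservoir sampling: keep a uniform sample of all examples seen."""
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def mixed_batch(self, new_examples, replay_ratio: float = 0.5):
        """Combine new examples with replayed past ones for a training step."""
        k = min(len(self.buffer), int(len(new_examples) * replay_ratio))
        return list(new_examples) + self.rng.sample(self.buffer, k)

buf = ReplayBuffer(capacity=100)
for x in range(1000):          # stream of past "experiences"
    buf.add(x)

batch = buf.mixed_batch(new_examples=["new-1", "new-2", "new-3", "new-4"])
print(len(batch))  # 4 new + 2 replayed = 6
```

Compressed and generative replay follow the same mixing pattern; they differ only in storing a learned representation, or a generator, in place of the raw inputs.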
Quote for the day:
"The leadership team is the most important asset of the company and can be its worst liability" -- Med Jones