Daily Tech Digest - August 08, 2022

Can Data Collection Persist Amid Post-Roe Privacy Questions?

The Supreme Court decision pushed data privacy discussions to the forefront once more, says Christine Frohlich, head of data governance at Verisk Marketing Solutions. “Those of us who have been working in the data industry have been thinking about this for a long time,” she says. “The regulations we’re seeing in California, and now what we’re seeing in Colorado, Connecticut, Virginia, and Utah have made this a real hot topic within our industry.” Companies have a fundamental responsibility, Frohlich says, to protect consumer privacy to the best of their ability. Customers may enjoy personalized experiences such as a digital interaction with a brand or having products marketed to them in a personal way, but she says they are also concerned about how their data is used. Federal legislation on data privacy might move forward faster in response to the Supreme Court decision, Frohlich says. The “right to be forgotten,” or deletion requirement, is working its way through state legislation and is potentially part of what is being proposed at the federal level, she says.


Can fintech innovation be a force for good social impact?

Putting purpose over profits requires fintech innovation to have some social purpose other than making money and just being a ‘good’ fintech, and we know that consumers are now actively looking for this purpose when choosing their financial institution. At the same time, modern consumers value experience over things and wish for fintechs to be more people-centric. Fintechs often create competitive advantage by being able to tailor offerings for niche markets. Consumers appreciate the personal approach, feel like they’re supporting positive change, and are increasingly looking for companies that align better with their values. If another financial institution does this in a better way, they won’t hesitate to switch providers. ... We know what makes a ‘good’ fintech, but a fintech that is a force for good needs to reach wider than the immediate financial community’s needs. Fintechs can be innovative in their approaches and therefore have the ability and potential to help people in need. We’re already seeing examples of this where fintechs have encouraged financial inclusion.


Post-Quantum Safe Algorithm Candidate Cracked in an Hour on a PC

SIKE was among several algorithms that passed a NIST competition to identify and define standardized post-quantum algorithms. Because quantum computers represent a threat to current measures for securing information and data, the organization wanted to pinpoint algorithms that stood the best chance of withstanding attacks from quantum computers. In a blog post, Steven Galbraith, a University of Auckland mathematics professor and a leading cryptography expert, explains how the researchers accomplished the hack: “The attack exploits the fact that SIDH has auxiliary points and that the degree of the secret isogeny is known. The auxiliary points in SIDH have always been an annoyance and a potential weakness, and they have been exploited for fault attacks, the GPST adaptive attack, torsion point attacks, etc.” It’s not the end for SIKE. There may be ways to modify the algorithm to withstand these specific types of attacks. However, in an Ars Technica story, Jonathan Katz, professor in the department of computer science at the University of Maryland, said the news that a classical computer could crack an encryption scheme meant to be safe from quantum devices is troubling.


Infrastructure as Code—Why Drift Management Is Not Enough

One of the most efficient ways of eliminating configuration drift is adopting infrastructure-as-code principles and using solutions such as Terraform. Instead of manually applying changes to sync the environments, which is inherently an error-prone process, you would define the environments using code. Code is clear, and is applied/run the same on any number of resources, without the risk of omitting something or reversing the order of some operations. By leveraging code versioning (e.g., Git), an infrastructure-as-code platform also provides a detailed record, including both present and past configuration, which removes the issue of undocumented modifications and leaves an audit trail as an added bonus. Tools like Terraform, Pulumi, and Ansible are designed for configuration management and can be used to identify and signal drift, sometimes even correcting it automatically—so you get the chance to make things right before they have a real impact on your systems. As with any tool, the outcome depends on how you’re using it. Using a tool like Terraform does not make your company immune to configuration drift by itself.
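One lightweight way to put that into practice is to run drift checks from a script or CI job. The Python sketch below is an illustration, assuming the Terraform CLI is installed and the working directory has already been initialized with terraform init; it uses the plan command's -detailed-exitcode flag, where exit code 0 means the live infrastructure matches the code, 2 means changes or drift were detected, and 1 means the plan itself failed.

```python
import subprocess

def check_drift(workdir: str) -> str:
    """Run 'terraform plan -detailed-exitcode' and interpret the result.

    Assumes the Terraform CLI is on PATH and 'terraform init' has already
    been run in workdir. Exit codes: 0 = no changes, 1 = error,
    2 = changes (possible drift) detected.
    """
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        return "no drift"
    if result.returncode == 2:
        return "drift detected:\n" + result.stdout
    raise RuntimeError(f"terraform plan failed:\n{result.stderr}")

if __name__ == "__main__":
    # Path is illustrative; point this at a real Terraform working directory.
    print(check_drift("./infrastructure"))
```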


Why Israel is moving to quantum computing

Quantum computing at scale is expected to revolutionize a range of industries, as it has the potential to be exponentially faster than classical computers at specific applications. Both China and the United States, among others, have already started national initiatives for this new paradigm. Israel launched its own initiative in 2018 and in February announced a $62 million budget for it. The Israel Innovation Authority (IIA) has selected Quantum Machines to establish its national Quantum Computing Center. It will host Israel’s first fully functional quantum computers for commercial and research applications. ... According to Quantum Machines, the Center’s computers will have a full-stack software and hardware platform capable of running any algorithm out of the box, including quantum error correction and multi-qubit calibration. As quantum computing is notorious for its various distinct approaches to creating qubits, the platform will also support multiple qubit technologies, so that the center does not have to bet everything on one technology that may not turn out to be successful, which reduces the risk.


Meta is putting its latest AI chatbot on the web for the public to talk to

By releasing the chatbot to the general public, Meta wants to collect feedback on the various problems facing large language models. Users who chat with BlenderBot will be able to flag any suspect responses from the system, and Meta says it’s worked hard to “minimize the bots’ use of vulgar language, slurs, and culturally insensitive comments.” Users will have to opt in to have their data collected, and if so, their conversations and feedback will be stored and later published by Meta to be used by the general AI research community. “We are committed to publicly releasing all the data we collect in the demo in the hopes that we can improve conversational AI,” Kurt Shuster, a research engineer at Meta who helped create BlenderBot 3, told The Verge. ... Crucially, says Mary Williamson, a research engineering manager at Facebook AI Research (FAIR), while Tay was designed to learn in real time from user interactions, BlenderBot is a static model. That means it’s capable of remembering what users say within a conversation, but this data will only be used to improve the system further down the line.


Computer Science Proof Unveils Unexpected Form of Entanglement

A striking new proof in quantum computational complexity might best be understood with a playful thought experiment. Run a bath, then dump a bunch of floating bar magnets into the water. Each magnet will flip its orientation back and forth, trying to align with its neighbors. It will push and pull on the other magnets and get pushed and pulled in return. Now try to answer this: What will be the system’s final arrangement? This problem and others like it, it turns out, are impossibly complicated. With anything more than a few hundred magnets, computer simulations would take a preposterous amount of time to spit out the answer. Now make those magnets quantum—individual atoms subject to the byzantine rules of the quantum world. As you might guess, the problem gets even harder. “The interactions become more complicated,” said Henry Yuen of Columbia University. “There’s a more complicated constraint on when two neighboring ‘quantum magnets’ are happy.” These simple-seeming systems have provided exceptional insights into the limits of computation, in both the classical and quantum versions. 
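To get a feel for why even the classical version becomes intractable, here is a toy Python sketch (not the systems analyzed in the proof) that brute-forces the lowest-energy arrangement of a handful of coupled two-state "magnets"; the search space doubles with every magnet added, which is why a few hundred of them would take a preposterous amount of time.

```python
from itertools import product
import random

def ground_state(n, couplings):
    """Brute-force the minimum-energy arrangement of n two-state magnets.

    couplings[(i, j)] is the interaction strength between magnets i and j;
    each magnet's orientation is +1 or -1. Checks all 2**n configurations,
    which is why the approach collapses beyond a few dozen magnets.
    """
    best_energy, best_config = float("inf"), None
    for config in product([-1, 1], repeat=n):
        energy = -sum(j_ij * config[i] * config[j]
                      for (i, j), j_ij in couplings.items())
        if energy < best_energy:
            best_energy, best_config = energy, config
    return best_energy, best_config

# Random pairwise couplings between 10 magnets: 2**10 = 1024 configurations.
n = 10
couplings = {(i, j): random.uniform(-1, 1)
             for i in range(n) for j in range(i + 1, n)}
print(ground_state(n, couplings))
```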


What is a QPU and how will it drive quantum computing?

A QPU, aka a quantum processor, is the brain of a quantum computer that uses the behaviour of particles like electrons or photons to make certain kinds of calculations much faster than processors in today’s computers. QPUs rely on behaviours like superposition, the ability of a particle to be in many states at once, described in the relatively new branch of physics called quantum mechanics. By contrast, CPUs, GPUs and DPUs all apply principles of classical physics to electrical currents. That’s why today’s systems are called classical computers. ... Thanks to the complex science and technology, researchers expect the QPUs inside quantum computers will deliver amazing results. They are especially excited about four promising possibilities. First, they could take computer security to a whole new level. Quantum processors can factor enormous numbers quickly, a core function in cryptography. That means they could break today’s security protocols, but they can also create new, much more powerful ones. In addition, QPUs are ideally suited to simulating the quantum mechanics of how stuff works at the atomic level. 
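As a small, purely illustrative aside on the superposition mentioned above, the numpy sketch below simulates a single qubit: applying a Hadamard gate to the |0> state produces equal amplitudes for |0> and |1>, so a measurement would return either outcome with probability 0.5. This simulates the math on a classical machine; it is not QPU code.

```python
import numpy as np

# A qubit starting in the |0> state.
ket0 = np.array([1.0, 0.0])

# Hadamard gate: rotates |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

state = H @ ket0
probabilities = np.abs(state) ** 2  # Born rule: |amplitude|^2

print("amplitudes:", state)          # [0.707..., 0.707...]
print("P(0), P(1):", probabilities)  # [0.5, 0.5]
```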


What metaverse infrastructure can bring to hospitals

When it comes to the most prominent use cases being explored alongside Avanade with the aim of driving value, this depends on the operational areas and scenarios being mapped out, according to dos Anjos. It’s vital that implementation is conducted in line with the specific needs and goals of each client. “Today, the most common demands are for remote assistance, and guided assistance,” he said. “Another common scenario being looked at is helping doctors prep for surgery. This is aided via a surgical plan powered by Microsoft HoloLens, which includes a 3D projection of the patient and all the data necessary in one interface. Users can interact with this without needing to touch anything.” For Amaral, opportunities in this trending area of tech can also expand to education purposes, with medical school lecturers being able to record footage of surgeries for students to view and interact with. “This would provide a 360-degree view of the surgery and allow students to zoom in and out where needed,” he explained. “This makes for a more immersive experience for medical students.”


IT leadership: You gotta have H.E.A.R.T.

Today’s best digital leaders have adapted their leadership playbooks for the times. If you go back and listen to the Tech Whisperers podcast episodes, you’ll hear the same themes and the same leadership wisdom over and over again. What’s the common denominator? Humility, Empathy, Adaptability, Resilience, and Transparency: H.E.A.R.T. There’s something palpable in how the CIOs I've spoken to balance high EQ leadership, holding people accountable, having the hard conversations, and delivering results. These are business-first executives who anticipate, innovate, and drive results – and they don’t get distracted by bright shiny objects. ... “As a leader, it’s important to understand that no one person, group, or culture has all the knowledge, skills, or information necessary for success in business,” Smith says. “That’s why at ELC we always say that people are our greatest asset. Diversity – not just in backgrounds but in experiences and perspectives – results in greater innovation and better problem-solving across an organization.”



Quote for the day:

"Management is efficiency in climbing the ladder of success; leadership determines whether the ladder is leaning against the right wall." -- Stephen Covey

Daily Tech Digest - August 05, 2022

Auto Industry at Higher Risk of Cyberattacks in 2023

Connected cars are one of the most significant factors driving these risks. These vehicles offer connectivity and include autonomous features, so attackers have more potential entry points and can do additional damage once inside. Self-driving vehicle sales could reach 1 million units by 2025 and skyrocket thereafter, so these risks will grow quickly. Automakers also face risks from connected manufacturing processes. This trend has emerged in other sectors that have embraced IT/OT convergence. One-quarter of energy companies reported weekly DDoS attacks after implementing Industry 4.0 technologies. Their attack surfaces will increase as car manufacturers likewise implement these systems. ... One of the most important changes to make is segmenting networks. All IoT devices should run on separate systems from more sensitive endpoints and data to prevent lateral movement. Encrypting IoT communications and changing default passwords is also crucial. Manufacturers should update these systems regularly, including using updated anti-malware software.


Why Developers Need a Management Plane

A management plane empowers line developers to accomplish all of this without having a deep understanding or mastery of how to work with native data plane configuration files and policies for firewalls, networking, API management and application performance management. With the management plane, platform ops teams can reduce the need for developers to build domain-specific knowledge outside the normal realm of developer expertise. For example, a management plane can have a menu of options or decision trees to determine what degree of availability and resilience an application requires, what volume of API calls can be issued against an app or service, or where an app should be located in the cloud for data privacy or regulatory reasons. Equally important, the management plane can improve security by providing developers smart recommendations on good security practices or putting in place specific limits on key resources or infrastructure to ensure that developers shifting left don’t inadvertently expose their organization to serious risk.
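A rough sketch of what such a developer-facing menu might look like is shown below in Python; all of the option names and the mapping to data-plane settings are hypothetical, invented here only to illustrate the idea of translating a few high-level choices into concrete configuration.

```python
from dataclasses import dataclass

@dataclass
class AppPolicy:
    """Hypothetical menu choices a management plane might expose to developers."""
    availability: str    # e.g. "standard" or "high"
    api_rate_limit: int  # max API calls per second against the service
    data_region: str     # where the app must run, e.g. "eu-west" for data residency

def to_data_plane_config(policy: AppPolicy) -> dict:
    """Translate the developer-facing choices into illustrative data-plane settings."""
    replicas = 3 if policy.availability == "high" else 1
    return {
        "replicas": replicas,
        "rate_limit_per_second": policy.api_rate_limit,
        "allowed_regions": [policy.data_region],
    }

print(to_data_plane_config(AppPolicy("high", 200, "eu-west")))
```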


Tech hiring enters the Big Freeze

Google and Microsoft are not the only tech companies that have started to take a more cautious approach to hiring. Earlier this year, Twitter initially issued a hiring freeze, then laid off 30% of its talent acquisition team earlier this month. At the end of June, Meta CEO Mark Zuckerberg struck a hostile tone on a call with employees, saying that “realistically, there are probably a bunch of people at the company who shouldn’t be here.” A month later, the company’s Q2 2022 financial results showed its first-ever decline in revenue, with Zuckerberg telling investors that the economic climate looked even graver than it did the previous quarter. Around the same time, Apple also announced that, while the company will continue to invest in product development, it will no longer increase headcount in some departments next year. ... Research shows that employees want to be regularly offered training and the chance to develop new skills and are more likely to stay at a company if given those opportunities. The Great Resignation was a major topic of conversation in the first half of this year and, for companies that are no longer hiring, losing more employees is not an option.


Cybersecurity could offer a way for underrepresented groups to break into tech

It seems that, given the sheer number of people needed in cybersecurity in the coming years, the field could represent a way for historically underrepresented groups to find their way into tech. CJ Moses, CISO at AWS, spoke at the company keynote about the importance of diverse ways of thinking when it comes to keeping companies secure. “Another key part of our culture is having multiple people in the room with different outlooks. This could be introversion or extroversion, coming from different backgrounds or cultures, whatever enables your culture to be looking at things differently and challenging one another,” he said. He added that new ways of thinking can be transformative to cybersecurity teams. “I also think new hires can offer a team high levels of clarity because they don’t have years of bias or groupthink baked into their mechanisms. So when you’re hiring, our best practices encourage being sensitive to the makeup of the interview panels, having multiple viewpoints and backgrounds, because diversity brings diversity.”


3 Things The C-Suite Should Know About Data Management And Protection

Ultimately, the massive increases in the three Vs have, by and large, resulted in inconsistent data management and protection policies in companies across the globe. So, traditional approaches to data management and protection are no longer sufficient. You need to be prepared to empower your IT department with the ability to meet today’s challenges. Consider solutions like autonomous data management, which uses AI-driven technology to fully automate self-provision, self-optimization and self-healing data management services for the vast amounts of data in the multi-cloud environments enterprises are migrating toward. ... The cloud makes a lot of sense for a lot of reasons. It’s flexible, with scalability and mobility; efficient, including its accessibility and speed to market; and cost-effective, as it includes pay-as-you-go models and helps eliminate hardware expenses. But it can be a fickle beast, especially in this increasingly multi-cloud world. This refers to how enterprise data is being dispersed across on-premises data centers and the many private and public cloud service providers.


5 best practices for secure collaboration

What we have seen is that this has rapidly changed over the last couple of years as calling is still obviously very important, but other collaboration technologies have entered the landscape and have become equally, if not arguably, more important. And the first one of those is video. The challenges, when you think about securing video, obviously a lot of folks have heard about unauthorized people [discovering] a meeting and [joining] it with an eye toward potentially disrupting the meeting or toward snooping on the meeting and listening in. And that has, fortunately, been addressed by most of the vendors. But the other real concern that we have seen arise from a security and especially a compliance perspective is meetings are generating a lot of content. ... If you are a CSO, obviously you have ultimate responsibility for collaboration security. But you also want to work with the collaboration teams to either delegate ownership of managing day-to-day security operations to those folks or working with them to get input into what the risks are and what are the possible mitigation techniques.


Build .NET apps for the metaverse with StereoKit

Developing with StereoKit shouldn’t be too hard for anyone who’s built .NET UI code. It’s probably best to work with Visual Studio, though there’s no reason you can’t use any other .NET development environment that supports NuGet. Visual Studio users will need to ensure that they’ve enabled desktop .NET development for Windows OpenXR apps, UWP for apps targeting HoloLens, and mobile .NET development for Oculus and other Android-based hardware. You’ll need an OpenXR runtime to test code against, with the option of using a desktop simulator if you don’t have a headset. One advantage of working with Visual Studio is that the StereoKit development team has provided a set of Visual Studio templates that can speed up getting started by loading prerequisites and filling out some boilerplate code. Most developers are likely to want the .NET Core template, as this works with modern .NET implementations on Windows and Linux and gets you ready for the cross-platform template under development. 


The Data science journey of Amit Kumar, senior enterprise architect-deep learning at NVIDIA

The most important thing for aspirants is to get the fundamentals right before diving into data science and AI. Having a basic but intuitive understanding of linear algebra, calculus, and information theory helps to get a faster grip. Aspiring data scientists should not ignore fundamental principles of software engineering, in general, because nowadays the market is looking for full-stack data scientists with the capability to build an end-to-end pipeline, rather than just being a data science algorithm expert. ... My biggest challenge, which ultimately turned into my biggest achievement, was to start from scratch and build a world-class center of excellence in data science at HP India along with Niranjan Damera Venkata, Madhusoodhana Rao and Shameed Sait. This challenge was turned into an achievement by going into the start-up mode within HP. Though we were part of a large organisation, we made sure that the center of excellence operates the way a successful startup works by inculcating the culture of mutual respect and healthy competition, attracting and hiring best talents, and providing freedom and flexibility.


Confidential Computing with WebAssembly

Confidential computing is of particular use to organizations that deal in sensitive, high-value data — such as financial institutions, but also to a wide variety of other organizations. “We felt that confidential computing was going to be a very big thing and that it should be easy to use,” said Bursell, who was then chief security architect in the office of Red Hat’s chief technology officer. “And rather than having to rewrite all the applications and learn how to use confidential computing, it should be simple.” But it wasn’t simple. Among the biggest puzzles: attestation, the mechanism by which a host measures a workload cryptographically and communicates that measurement to a third party. “One of the significant challenges that we have is that all the attestation processes are different,” said McCallum, who led Red Hat’s confidential computing strategy as a virtualization security architect. “And all of the technologies within confidential computing are different. And so they’re all going to produce different cryptographic hashes, even if it’s the same underlying code that’s running on all.”
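To make the attestation idea concrete, here is a loose Python sketch of the general flow: measure (hash) a workload, sign the measurement, and let a relying party verify it. It uses an HMAC with a placeholder key where real confidential-computing hardware would use a key rooted in the silicon, and every name in it is illustrative rather than any vendor's actual API.

```python
import hashlib
import hmac

# Stand-in for a hardware-protected key; a real TEE derives this from the platform.
PLATFORM_KEY = b"example-platform-secret"

def measure_workload(workload_bytes: bytes) -> bytes:
    """Hash the workload so any modification changes the measurement."""
    return hashlib.sha256(workload_bytes).digest()

def attest(workload_bytes: bytes) -> dict:
    """Produce a signed measurement (an 'attestation report') for a third party."""
    measurement = measure_workload(workload_bytes)
    signature = hmac.new(PLATFORM_KEY, measurement, hashlib.sha256).hexdigest()
    return {"measurement": measurement.hex(), "signature": signature}

def verify(report: dict, expected_workload: bytes) -> bool:
    """Relying party checks the measurement matches the code it expects and the signature is valid."""
    expected = measure_workload(expected_workload).hex()
    recomputed = hmac.new(PLATFORM_KEY, bytes.fromhex(report["measurement"]),
                          hashlib.sha256).hexdigest()
    return report["measurement"] == expected and hmac.compare_digest(
        recomputed, report["signature"])

workload = b"compiled-wasm-module-bytes"
report = attest(workload)
print(verify(report, workload))     # True
print(verify(report, b"tampered"))  # False
```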


The Computer Scientist Challenging AI to Learn Better

The most successful method, called replay, stores past experiences and then replays them during training with new examples, so they are not lost. It’s inspired by memory consolidation in our brain, where during sleep the high-level encodings of the day’s activities are “replayed” as the neurons reactivate. In other words, for the algorithms, new learning can’t completely eradicate past learning since we are mixing in stored past experiences. There are three styles for doing this. The most common style is “veridical replay,” where researchers store a subset of the raw inputs — for example, the original images for an object recognition task — and then mix those stored images from the past in with new images to be learned. The second approach replays compressed representations of the images. A third far less common method is “generative replay.” Here, an artificial neural network actually generates a synthetic version of a past experience and then mixes that synthetic example with new examples. My lab has focused on the latter two methods. 
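A minimal sketch of veridical replay in plain Python is shown below; the buffer size, the replay ratio, and the dummy model are illustrative choices, not the author's implementation, but the core move is the same: every training batch mixes newly arriving examples with a sample of stored past ones.

```python
import random

class ReplayBuffer:
    """Store raw past examples so they can be mixed into future training batches."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.storage = []

    def add(self, example):
        # Evict a random old example once the buffer is full.
        if len(self.storage) >= self.capacity:
            self.storage.pop(random.randrange(len(self.storage)))
        self.storage.append(example)

    def sample(self, k):
        return random.sample(self.storage, min(k, len(self.storage)))

def training_step(model, new_batch, buffer, replay_ratio=0.5):
    """Train on a mix of new examples and replayed old ones, then store the new ones."""
    replayed = buffer.sample(int(len(new_batch) * replay_ratio))
    mixed_batch = new_batch + replayed
    random.shuffle(mixed_batch)
    model.fit(mixed_batch)        # placeholder for one optimization step
    for example in new_batch:     # remember the new data for later replay
        buffer.add(example)

class DummyModel:
    def fit(self, batch):
        print(f"training on {len(batch)} examples")

buffer = ReplayBuffer()
model = DummyModel()
for task_batch in ([("image_a", 0)] * 8, [("image_b", 1)] * 8):
    training_step(model, list(task_batch), buffer)
```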



Quote for the day:

"The leadership team is the most important asset of the company and can be its worst liability" -- Med Jones

Daily Tech Digest - August 04, 2022

Artificial intelligence makes project planning better

Unlike neural networks, expert systems do not require up-front learning, nor do they necessarily require large amounts of data to be effective. Yes, expert systems can and do absolutely learn and get smarter over time (by adjusting or adding rules in the inference engine) but they have the benefit of not needing to be “trained up front” in order to function correctly. Capturing planning knowledge can be a daunting task and is arguably very specific and unique to individual organizations. If all organisations planned using the same knowledge, e.g., standard sub-nets, then we could simply put our heads together as an industry and establish a global “planning bible” to which we could all subscribe. This of course isn’t the case, and so for a neural network to be effective in helping us in project planning, we would need to mine a lot of data. Even if we could get our hands on it, it wouldn’t be consistent enough to actually help with pattern recognition. Neural networks have been described as black boxes — you feed in inputs, they establish algorithms based on learned patterns and then spit out an answer.
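For readers unfamiliar with how an inference engine differs from a trained model, the toy Python sketch below shows forward chaining over a few invented planning rules: facts that match a rule's conditions cause its conclusion to be added, and the process repeats until nothing new can be derived. The rules themselves are made up purely for illustration.

```python
def run_rules(facts, rules):
    """Tiny forward-chaining inference engine: keep firing rules whose
    conditions hold until no rule adds a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative planning knowledge, encoded as (conditions, conclusion) rules.
rules = [
    ({"project:new_building"}, "task:site_survey"),
    ({"task:site_survey"}, "task:foundation_design"),
    ({"task:foundation_design", "approval:structural"}, "task:foundation_pour"),
]

print(run_rules({"project:new_building", "approval:structural"}, rules))
```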


Trending Programming Languages to Learn

Generally, Python is seen as an easy-to-learn software language thanks to its simple syntax, extensive library of guidelines and toolkits, and its compatibility with other prominent programming languages, including C and C++. Its popularity is demonstrated by its ranking in the TIOBE and PYPL indexes in 2021: Python was the leading programming language. Companies like Intel, Facebook, Spotify, and Netflix are still using the best Python developers to take advantage of this language’s extensive libraries. ... Although Go is somewhat similar to C in syntax, it’s a unique language that offers excellent memory protection and management functionality. It is likely that Go’s popularity will continue to rise as it’s used to design systems that use artificial intelligence, just like Python. ... It's no wonder Java is so popular; getting started with this language is fairly simple, and it boasts a high level of security. Furthermore, it’s capable of handling large amounts of data. The technology is used in a variety of applications on almost all major systems like mobile, desktop, web, artificial intelligence, cloud applications, and more.


Why Microsoft Azure and Google Cloud love Ampere Computing’s server chips

“Ampere is designing for a completely different goal” than its rivals, says Dylan Patel, a chip industry analyst from SemiAnalysis. “Intel is targeting a wider net of people, versus what Ampere is going for, which allows them to make certain sacrifices and gain benefits from that,” he says. Though the company’s chips have a “weaker and smaller CPU core” than some of the x86 processors designed by Intel and AMD for servers, Patel says this means the chips themselves are smaller, and that as a result power usage is lower and the chips are more efficient. “That’s a big deal” for data centre operators, he says. Patel adds: “Renee realised this needed to be done much earlier than most of the cloud providers themselves. Amazon figured out they needed to build CPUs like this, but the others did not, and as a result, most of them are now in a position where they need to buy the technology from someone else. That’s why you find Ampere in every major cloud now.”


Digital transformation: How to guide innovation leaders

Digital Trailblazers are lifelong learners and can start their careers in product management, technology, security, or data roles. What sets Digital Trailblazers apart is their ability to ask the questions that challenge people’s thinking and get into the weeds around customer experience, data quality issues, or how to integrate emerging technologies. But finding Digital Trailblazers isn’t easy, and guiding them requires leadership’s commitment to empowering their creativity and collaboration. CIOs who dedicate themselves and their lieutenants to seek and guide aspiring transformation leaders are setting their entire organization up for success for years to come. Once you identify these leaders, you must encourage them to step out of their comfort zones because many will soon be experiencing firsts such as presenting to leadership, responding to detractors, or making tough calls in setting priorities. In the book, I tell the stories of what it feels like to be a Digital Trailblazer, knowing they will face many new experiences. I’ll share an excerpt from the chapter “Buried in bad data.” 


6 key advantages of ERP and CRM software integration

Typically, businesses purchase and deploy ERP and CRM systems separately. However, if your ERP and CRM systems have their own databases, you will constantly have to worry about keeping them synchronised. Whether it’s a CRM user from customer service or an ERP user from billing who updates a customer’s account, any changes implemented in one system will have to be transferred to the other. Considering this is a manual process, having to wait for a database to update before you can, for example, process bills, replenish inventory levels and arrange product returns for customers, will result in slower operations and an increased risk of database errors. Applying integrated CRM functionality to your ERP solution will ensure both systems share one database, meaning updates in either system are visible instantaneously. Customers can be billed faster and any product returns can be automated between systems, providing your business with clearer visibility into all stages of your business’ sales process.
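The difference a shared database makes can be illustrated with a toy sqlite3 sketch in Python: when the "CRM" side and the "ERP" side read and write the same customer table, an address change made by customer service is visible to billing immediately, with no synchronization job in between. Table and function names are invented for the example.

```python
import sqlite3

# One shared database instead of separate ERP and CRM stores (illustrative).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, address TEXT)")
db.execute("INSERT INTO customers VALUES (1, 'Acme Ltd', 'Old Street 1')")

def crm_update_address(customer_id, new_address):
    """Customer-service (CRM) side updates the record."""
    db.execute("UPDATE customers SET address = ? WHERE id = ?", (new_address, customer_id))
    db.commit()

def erp_billing_address(customer_id):
    """Billing (ERP) side reads the same row; no synchronization step needed."""
    return db.execute("SELECT address FROM customers WHERE id = ?",
                      (customer_id,)).fetchone()[0]

crm_update_address(1, "New Street 42")
print(erp_billing_address(1))  # "New Street 42" is visible instantly
```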


Can You Recover Losses Sustained During a Cloud Outage?

Even if the providers do have insurance, the terms of those policies are unlikely to cover more than a fraction of the costs incurred by the clients. “Negotiate how much risk is being held by the company and how much risk is being retained by the cloud service provider,” advises Michael Phillips, chief claims officer of cyber insurance company Resilience. “It's an unfortunate fact of life right now that many of the major cloud service providers are willing to accept none of the risk of their own failure.” The public cloud is a multi-tenant environment, further complicating the issue of responsibility. “Many cloud providers currently do not offer meaningful SLAs, arguing the application must meet the demands of multiple customers,” says Lisa Rovinsky, partner at full-service law firm Culhane Meadows. “I think this power structure will be changing as customers become more sophisticated and hybrid cloud solutions develop.” This puts the onus on clients to ensure that their cloud agreements are as airtight as possible from the get-go. Boilerplate contracts are unlikely to offer even cursory protection, so customization is increasingly the name of the game.


6 Smart Ways to Optimize Your Tech Stack

Consolidation can be thought of through the lens of technical debt. While some technical debt may be intentional, all technical debt creates added complexity that gets in the way of organizational agility. Recent research found that while 94% of organizations recognize the impact of technical debt on digital transformation, less than half have a strategy in place to manage it. Looking for ways to eliminate technical debt by consolidating solutions and letting go of those that are highly customized or out of support offers a clear path to delivering measurable business value. ... EA can align with DevOps in two complementary ways. First, the development tech stack must be reflected in the overall organizational tech stack. On this front, collaboration with heads of development and software architects is key. Second, EA can use their tools and expertise to help dev teams manage their own tech landscape, particularly when it comes to microservices. A microservice catalog can serve in this case as both an essential tool for DevOps, particularly when it promotes reuse, and a natural extension.
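As a purely illustrative sketch of the microservice catalog idea, the Python snippet below models catalog entries and a capability lookup that lets a team discover an existing service before building a duplicate; the fields and the example service are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceEntry:
    """One record in a hypothetical microservice catalog."""
    name: str
    owner_team: str
    api_version: str
    repo_url: str
    capabilities: set = field(default_factory=set)

class ServiceCatalog:
    """Minimal catalog that lets teams find an existing service instead of rebuilding one."""

    def __init__(self):
        self._entries = {}

    def register(self, entry: ServiceEntry):
        self._entries[entry.name] = entry

    def find_by_capability(self, capability: str):
        return [e for e in self._entries.values() if capability in e.capabilities]

catalog = ServiceCatalog()
catalog.register(ServiceEntry("payments", "team-finance", "v2",
                              "https://git.example.com/payments",
                              {"invoicing", "refunds"}))
print(catalog.find_by_capability("refunds"))  # reuse instead of rebuilding
```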


Hacking Concerns Delay Balloting for New UK Prime Minister

Online voting has been changed so that instead of a Tory party member being able to use their unique code multiple times to change their vote, the code will be deactivated after they initially vote. "The part that caused particular concern was being able to change your vote after submission," says Alan Woodward, professor of computer science at the University of Surrey. NCSC, which is the public-facing arm of Britain's security, intelligence and cyber agency, GCHQ, confirms that it has been providing guidance to the Tory party. "As you would expect from the U.K.'s national cybersecurity authority, we provided advice to the Conservative Party on security considerations for online leadership voting," an NCSC spokesperson tells Information Security Media Group. "Defending U.K. democratic and electoral processes is a priority for the NCSC, and we work closely with all parliamentary political parties, local authorities and MPs to provide cybersecurity guidance and support." The Conservative Party acknowledged the cybersecurity center's input.
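Purely as an illustration of the change described (and not of the Conservative Party's actual system), the Python sketch below models a unique voting code that is deactivated the first time it is used, so a later attempt to vote again with the same code is rejected.

```python
class OneTimeBallot:
    """Toy model of the change described: each unique code can be used once,
    so a vote cannot be altered after submission."""

    def __init__(self, issued_codes):
        # Map each issued code to whether it has been used.
        self.codes = {code: False for code in issued_codes}
        self.votes = []

    def cast_vote(self, code, choice):
        if code not in self.codes:
            return "invalid code"
        if self.codes[code]:
            return "code already used - vote rejected"
        self.codes[code] = True          # deactivate the code on first use
        self.votes.append(choice)
        return "vote recorded"

ballot = OneTimeBallot({"A1B2", "C3D4"})
print(ballot.cast_vote("A1B2", "Candidate X"))  # vote recorded
print(ballot.cast_vote("A1B2", "Candidate Y"))  # code already used - vote rejected
```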


How to succeed as an enterprise architect: 4 key strategies

Technical debt should be used intentionally to make incremental gains, not accumulated from neglect. Every architect must decide how to deal with debt—both addressing it and taking it on—to succeed in their role. Get comfortable with technical debt as a tool. The key questions that need to be addressed are when to take on debt and when to pay it off. Take on debt when the future of a product or feature is uncertain. Whether for a proof of concept or a minimum viable product, use debt to move fast and prove or realize value quickly before committing more cycles to make it robust. Architects can minimize the impact of debt by first solidifying interfaces. Changes to interfaces impact users. Consequently, these changes are not only sometimes technically difficult but can also be logistically challenging. And the more difficult it is to address debt, the less likely it is to be managed. Pay down technical debt when its value proposition turns negative and has a measurable impact on the business. That impact could come in the form of decreased engineering velocity, infrastructure brittleness leading to repeated incidents, monetary cost, or many other related effects. 


6 ways your cloud data security policies are slowing innovation – and how to avoid that

Some security professionals may consider this first pitfall as irrelevant to their organization, as they allow data to be freely moved or modified across cloud environments without restrictions. While beneficial for business purposes, this approach ignores the exponential growth in data and its tendency to spread across data stores and environments, with little ability to locate where it resides. This lack of visibility and control will inevitably lead to loss of what may be sensitive, personal or customer data in the process. If data is the fuel of many of our business processes, then losing some of it means that you’re running low on gas. Innovative teams require access to data. Whether it’s data scientists who are creating new machine learning algorithms, threat researchers researching new trends, marketing or product management teams who need to understand customer behavior or other stakeholders – innovating without data is like trying to bake without an oven. Managing organizational access to data may be critical to ensure that it isn’t abused or lost but creating stringent access control policies and boundaries around data usage creates what are essentially data silos, once again restricting innovation.



Quote for the day:

"Failures only triumph if we don't have the courage to try again." -- Gordon Tredgold