Daily Tech Digest - May 05, 2023

Data is choking AI. Here’s how to break free.

As enterprises deepen their embrace of AI and other data-driven, high-performance computing, it’s critical to ensure that performance and value are not starved by underperforming processing, storage and networking. Here are key considerations to keep in mind, beginning with compute. When developing and deploying AI, it’s crucial to look at computational requirements for the entire data lifecycle: starting with data prep and processing (getting the data ready for AI training), then during AI model building, training, and inference. Selecting the right compute infrastructure (or platform) for the end-to-end lifecycle and optimizing for performance has a direct impact on the TCO and hence ROI for AI projects. End-to-end data science workflows on GPUs can be up to 50x faster than on CPUs. To keep GPUs busy, data must be moved into processor memory as quickly as possible. Depending on the workload, optimizing an application to run on a GPU, with I/O accelerated in and out of memory, helps achieve top speeds and maximize processor utilization.
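The point about keeping GPUs fed is, at heart, a double-buffering problem. The sketch below is a device-agnostic illustration in plain Python: a thread and a bounded queue stand in for the staging pipeline that loads the next batch while the current one is being processed. The batch data and `prefetch_depth` value are invented for illustration; real GPU work (pinned memory, CUDA streams) is not modeled here.

```python
import threading
import queue

def producer(batches, buf):
    """Stage batches into a bounded buffer ahead of the consumer."""
    for batch in batches:
        buf.put(batch)      # blocks when the buffer is full
    buf.put(None)           # sentinel: no more data

def consume(batches, prefetch_depth=2):
    """Overlap staging and compute so the processor is never starved."""
    buf = queue.Queue(maxsize=prefetch_depth)
    t = threading.Thread(target=producer, args=(batches, buf))
    t.start()
    results = []
    while (batch := buf.get()) is not None:
        results.append(sum(batch))      # stand-in for the GPU kernel
    t.join()
    return results

print(consume([[1, 2], [3, 4], [5]]))   # [3, 7, 5]
```

The same shape appears in real loaders (e.g., prefetching dataset iterators): compute never waits on I/O as long as the buffer stays ahead.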


New leadership for a new era of thriving organizations

Leading companies today seek to become learning organizations that are continually evolving, exploring, ideating, experimenting, scaling up, executing, scaling down, and exiting across many different activities in parallel. By accelerating change and allowing for positive surprises and innovations to flourish, they consistently outperform those companies that focus instead on always trying to deliver the “perfect” plan. We are in the midst of a profound shift in how work gets done, one that asks leaders to go beyond being controllers with a mindset of certainty to becoming coaches who operate with a mindset of discovery and foster continual rapid exploration, execution, and learning. Leaders and leadership teams can learn how to set and work toward outcomes rather than traditional key performance indicators; to foster rapid experimentation and learn from both successes and setbacks; and to manage risk differently, through testing, learning, and fast adaptation. The leadership practices enabling this shift include the following: operating in short cycles of decision, action, and learning.


The Fourth Industrial Revolution is here. Here’s what it means for the way we work

Herein lies the double-edged sword of the Fourth Industrial Revolution. Although smart machines and artificial intelligence are predicted to bring unimaginable efficiencies, they will do so by increasingly replacing a wide swath of existing human jobs. While historically jobs have always been around for human beings through technological revolutions, we have never had a technological revolution that has been capable of displacing so many human beings and so much human brain power as the one we are transitioning through now. According to a report from Oxford Economics, a global forecasting and quantitative analysis firm, smart machines are expected to displace about 20 million manufacturing jobs across the world over the next decade, including more than 1.5 million in the U.S. Other studies predict that smart machines, robotics, artificial intelligence, blockchain technology, 3D printing, and automation will put 20% to 40% of existing jobs at risk over the next decades. And a report from the Brookings Institution finds that 25% of U.S. workers will face “high exposure” and risk being displaced over the upcoming few decades. 


Even Amazon can't make sense of serverless or microservices

Beyond celebrating their good sense, I think there's a bigger point here that applies to our entire industry. Here's the telling bit: "We designed our initial solution as a distributed system using serverless components... In theory, this would allow us to scale each service component independently. However, the way we used some components caused us to hit a hard scaling limit at around 5% of the expected load." That really sums up so much of the microservices craze that was tearing through the tech industry for a while: IN THEORY. Now the real-world results of all this theory are finally in, and it's clear that in practice, microservices pose perhaps the biggest siren song for needlessly complicating your system. And serverless only makes it worse. What makes this story unique is that Amazon was the original poster child for service-oriented architecture (SOA), the far more reasonable precursor to microservices: an organizational pattern for dealing with intra-company communication at crazy scale, where API calls beat scheduling coordination meetings. SOA makes perfect sense at the scale of Amazon.


The impact of ChatGPT on multi-factor authentication

As adoption of AI/ML-backed tools continues to grow, it will be important to focus on key ways to mitigate the risks associated with their use. When the efficacy of identity measures that companies have trusted for decades such as voice verification and video verification erodes, strongly linked electronic identity is even more important. Phishing-resistant credential solutions such as security keys — that are hardware-backed and purpose-built around cryptographic principles — excel in these scenarios. Security keys that support FIDO2 also ensure that these credentials are tied to a specific relying party. This binding prevents attackers from preying on simple human error. With security keys, credentials are securely stored in hardware which prevents those credentials from being transferred to another system without the user’s knowledge or by accident. The use of FIDO2 authenticators also greatly reduces the efficacy of social engineering through phishing as users cannot be tricked into vending a one-time password to an attacker, or have SMS authentication codes stolen directly through a SIM swapping attack.
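The relying-party binding described above can be illustrated with a toy model. In the sketch below, an HMAC stands in for the asymmetric signature a real FIDO2 authenticator produces, and the class and domain names are invented for illustration; the point it demonstrates is that a credential scoped to one relying party is useless to a phishing site with a different identifier.

```python
import hmac
import hashlib
import secrets

class AuthenticatorSketch:
    """Toy authenticator: one secret per relying party (RP) ID.
    Real FIDO2 uses per-RP asymmetric key pairs; HMAC is a stand-in."""

    def __init__(self):
        self._keys = {}

    def register(self, rp_id):
        self._keys[rp_id] = secrets.token_bytes(32)
        return self._keys[rp_id]        # the RP stores this as the credential

    def sign(self, rp_id, challenge):
        # The browser supplies rp_id from the page origin, so the user
        # cannot be tricked into producing an assertion for another site.
        key = self._keys[rp_id]
        return hmac.new(key, rp_id.encode() + challenge, hashlib.sha256).digest()

def verify(credential, rp_id, challenge, assertion):
    expected = hmac.new(credential, rp_id.encode() + challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, assertion)

auth = AuthenticatorSketch()
cred = auth.register("example.com")
challenge = secrets.token_bytes(16)
assertion = auth.sign("example.com", challenge)
print(verify(cred, "example.com", challenge, assertion))        # True
print(verify(cred, "examp1e-login.com", challenge, assertion))  # False
```

Because the relying-party ID is bound into every assertion, a look-alike phishing domain fails verification automatically, with no user vigilance required.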


Three Powerful Tactics Entrepreneurs Use For Instant Confidence

Tried and tested by entrepreneurs who have faced nerves and self-doubt, reminding yourself of what you have already achieved can give your confidence levels the boost they need. Create a metaphorical cookie jar of all your business and life wins and dip in for instant assurance. Samantha from ICI CARE keeps a list of her past wins and her big picture vision on the wall where she works, ensuring they are at eye level. "By having that reminder, I win over my brain before it spirals down,” she said. “Self-doubt is normal but I keep my focus and energy on achievement.” ... Confidence is a state of mind, which means it’s also a choice. Dr Amanda Foo-Ryland, founder of Your Life Live It, knows this well, explaining that it’s also, “about how you choose to see a new situation.” She knows, “I can either be confident or choose not to be.” Like Sarceno, she incorporates visualisation into the way ahead. “If I choose to be confident, I imagine the event and see myself in it being confident, being the person I want to be. I observe myself in the movie in my head.” 


White House unveils AI rules to address safety and privacy

This new effort builds on previous attempts by the Biden administration to promote some form of responsible innovation, but to date Congress has not advanced any laws that would rein in AI. In October, the administration unveiled a blueprint for a so-called “AI Bill of Rights” as well as an AI Risk Management Framework; more recently, it has pushed for a roadmap for standing up a National AI Research Resource. The measures don’t have any legal teeth; they are just more guidance, studies and research "and they’re not what we need now," according to Avivah Litan, a vice president and distinguished analyst at Gartner Research. “We need clear guidelines on development of safe, fair and responsible AI from the US regulators,” she said. “We need meaningful regulations such as we see being developed in the EU with the AI Act. ... US regulators need to step up their game and pace." In March, Senate Majority Leader Chuck Schumer, D-NY, announced plans for rules around generative AI as ChatGPT surged in popularity. Schumer called for increased transparency and accountability involving AI technologies.


Court Dismisses FTC Complaint Against Data Broker Kochava

The FTC in its lawsuit filed last August against Idaho-based Kochava said the company invades consumers' privacy by selling advertisers geolocation data sets of mobile phone holders tied to a unique ID. That information could be used to identify individuals who have visited abortion clinics, mental health providers and other sensitive locations, the agency said. Kochava filed its own lawsuit in the same Idaho federal court weeks before the FTC's action, as a bid to preemptively counter the federal agency. The company also filed a motion last October to dismiss the FTC's lawsuit. Winmill wrote in his Thursday ruling that nothing prevents the FTC from asserting that an invasion of privacy by itself can constitute a legitimate cause for suing. The agency failed, he said, by not establishing that Kochava's business practices constitute substantial injury to consumers. "The privacy concerns raised by the FTC are certainly legitimate. Disclosing where a person has been every fifteen minutes over a seven-day period could undoubtedly reveal information that the person would consider private, such as their travel habits, medical conditions, and social or religious affiliations," he wrote.


The Merck appeal: cyber insurance and the definition of war

The war exclusion was found to be not applicable, and the court used the insurer’s own words to detail the “why” behind the denial. When read by a layman such as me, it appears the judges believed the insurers had ample time to adjust their policy dynamics and didn’t get around to it. ... That said, when a nation’s intelligence entities run covert operations, which Russia does on a regular basis, the goal of the government at hand is to always maintain plausible deniability of any illegal acts. Could the NotPetya attack have been sponsored by the Russian Federation? Absolutely, and indeed, Kroll Cyber Security, the cyber consultant for the insurers, opined before the court “with high confidence” that the attack was “orchestrated by actors working for or on behalf of the Russian Federation.” Yet, one should note that when the US Department of Justice had the opportunity to pin the tail on that same donkey, they demurred. Thus, if a national government is not going to attribute nation-state sponsorship to an attack, then it will be most difficult for an insurance entity to successfully do so within the courts without explicit verbiage in the cybersecurity exclusions.


How the influence of data and the metaverse will revolutionize businesses and industries

Today, business is all about data: collecting, storing, transforming, and analysing it to gain insights—to make decisions. Just like how ChatGPT requires massive amounts of data to create human-like language, businesses need data to augment human decision-making. From machine and building performance to energy and emissions, data is the crucial link between the physical and digital worlds. It’s also the key to solving efficiency and sustainability challenges that are now more urgent than ever. If the metaverse is meant to transform business and industries, it must be built on solid data foundations. ... Digital transformation started with connecting physical assets via IoT and edge controls. Its disruptive potential has proven to carry operational and energy efficiency across all levels of an enterprise. When we introduce powerful software capabilities and start leveraging the generated data, we can create virtual representations of the real world by combining simulation, augmented reality (AR), data sharing, and visualization all at once. ... It seems that all these and many more possible applications have something in common: they are all about bringing together technologies to address challenges of the physical world, by giving real people the means to learn, collaborate, act, and essentially create value through a virtual, digitally augmented space.



Quote for the day:

"You always believe in other people. But that's easy. Sooner or later you have to believe in yourself." -- Gary, The Muppets

Daily Tech Digest - May 04, 2023

How CEOs Can Become Co-responsible for Cyber Resilience

Move from blind trust to informed trust. Many of the CEOs we interviewed admitted to blindly trusting their cyber and technology teams. But CEOs who had experienced a serious cyberattack said that, in hindsight, they wish they had personally known and understood more. So instead of blindly trusting their technology teams, CEOs should move to a state of “informed trust” about their enterprise’s state of cyber resilience. One way to achieve that is to seek independent, unbiased advice, with results reported directly to the CEO, similar to an important financial audit. Embrace the “preparedness paradox”. During our interviews, we asked CEOs to rate their companies’ preparedness for a serious cyberattack on a scale from one to ten. Only a few could be persuaded to give a number; many either dodged the question or openly said that they did not know. Of those who responded, the majority rated their preparedness relatively high. And therein lies a problem. As it turns out, the CEOs with cyberattack experience acknowledged that they, too, had previously believed they were well prepared – before recognising their misperception in hindsight.


How To Build And Sustain Trust: The Secret To Team And Organizational Effectiveness

Be the employee you wish to see! When leaders hold themselves to the same standards as their employees, they create a culture of trust and accountability. These exemplary qualities may differ between individual managers, but “model behavior” generally entails being transparent and honest, honoring commitments and treating everyone with respect and dignity. In doing so, leaders foster a greater sense of care and sincerity among their team. ... Leaders who communicate effectively demonstrate that they value their employees and are committed to keeping them informed. Effective communication also helps to prevent misunderstandings and conflicts, which both damage trust. Communication is best when it’s clear, transparent and concise. Honoring your employees’ time heightens their sense of your reliability, too. Managers should always be willing to listen to their employees and be open to their feedback. Communicate regularly, whether it be through team meetings, one-on-one conversations or email updates.


Boards Are Having the Wrong Conversations About Cybersecurity

Our findings suggest that the CISO-board disconnect is exacerbated by their unfamiliarity with each other on a personal level (they do not spend enough time together to get to know each other and their attitudes and priorities in a productive way). Also contributing to this disconnect is the CISO’s difficulty in translating technical jargon into business language, such as risk, reputation, and resilience. ... Instead, the conversation needs to focus on resilience. We must assume, for planning purposes, that we will experience a cyberattack of some type, and prepare our organizations to respond and recover with minimal damage, cost, and reputational impact. For example, instead of going into detail in a board meeting on how our organization is set up to respond to an incident, we must focus on what the biggest risk might be and how we are prepared to quickly recover from the damage should that situation happen. To change their focus to resilience as the primary goal of cybersecurity, directors could ask their operating leaders to create a vision for how the company will respond and recover when an attack occurs. 


How an enterprise service mesh will ensure zero trust security for multi-cloud applications

Without an enterprise service mesh platform, contemporary applications with a microservices-based architecture would have a much larger overhead in terms of design, development, and maintenance. Right from maintaining separate business logic and configuration specs to complex authentication and authorization implementations that are custom to the application, ... A service mesh improves the microservices architecture as it enables companies or individuals to create robust enterprise applications, made up of many such microservices on a hosting platform of their choice. An enterprise service mesh solution allows developers to focus on adding business value to each service they build, rather than worrying about how each service communicates with the rest. For DevOps teams that have an established production continuous integration and continuous deployment (CI/CD) pipeline, a service mesh can be essential for programmatically deploying apps and application infrastructure to manage source code and test automation tools seamlessly.
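The "focus on business value" point can be made concrete by sketching what a sidecar proxy does around every service call: retries, backoff, and basic telemetry, with no change to the business logic itself. Everything below (the policy values, the flaky `inventory_service`) is invented for illustration; in a real mesh such as Istio or Linkerd these policies live in the proxy and its configuration, not in application code.

```python
import time

def with_mesh_policies(call, retries=3, backoff=0.01):
    """Wrap a service call with retry, backoff, and latency telemetry,
    mimicking what a sidecar proxy adds transparently."""
    def wrapped(*args, **kwargs):
        for attempt in range(1, retries + 1):
            start = time.monotonic()
            try:
                result = call(*args, **kwargs)
                return {"result": result,
                        "attempts": attempt,
                        "latency_s": time.monotonic() - start}
            except ConnectionError:
                if attempt == retries:
                    raise
                time.sleep(backoff * attempt)   # linear backoff for the sketch
    return wrapped

# A flaky downstream "service": fails twice, then succeeds.
calls = {"n": 0}
def inventory_service(item):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network failure")
    return {"item": item, "stock": 7}

lookup = with_mesh_policies(inventory_service)
response = lookup("widget")
print(response["attempts"], response["result"]["stock"])   # 3 7
```

The business function stays a plain function; resilience concerns live entirely in the wrapper, which is the division of labor a service mesh generalizes across every service in the fleet.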


Addressing OT security under the National Cybersecurity Strategy

Lessons learned from modernizing IT unfortunately won’t apply to OT because of OT’s unique operating requirements. Efforts taken under the NCS must first consider IT and OT individually and then together. For instance, when an IT system reaches end-of-life, an agency must decide to either continue using it at risk, pay for extended manufacturer service, or sunset and replace it altogether. Each option has pros and cons, but agencies at least have options and can usually plan accordingly—sunset dates will be known in advance, diminishing potential impacts of the time variable. ... Because of how OT systems were designed, rip-and-replace isn’t a viable approach for them. Legacy OT systems were built on the engineering paradigm of twenty years ago—to be long-lasting and achieve the functional goals of monitoring and controlling critical processes. Connectivity wasn’t a functional requirement, so neither was security. Times have changed since these systems were put in place and security risks must now be a consideration. Further, because of the nature of what OT systems do, continuity requires that they can’t just be turned off and replaced with a new, more secure system. 


Accelerate Innovation and Create Business Value with IT Democratization

Over the next two years, it's expected that employees who aren't full-time technical specialists will produce close to 80% of IT services and goods. These non-IT employees who develop their own tech solutions work mostly in business roles, but they recognize the benefits of technology and want to use it independently. Although this signifies a shift in authority toward business divisions, IT executives should view this new dynamic as an advantage, not a risk. By embracing the trend and helping business users take on technical initiatives, IT teams can free up the time and resources they need to manage their own growing queue of initiatives. Additionally, when multiple departments within a company hire new "citizen developers," creativity accelerates exponentially. Many IT services offered now are designed to provide users with more autonomy while lightening the load on technical experts. Thanks to Software-as-a-Service (SaaS) solutions with service-based models, IT professionals no longer have to devote time installing, deploying, and maintaining software tools. 


Data Sovereignty, Compliance Shape IT Leadership

“The topic of data sovereignty is more urgent than ever as we try to counter-balance these considerations,” explains Jason Conyard, CIO of VMware. “Privacy and privacy-adjacent laws are also an ever-growing topic not only on a national level, but on a consumer level as well.” He points out customers want assurances about their data -- how it is used, who it is shared with, and how it is protected. “If a company can demonstrate competency in meeting its commitments, it builds trust and customer loyalty and ultimately leads to increased profitability,” Conyard says. Spencer Kimball, co-founder and CEO of Cockroach Labs, adds while risk mitigation is the obvious impetus for change, a strategic embrace of the challenge of data sovereignty can pave the way to more frictionless expansion into new markets. "Very few businesses in today’s connected digital economy are not looking towards a future of global expansion,” he points out. He says with the inevitability of new regulations always on the horizon, it’s increasingly important to build on infrastructure designed to overcome these challenges.


AIOps: Site Reliability Engineering at Scale

AIOps (Artificial Intelligence for IT Operations) can significantly improve cross-functional engagement in a business. In traditional IT operations, different teams may work in silos, resulting in communication gaps, misunderstandings, and delays in issue resolution. AIOps can help bridge these gaps and facilitate collaboration between different teams. One way AIOps improves cross-functional engagement is through its ability to provide real-time insights and analytics into various IT processes. This enables different teams to access the same information, which can help improve communication and reduce misunderstandings. For example, the data provided by AIOps can help IT teams and business stakeholders identify potential issues and proactively take action to prevent them from occurring, leading to better outcomes and higher customer satisfaction. Another way AIOps improves cross-functional engagement is through its ability to automate various IT processes. By automating routine tasks, AIOps can free up time for IT teams to focus on strategic initiatives, such as improving customer experiences and innovating new solutions. 
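As a rough illustration of the "real-time insights" point, here is a minimal rolling z-score detector of the kind an AIOps pipeline might run over shared latency metrics so that every team sees the same alert. The window size, threshold, and data are all invented for illustration; production systems use far more sophisticated models.

```python
import statistics

def detect_anomalies(latencies_ms, window=10, threshold=3.0):
    """Flag points more than `threshold` standard deviations away from
    the rolling mean of the previous `window` samples."""
    alerts = []
    for i in range(window, len(latencies_ms)):
        history = latencies_ms[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1e-9   # guard against flat history
        z = (latencies_ms[i] - mean) / stdev
        if abs(z) > threshold:
            alerts.append((i, latencies_ms[i]))
    return alerts

normal = [100, 102, 99, 101, 100, 98, 103, 100, 101, 99]
series = normal + [100, 450, 101]       # one obvious latency spike
print(detect_anomalies(series))         # [(11, 450)]
```

An alert like `(11, 450)` is the kind of shared, objective signal that lets IT teams and business stakeholders act before users notice, and it is also the natural trigger point for the automated remediation the excerpt describes.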


The hidden security risks in tech layoffs and how to mitigate them

When an employee leaves a business, abruptly or not, the potential for data or code loss can significantly impact the organization's security posture. While most employees don't think of themselves as a cybersecurity risk, a study done by DTEX Systems shows that “roughly 50% of people in any organization” save confidential intellectual property from projects to which they’ve contributed. They do it just in case they leave the company, Mahbod says. What’s even more concerning is that 12% of these employees take data from projects they haven't even worked on. Enterprises should realize that “the real risk is coming from within their own corporate firewall,” Mahbod adds. “The future of data loss prevention and protection is human-centric, not data-centric.” Businesses should monitor data loss activities and implement policies to limit unnecessary data movement within and outside of the organization. This could include enforcing device lockdowns on file uploads to personal webmail, file-sharing sites, or USB ports to prevent successful exfiltration events, especially those that occur from layoffs.
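The enforcement described above can be sketched as a small allow/block rule check over transfer destinations. The destination labels, policy sets, and audit shape below are invented for illustration; a real DLP product would also inspect content, identity, and context.

```python
# Hypothetical human-centric data-movement policy: block risky
# destinations outright and hold unknown ones for review.
BLOCKED_DESTINATIONS = {"personal_webmail", "file_sharing_site", "usb"}
ALLOWED_DESTINATIONS = {"corporate_drive", "approved_vendor_portal"}

def check_transfer(user, filename, destination):
    """Return a decision plus an audit record for every attempted transfer."""
    if destination in BLOCKED_DESTINATIONS:
        return {"allowed": False,
                "reason": f"{destination} is blocked by exfiltration policy",
                "audit": (user, filename, destination)}
    allowed = destination in ALLOWED_DESTINATIONS
    return {"allowed": allowed,
            "reason": "ok" if allowed else "unknown destination held for review",
            "audit": (user, filename, destination)}

print(check_transfer("alice", "roadmap.xlsx", "usb")["allowed"])              # False
print(check_transfer("alice", "roadmap.xlsx", "corporate_drive")["allowed"])  # True
```

Logging every decision, not just the blocks, is what makes the monitoring side of the recommendation possible.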


On the verge of a digital banking revolution in the Philippines

While the Philippines presents highly attractive opportunities for expansion, the way foreign firms and existing Filipino conglomerates choose to enter the fintech sector will have a major impact on their growth and competitiveness. Universal banking licenses are available to fully foreign-owned banks that are established, reputable, financially sound, and willing to share banking technology. Domestic and foreign banks no longer require separate licenses and are subject to the same minimum capital requirement of $55 million to obtain a universal banking license. In 2020, the government approved the creation of a digital banking license that allows for full foreign ownership and entails a capital requirement of just $19 million, provided that the bank maintains a principal or headquarters in the Philippines. Six digital banks are licensed under this dedicated regime, but no new applications will be accepted until 2024. Expert advice from a partner with detailed knowledge of the application process will be a critical asset for any firm that wishes to obtain a license when the process reopens.



Quote for the day:

"Truly great leaders spend as much time collecting and acting upon feedback as they do providing it." -- Alexander Lucia

Daily Tech Digest - May 03, 2023

What You Need to Know About Neuromorphic Computing

Neuromorphic computing is a type of computer engineering that mimics the human brain and nervous system. “It's a hardware and software computing element that combines several specializations, such as biology, mathematics, electronics, and physics,” explains Abhishek Khandelwal, vice president, life sciences, at engineering consulting firm Capgemini Engineering. While current AI technology has become better at outperforming human capabilities in multiple fields, such as Level 4 self-driving vehicles and generative models, it still offers only a crude approximation of human/biological capabilities and is only useful in a handful of fields. ... Neuromorphic supporters believe the technology will lead to more intelligent systems. “Such systems could also learn automatically and self-regulate what to learn and where to learn from,” Natarajan says. Meanwhile, combining neuromorphic technology with neuro-prosthetics, (such as Neuralink) could lead to breakthroughs in prosthetic limb control and various other types of human assistive and augmented technologies.
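The brain-mimicking hardware described here is built from spiking neurons. A leaky integrate-and-fire neuron, the basic unit most neuromorphic chips implement, can be sketched in a few lines; the leak rate, threshold, and input currents below are arbitrary values chosen for illustration.

```python
def leaky_integrate_and_fire(inputs, leak=0.9, threshold=1.0):
    """Membrane potential integrates input current, leaks a fraction each
    step, and emits a spike (then resets) when it crosses threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0     # reset after firing
        else:
            spikes.append(0)
    return spikes

print(leaky_integrate_and_fire([0.3, 0.3, 0.3, 0.3, 0.9, 0.1]))  # [0, 0, 0, 1, 0, 0]
```

Unlike the dense matrix multiplications of conventional deep learning, computation here is event-driven: nothing happens, and little energy is spent, until a neuron actually fires, which is the source of neuromorphic hardware's efficiency claims.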


Distributed Tracing Is Failing. How Can We Save It?

Engineers are to some degree creatures of habit. The engineering organizations I’ve spent time with have a deep level of comfort with dashboards, and statistics show that’s where engineers spend the most time — they provide data in an easy-to-understand graphical user interface (GUI) for engineers to quickly answer questions. However, it’s challenging when trace data is kept in its own silo. To access its value, an engineer must navigate away from their primary investigation to a separate place in the app — or worse, a separate app. Then the engineer must try to recreate whatever context they had when they determined that trace data could supplement the investigation. Over time, all but a few power users start to drift away from using the trace query page on a regular basis. Not because the trace query page is any less useful. It’s simply outside of the average engineer’s scope. It’s like a kitchen appliance with lots of uses when you’re cooking, but because it’s kept out of sight in the back of a drawer, you never think to use it — even if it’s the best tool for the job.
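For readers who have never looked behind the trace query page, here is a minimal sketch of what that siloed data is: spans tied together by a shared trace ID that follows a request across function (and, with header propagation, service) boundaries. The decorator, span fields, and function names are invented for illustration; real systems follow the W3C Trace Context / OpenTelemetry model, which this only gestures at.

```python
import contextvars
import time
import uuid

# The current trace ID travels implicitly with the flow of execution.
current_trace = contextvars.ContextVar("trace_id", default=None)
spans = []   # a real system exports these to a tracing backend

def traced(name):
    def decorator(fn):
        def wrapper(*args, **kwargs):
            trace_id = current_trace.get() or uuid.uuid4().hex  # join or start a trace
            token = current_trace.set(trace_id)
            start = time.monotonic()
            try:
                return fn(*args, **kwargs)
            finally:
                spans.append({"trace": trace_id, "span": name,
                              "duration_s": time.monotonic() - start})
                current_trace.reset(token)
        return wrapper
    return decorator

@traced("fetch_user")
def fetch_user(uid):
    return {"id": uid}

@traced("handle_request")
def handle_request(uid):
    return fetch_user(uid)

handle_request(42)
print([s["span"] for s in spans], len({s["trace"] for s in spans}))
# ['fetch_user', 'handle_request'] 1
```

Surfacing records like these inside the dashboards engineers already live in, rather than on a separate query page, is exactly the integration gap the article is describing.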


We’re Still in the ‘Wild West’ When it Comes to Data Governance, StreamSets Says

A lack of visibility into data pipelines raises the risk of other data security problems, the company says. “The research reveals that 48% of businesses can’t see when data is being used in multiple systems, and 40% cannot ensure data is being pulled from the best source,” it says. “Moreover, 54% cannot integrate pipelines with a data catalog, and 57% cannot integrate pipelines into a data fabric.” Who holds responsibility for cleaning up the data mess? Well, that’s another area with a bit of murkiness. About half (47%) of StreamSets survey respondents say the centralized IT team bears responsibility for managing the data. However, 18% said the line of business holds primary responsibility, while it’s split between the business and IT in 35% of cases. A second survey released by StreamSets last week highlights the difficulty in running data pipelines in the modern enterprise. Many companies have thousands of data pipelines in use and are hard pressed to build, manage, and maintain them at the pace required by the business, according to StreamSets.


Quantum computing: What are the data storage challenges?

One of the core challenges of quantum computers is that their storage systems are unsuitable for long-term storage due to quantum decoherence, the effect of which can build up over time. Decoherence occurs when quantum computing data is brought into existing data storage frameworks and causes qubits to lose their quantum status, resulting in corrupted data and data loss. “Quantum mechanical bits can’t be stored for long times as they tend to decay and collapse after a while,” says Weides. “Depending on the technology used, they can collapse within seconds, but the best ones are in a minute. You don’t really achieve 10 years of storage. ...” Quantum computers will need data storage during computation, but that needs to be a quantum memory for storing super-positioned or entangled states, and storage durations are going to present a challenge. So, it’s likely data storage for quantum computing will need to rely on conventional storage, such as in high-performance computing (HPC). Considering the massive financial investment required for quantum computing, to introduce a limitation of “cheap” data storage elements as a cost-saving exercise would be counter-productive.
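Weides' point about decay can be illustrated with a toy decoherence model. The one-minute coherence time comes from the quote above; the simple exponential form and the specific numbers are a simplification for illustration, not a claim about any particular device.

```python
import math

def qubit_fidelity(t_seconds, t2_seconds=60.0):
    """Toy model: stored quantum state quality decays exponentially
    with a coherence time T2 of roughly a minute for the best devices."""
    return math.exp(-t_seconds / t2_seconds)

for t in (1, 60, 600):
    print(t, round(qubit_fidelity(t), 3))
# 1 0.983
# 60 0.368
# 600 0.0
```

Even with a generous one-minute coherence time, the state is effectively gone after ten minutes, which is why quantum memory is for in-flight computation and why long-term storage falls back to conventional (classical) systems.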


7 speed bumps on the road to AI

There are many issues and debates that humans know to avoid in certain contexts, such as holiday dinners or the workplace. AIs, though, need to be taught how to handle such issues in every context. Some large language models are programmed to deflect loaded questions or just refuse to answer them, but some users simply won't let a sleeping dog lie. When such a user notices the AI dodging a tricky question, such as one that invokes racial or gender bias, they'll immediately look for ways to get under those guardrails. Bias in data and insufficient data are issues that can be corrected for over time, but in the meantime, the potential for mischief and misuse is huge. And, while getting AI to churn out hate speech is bad enough, the plot thickens considerably when we start using AI to explore the moral implications of real life decisions. Many AI projects depend on human feedback to guide their learning. Often, a project of scale needs a high volume of people to build the training set and adjust the model’s behavior as it grows. For many projects, the needed volume is only economically feasible if trainers are paid low wages in poor countries. 


7 ways to improve employee experience and workplace culture

The traditional hierarchical way of managing employees has been shown to be largely ineffective. Companies run as adhocracies are more productive as they foster knowledge sharing, workplace collaboration, and rapid adaptation—some of the most important attributes for companies in the knowledge-based age. By encouraging employees to be more self-sufficient and less dependent on their superiors, you can promote greater efficiency and effectiveness in the workplace. Start adopting more self-service options for employees. Modern IT and HR systems can be calibrated to your employees’ needs and enable them to help themselves, whether they want to book a vacation, access important documents, get a better screen, or access an enterprise app. Although hybrid and remote work seems to be the preferred model for many organizations, it still has disadvantages. Many remote and hybrid employees struggle to manage the blurred boundary between work and personal life, or the often less-than-ideal workplace setups.


What Does a Strong Agile Culture Look Like?

A strong culture is critical for Agile organizations to be successful. Agile requires organizations, and therefore their employees, to be ready to welcome changing requirements and inspect and adapt at any given moment. Teams are supposed to be self-managing and self-organizing. Stakeholders need to see working products frequently. Breaking that down, expectations are that projects change all the time but still need to be delivered in quick increments to stakeholders, all the while teams are managing themselves. ... Psychological safety in the workplace refers to the extent to which employees feel safe to speak up, share their ideas, and take risks without fear of negative consequences. It is the belief that one will not be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes. When there is psychological safety in the workplace, employees are more likely to be engaged, motivated, and productive. They are also more likely to collaborate, share their knowledge and expertise, and contribute to innovation.


9 ways to avoid falling prey to AI washing

It’s not uncommon for a company to acquire dubious AI solutions, and in such situations, the CIO may not necessarily be at fault. It could be “a symptom of poor company leadership,” says Welch. “The business falls for marketing hype and overrules the IT team, which is left to pick up the pieces.” To prevent moments like these, organizations need to foster a collaborative culture in which the opinion of tech professionals is valued and their arguments are listened to thoroughly. At the same time, CIOs and tech teams should build their reputation within the company so their opinion is more easily incorporated into decision-making processes. To achieve that, they should demonstrate expertise, professionalism, and soft skills. “I don’t feel there’s a problem with detecting AI washing for the CIO,” says Max Kovtun, chief innovation officer at Sigma Software Group. “The bigger problem might be the push from business stakeholders or entrepreneurs to use AI in any form because they want to look innovative and cutting edge. So the right question would be how not to become an AI washer under the pressure of entrepreneurship.”


Skilling up the security team for the AI-dominated era

The increasing reliance on AI and machine learning models in all technological walks of life is expected to rapidly change the complexion of the threat landscape. Meanwhile, organically training security staff, bringing in AI experts who can be trained to aid in security activities, and evangelizing the hardening of AI systems will all take considerable runway. Experts share what security leaders will need to shape their skill base and prepare to face both sides of growing AI risk: risk to AI systems and risks from AI-based attacks. There is some degree of crossover in each domain. For example, machine learning and data science skills are going to be increasingly relevant on both sides. In both cases, existing security skills in penetration testing, threat modeling, threat hunting, security engineering, and security awareness training will be as important as ever, just in the context of new threats. However, the techniques needed to defend against AI and to protect AI from attack also have their own unique nuances, which will in turn influence the make-up of the teams called to execute on those strategies.



Quote for the day:

"Remember teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability." -- Patrick Lencioni

Daily Tech Digest - May 02, 2023

Is misinformation the newest malware?

"When we were thinking about the risks of Twitter being targeted by, let's say, the Russian government, we always had to recognize that there would be attempts to get into Twitter's systems and target the company and exfiltrate user data," Roth said. "There would be attempts to influence the conversations happening on the platforms, and there would be attempts to compromise the accounts of Twitter's users. There were multiple layers to each of these things. And Twitter as a company had a role to play in addressing that conduct across each one of those levels.” Roth pointed to the "great Twitter hack of 2020," when financially motivated people in their twenties compromised a Twitter employee's account to promote a crypto scam on high-profile accounts. This incident is an example of what he called the "illusory distinction" between malware and misinformation. "This was targeting Twitter's employees to gain access to Twitter's backend systems in order to carry out malicious activity propagated across the social network. You cannot think of these problems in isolation," Roth said.


Just Who Exactly Should Take Responsibility for Application Security?

We talk a great deal about shifting left and putting it on individuals. But if developers’ goals and incentives don’t include security, they won’t do it. Humans act in their own interests and unless their interests are made to be something different, they’re going to behave how they want to behave. If a company wants to secure code, it’s on them to put in place the standards, enforce the standards, and actually care and invest. Companies that don’t do those things will never be secure and are basically just setting up people to fail. Companies have to get their priorities right and invest in the tools and training that empower developers to perform robust security. ... But they do need to be engaged. There are things that development managers can do to introduce more security in a reasonable way that doesn’t cost a ton of extra time and money. Importantly, they can lead by encouraging developers to take reasonable steps that will help. For instance, when introducing a new library, don’t introduce anything that’s got a known vulnerability, kind of a “do no harm” approach.
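The “do no harm” rule for new libraries can be made mechanical. Below is a minimal, hypothetical sketch of that idea — the advisory data and package names are invented for illustration, and a real team would query an actual vulnerability database or run a scanner in CI rather than maintain a hand-written list:

```python
# Toy advisory data, invented for illustration: {package: {bad versions}}.
# In practice this information would come from a vulnerability database.
KNOWN_VULNERABLE = {
    "examplelib": {"1.0.0", "1.0.1"},
    "legacyparser": {"2.3.0"},
}

def is_safe_to_add(package: str, version: str) -> bool:
    """Return False if this exact package version has a known advisory."""
    return version not in KNOWN_VULNERABLE.get(package, set())

# Gate proposed dependencies before they enter the project.
proposed = [("examplelib", "1.0.1"), ("newlib", "3.2.0")]
rejected = [(p, v) for p, v in proposed if not is_safe_to_add(p, v)]
print(rejected)  # [('examplelib', '1.0.1')]
```

The point is less the code than the policy: the check runs automatically at the moment a dependency is introduced, so "do no harm" costs developers almost nothing.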


Why We Should Establish Guardrails For Artificial General Intelligence Now

Weizenbaum’s fears show that ethical concerns over computers’ capabilities are nothing new. As we enter the exciting age of AGI-led possibilities, perhaps we should take lessons from what happened with social media platforms. When applications like MySpace, Facebook and the like first launched, they were touted as a means to bring people together and enable self-expression through personal posts and photo sharing. The platforms’ intent was to connect people in a convenient, friendly way. What the platforms’ founders didn’t envision is that one day, these networks would bombard members with annoying advertisements that creepily follow them around. They didn’t worry that they were asking members to give their most personal details to large corporations or possibly even governments (e.g., TikTok). They didn’t expect that disinformation would interfere in elections or that children would be bullied or view harmful content. As a result, the operations of these social platforms are now under question and they might face government regulation if they can’t gain control over content and data privacy.


Your decommissioned routers could be a security disaster

Often, they included network locations and some revealed cloud applications hosted in specific remote data centers, “complete with which ports or controlled-access mechanisms were used to access them, and from which source networks.” Additionally, they found firewall rules used to block or allow certain access from certain networks. Often specifics about the times of day they could be accessed were available as well. “With this level of detail, impersonating network or internal hosts would be far simpler for an attacker, especially since the devices often contain VPN credentials or other easily cracked authentication tokens,” according to the white paper. The routers—four Cisco ASA 5500 Series, three Fortinet FortiGate Series, and 11 Juniper Networks SRX Series Services Gateways—were all bought legally through used-equipment vendors, according to the paper. “No procedures or tools of a primarily forensic or data-recovery nature were ever employed, nor were any techniques that required opening the routers’ cases,” yet the researchers said they were able to recover data that would be “a treasure trove for a potential adversary—for both technical and social-engineering attacks.”
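To see why recovered configuration text is such a treasure trove, consider how easily it can be harvested mechanically. The sketch below is purely illustrative — the patterns and sample config lines are invented, and this is not the researchers' tooling:

```python
import re

# Invented, illustrative patterns for credential-bearing lines that often
# survive on decommissioned network gear. A real audit would use a much
# richer pattern set and vendor-specific parsers.
SENSITIVE_PATTERNS = {
    "password_line": re.compile(r"(?i)^\s*(?:set\s+)?password\s+\S+"),
    "psk": re.compile(r"(?i)pre-shared-key\s+\S+"),
    "snmp_community": re.compile(r"(?i)snmp-server\s+community\s+\S+"),
}

def scan_config(text: str) -> dict:
    """Map each pattern name to the config lines that match it."""
    hits = {name: [] for name in SENSITIVE_PATTERNS}
    for line in text.splitlines():
        for name, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(line):
                hits[name].append(line.strip())
    return hits

# Invented sample config text, not from any real device.
sample = """hostname edge-fw
snmp-server community s3cr3t RO
set vpn ike pre-shared-key Sup3rSecret
"""
print(scan_config(sample))
```

The same few lines of pattern matching serve an attacker just as well as a defender, which is the paper's argument for wiping devices before resale.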


5 surefire ways to derail a digital transformation (without knowing it)

Digital transformations can start with one initiative, defined goals, and a dedicated team. But CIOs are under pressure to accelerate and find digital transformation force multipliers. That means growing the number of leaders and teams that can plan innovations and deliver transformative impacts. “Innovation does not happen in isolation: It occurs when organizations encourage and nurture it, often with processes to enable nontraditional ways of thinking, working, and the space to try out ideas in a safe environment,” says Hasmukh Ranjan, CIO of AMD. Here’s how I spot derailments: Ask initiative leaders to share access to their roadmaps, agile backlogs, collaboration tools, stakeholder communications, and internal documentation. ... Subject matter experts and internal stakeholders should be contributors to priorities and requirements, not decision-makers or backlog dictators. Digital transformations derail when CIOs miss the opportunity to establish and communicate product management responsibilities for creating and evolving market- and customer-driven roadmaps.


IS Audit in Practice: Advantages of Technology in Achieving Diversity

The benefits of diversity have long been sought after by schools of management. Diverse styles produce a broad range of ideas and approaches, which can translate to a more cohesive work environment and create a competitive edge that impacts the bottom line. Diverse work teams with inclusive mindsets can bridge gaps in understanding that help avoid rework. The classic example is strong collaboration between IT and the business, where post-development user acceptance testing (UAT) produces a go-live outcome that satisfies users. Diverse teams also make it easier to reach a wider audience by creating products and services that are broadly appealing. Technology helps make these products and services more ubiquitous. If diversity can bring such advantages, why is it so hard to achieve? The terms "unconscious bias," "the boys’ club," "cliques" and "the inner circle" suggest that work and social groups form around what is familiar. ... Breaking away from the known and comfortable to include new approaches and different individuals can feel risky, as any change does for those accustomed to operating within established boundaries.


The role of AI as an everyday life assistant

One of the concerns the book raises is how businesses experienced in selling to humans will respond. There is no reason to assume the machine will remain in the domain of low-value purchasing, leaving businesses free to focus their efforts on high-value human customers. “Doubling down on the human market and perceived higher-value human customer service capabilities, the losers will find their cost of sale gradually increasing even as their revenue and total addressable market appears to shrink,” warn Raskino and co-author Don Scheibenreif. Society may not yet be ready for the machine customer, but the idea is finding its way into people’s lives by automating boring or repetitive tasks. In the book, Raskino and Scheibenreif discuss the May 2018 demonstration by Google CEO Sundar Pichai of an AI assistant called Duplex. The AI was so convincing that it was able to book an appointment at a hair salon over the telephone, without the person on the other end of the line being aware that it was a machine making the appointment.


Data infrastructure: The picks and shovels of the AI gold rush

While AI models form the cornerstone of this recent progress, scaling AI requires a robust data foundation that trains models and serves them effectively. This process involves collecting and storing raw data, utilizing computational power to transform data and train models, and processing and ingesting data in real-time for inference. Ultimately, turning raw data into AI insights in production is complex and dependent on having strong data infrastructure. Data engineering teams will play a crucial role in enabling AI and must lean into an ever-improving set of tools to address rapidly growing volumes of data, larger models, and the need for real-time processing and movement of data. Data infrastructure has transformed over the past decade irrespective of AI, driven by the shift to the cloud and a greater focus on analytics. This transformation has created huge commercial successes with the likes of Snowflake, Databricks, Confluent, Elastic, MongoDB, and others. Today, we are in a moment in time where storage and compute limitations have largely been erased thanks to the cloud.


Why platform engineering?

While simple in concept, platform engineering isn’t trivial to execute because it requires a product development mindset. Platform engineers must develop a product that agile development teams want to consume, and developers must let go of their desires for DIY (do it yourself) devops approaches. One place to start is infrastructure and cloud provisioning, where IT can benefit significantly from standards, and developers are less likely to have application-specific architectural requirements. Donnie Berkholz, senior vice president of product management at Percona, says, “Platform engineering covers how teams can deliver the right kind of developer experience using automation and self-service, so developers can get to writing code and implementing applications rather than having to wait for infrastructure to be set up based on a ticket request.” Therein lies the customer pain point. If I am a developer or data scientist who wants to code, the last thing I want to do is open a ticket for computing resources. But IT and security leaders also want to avoid having developers customize the infrastructure’s configuration, which can be costly and create security vulnerabilities.


3 Ways To Manage Conflict In The Workplace

If you’re experiencing a conflict, you might spend some time digging into all the possible root causes of the conflict you’re currently dealing with that may be different from your initial perception. In writing down the possibilities or alternatives, you just might find that the conflict you thought you were struggling with isn’t what the conflict is actually about. This is an exercise to get to the heart of the matter, because we can’t solve for what we don’t even realize exists. ... Justification is often what keeps us stuck in conflict, according to conflict and collaboration consultant Cair Canfield. Conflict can keep us stuck if our egos want us to remain blameless, like we don’t have any part in the problem and so we don’t have to change. But it doesn’t really serve you, because you’ll keep doing the same thing in the same way, rather than be able to move forward productively. ... Instead of immediately shutting down an idea because you disagree with it, ask questions. You might ask, ‘What in your life has shaped your viewpoint?’ Being curious about why the other person sees things the way they do helps your brain to stay open to new information, while being defensive can make you less open minded.



Quote for the day:

"Blessed are the people whose leaders can look destiny in the eye without flinching but also without attempting to play God" -- Henry Kissinger

Daily Tech Digest - April 30, 2023

AI for security is here. Now we need security for AI

As the mass adoption and application of AI are still fairly new, the security of AI is not yet well understood. In March 2023, the European Union Agency for Cybersecurity (ENISA) published a document titled Cybersecurity of AI and Standardisation with the intent to “provide an overview of standards (existing, being drafted, under consideration and planned) related to the cybersecurity of AI, assess their coverage and identify gaps” in standardization. Because the EU likes compliance, the focus of this document is on standards and regulations, not on practical recommendations for security leaders and practitioners. There is a lot about the problem of AI security online, although it looks significantly less compared to the topic of using AI for cyber defense and offense. Many might argue that AI security can be tackled by getting people and tools from several disciplines including data, software and cloud security to work together, but there is a strong case to be made for a distinct specialization. When it comes to the vendor landscape, I would categorize AI/ML security as an emerging field. The summary that follows provides a brief overview of vendors in this space.


Enterprises Die for Domain Expertise Over New Technologies

Domain expertise is important to build a complete ecosystem that can scale. This can help businesses leverage relevant knowledge and datasets to develop custom solutions. This is why enterprises look for enablers that can bring in the domain expertise for particular use cases. ... One of the challenges that companies encounter today is how to utilise data effectively as per their business needs. According to a global survey conducted by Oracle and Seth Stephens-Davidowitz, 91% of respondents in India reported a ten-fold increase in the number of decisions they make every day over the past three years. As individuals attempt to navigate this increased decision-making, 90% reported being inundated with more data from various sources than ever before. “Some interesting findings we came across was that respondents who wanted technological assistance also said that the technology should know its workflow and what it is trying to accomplish,” Joey Fitts, vice president, Analytics Product Strategy, Oracle told ET.


Amazon’s quiet open source revolution

Let’s remember that the open source spadework is not done. For example, AWS makes a lot of money from its Kubernetes service but still barely scrapes into the top 10 contributors for the past year. The same is true for other banner open source projects that AWS has managed services for, such as OpenTelemetry, or projects its customers depend on, such as Knative (AWS comes in at #12). What about Apache Hadoop, the foundation for AWS Elastic MapReduce? AWS has just one committer. For Apache Airflow, the numbers are better. This is glass-half-empty thinking, anyway. The fact that AWS has any committers to these projects is an important indicator that the company is changing. A few years back, there would have been zero committers to these projects. Now there are one or many. All of this signals a different destination for AWS. The company has always been great at running open source projects as services for its customers. As I found while working there, most customers just want something that works. But getting it to “just work” in the way customers want requires that AWS get its hands dirty in the development of the project.


Response and resilience in operational-risk events

The findings have several urgent implications for leaders as they think about the overall resilience of their institutions, how to minimize the risk of such events occurring, and how to respond when crises do hit. The findings strongly suggest that broad market forces and industry dynamics can magnify adverse effects. Effective crisis and mitigation planning has to take account of these factors. Experience supports this view. In the not-so-distant past, especially before the financial crisis of 2008–09, many companies approached operational-risk measures from a regulatory perspective, with an economy of effort, if not formalistically. Incurring costs and paying fines for unforeseen breaches and events were accordingly counted as the cost of doing business. Amid crises, furthermore, communications were sometimes aimed at minimizing true losses—an approach that risked a damaging cycle of upward revisions. The present environment, however, is unforgiving of such approaches. An accelerated pace of change, especially in digitization and social media, magnifies the negative effects of missteps in the aftermath of crisis events. 


Developers Need a Community of Practice — and Wikis Still Work

This subject has flattened out a bit since the pandemic, after which fewer developers have worked next to each other and keeping remote members connected has become the norm. A good Community of Practice should just look like a private Stack Overflow, with discussions on topics of concern to devs across the organization. This applies to most organizations that have siloed teams. If you are part of a one-team company, then a CoP should not be something you need right now — just be ready to be proactive when you are part of a bigger setup. The first seeds are usually sown when “best practice” is discussed, and managers realize that there is no point in having just one team getting things right. This is the time to establish a developer CoP, before something awkward gets imposed from above. The topics are often the complications that an organization stubbornly brings to existing tech; like understanding arcane branching policies, or working with an old version of software because it is the only sanctioned version, etc.


Five Leadership Mindsets For Navigating Organizational Complexity: Rethinking Chaos And Opportunity

The world is unlikely to suddenly settle down. With that in mind, the context around chaotic moments changes. It’s no longer about just dealing with what’s in front of you; it’s about writing the script for the team to respond to future disruptions. So don’t just deal with it as a leader. Start viewing disruptions as valuable learning experiences that build resilience and adaptability within your organization. And once you have navigated through, take a moment to create a playbook for the future. Use retrospection with your team to find out the specific things that worked and the things that didn’t. ... “I don’t deal well with change” is a bad personal strategy, and I recommend that you drop any ideas that adaptability is an innate trait possessed only by a select few. With that said, I've found that learning requires experience. Social and business safety nets are key, so employees can learn with less fear. Encourage your employees to challenge their comfort zones, experiment with new approaches and learn from setbacks to develop the skills and strategies necessary for navigating change effectively.


Why Don’t We Have 128-Bit Computers Yet?

It’s practically impossible to predict the future of computing, but there are a few reasons why 128-bit computers may never be needed. Diminishing returns: as a processor’s bit size increases, the performance and capability improvements tend to become less significant. In other words, the improvement from 64-bit to 128-bit isn’t anywhere near as dramatic as going from 8-bit to 16-bit CPUs, for example. Alternative solutions: there may be alternative ways to address the need for increased processing power and memory addressability, such as using multiple processors or specialized hardware rather than a single, large processor with a high bit size. Physical limitations: it may turn out to be impossible to create a complex modern 128-bit processor due to technological or material constraints. Cost and resources: developing and manufacturing 128-bit processors could be cost-prohibitive and resource-intensive, making mass production unprofitable. While it’s true that the benefits of moving from 64-bit to 128-bit might not be worth it today, new applications or technologies might emerge in the future that could push the development of 128-bit processors.
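The diminishing-returns point is easy to make concrete: each additional address bit doubles the addressable memory, so 64-bit addressing already spans 16 exbibytes — far beyond any installed system. A quick calculation:

```python
# Addressable memory doubles with every address bit. By 64 bits the
# address space (2**64 bytes, i.e. 16 EiB) already dwarfs real machines,
# so 128 bits adds headroom nothing can currently use.

def addressable_bytes(address_bits: int) -> int:
    """Bytes reachable by a flat byte-addressed memory of this width."""
    return 2 ** address_bits

for bits in (8, 16, 32, 64, 128):
    print(f"{bits:>3}-bit: {addressable_bytes(bits):,} bytes")
```

The jump from 8 to 16 bits unlocked memory programs immediately consumed; the jump from 64 to 128 bits unlocks capacity with no present consumer, which is the argument the excerpt makes.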


Secret CSO: Rani Kehat, Radiflow

The success of your cybersecurity is difficult to measure. For example, many believe that if you haven’t been hacked, your cybersecurity efforts must be working. This isn’t the case – it may well be that you just haven’t been hacked yet. Thankfully, there are methods to measure how well security practices are working; effectiveness of controls, corporate awareness and reporting of suspicious events, and mitigation RPO are among the most helpful here. ... API security is the best. APIs have become integral to programming web-based interactions, which means hackers have zeroed in on them as a key target. Zero Trust, on the other hand, has become a buzzword that in theory should reduce vulnerabilities but in reality is not practical to implement, slows down application performance, and hampers productivity. ... To get formal professional certifications. Not only have these helped advance my career at every stage, but they have also ensured that my security knowledge remains up to date against constantly developing hacker tactics and techniques.


How blockchain technology is paving the way for a new era of cloud computing

“Blockchain itself can be used within a private ‘walled garden’ as well,” Ian Foley, chief business officer at data storage blockchain firm Arweave, told VentureBeat. “It is a technology structure that brings immutability and maintains data provenance. Centralized cloud vendors are also developing blockchain solutions, but they lack the benefits of decentralization. Decentralized cloud infrastructures are always independent of centralized environments, enabling enterprises and individuals to access everything they’ve stored without going through a specific application.” Decentralized storage platforms use the power of blockchain technology to offer transparency and verifiable proof for data storage, consumption and reliability through cryptography. This eliminates the need for a centralized provider and gives users greater control over their data. With decentralized storage, data is stored in a wide peer-to-peer (P2P) network, offering transfer speeds that are generally faster than traditional centralized storage systems.
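The immutability and provenance properties described above rest on content addressing: data is stored and retrieved by the hash of its contents, so any tampering changes the address. Here is a minimal in-memory sketch of the mechanism — a toy for illustration, not any particular platform's protocol:

```python
import hashlib

# Toy content-addressed store: the "address" of a blob is the SHA-256
# digest of its bytes, so identical content always has the same address
# and modified content cannot masquerade under the old one.
store = {}

def put(data: bytes) -> str:
    """Store data under its SHA-256 digest and return that address."""
    address = hashlib.sha256(data).hexdigest()
    store[address] = data
    return address

def get(address: str) -> bytes:
    data = store[address]
    # Verify integrity on read: content must still hash to its address.
    assert hashlib.sha256(data).hexdigest() == address, "tampered data"
    return data

addr = put(b"project records, v1")
assert get(addr) == b"project records, v1"
```

In a decentralized network the same verification runs on every peer, which is why no central provider needs to be trusted for data integrity.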


A True Leader Doesn't Just Talk the Talk — They Walk the Walk. Here's How to Lead from the Front.

So, what does "walking the walk" look like? Depending on their position within the company, it looks different for everyone. Let's say you're the CEO of a company. To provide valuable insights and opinions, you need to be proficient in the product you're selling and stay current on industry trends and news. If you were managing a customer service team, what would you do? Participating in difficult conversations can help your team understand what's expected of them. As a leader, it is imperative that you set an excellent example for your team members and achieve results. Leaders must lead by example and practice what they preach. Talking about honesty, integrity and accountability is easy, but it's much harder to embody them daily. Regarding work-life balance, taking time off and setting boundaries are essential. You must cultivate a culture of listening to your team to foster a culture of open communication.



Quote for the day:

"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard

Daily Tech Digest - April 29, 2023

When will the fascination with cloud computing fade?

This is a fair question considering other technology trends in the IT industry, such as client/server, enterprise application integration, business-to-business, distributed objects, service-oriented architecture, and then cloud computing. I gladly rode most of these waves. All these concepts still exist, perhaps at a larger scale than when public interest was hot. However, they are not discussed as much these days since other technology trends grab more headlines, such as cloud computing and artificial intelligence. So, should the future hold more interest in cloud computing, less interest, or about the same? On one hand, cloud computing is becoming more standardized, and, dare I say, commoditized, with most cloud providers offering similar services and capabilities as their competitors. This means businesses no longer need to spend as much time and resources evaluating different providers. Back in the day, I spent a good deal of time talking about the advantages of cloud storage on one provider over another.


Gen Z mental health: The impact of tech and social media

Younger generations tend to engage with social media regularly, in both active and passive ways. Almost half of both millennial and Gen Z respondents check social media multiple times a day. Over one-third of Gen Z respondents say they spend more than two hours each day on social media sites; however, millennials are the most active social media users, with 32 percent stating they post either daily or multiple times a day. Whether less active social media use by Gen Z respondents could be related to greater caution and self-awareness among youth, reluctance to commit, or more comfort with passive social media use remains up for debate. ... Across generations, there are more positive than negative impacts reported by respondents; however, the reported negative impact is higher for Gen Z. Respondents from high-income countries (as defined by World Bank) were twice as likely to report a negative impact of social media on their lives than respondents from lower-middle-income countries.


Implementing SLOs-as-Code: A Case Study

Procore’s Observability team has designed our SLO-as-code approach to scale with Procore’s growing number of teams and services. Choosing YAML as the source of truth allows Procore a scalable approach for the company through centralized automation. Following the examples put forth by openslo.com and embracing a ubiquitous language like YAML helps avoid adding the complexities of Terraform for development teams and is easier to embed in every team’s directories. We used a GitOps approach to infrastructure-as-code (IaC) to create and maintain our Nobl9 resources. The Nobl9 resources can be defined as YAML configuration (config) files. In particular, one can declaratively define a resource’s properties (in the config file) and have a tool read and process that into a live and running resource. It’s important to draw a distinction between the resource and its configuration, as we’ll be discussing both throughout this article. All resources, from projects (the primary grouping of resources in Nobl9) to SLOs, can be defined through YAML. 
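Whatever tool consumes the YAML, the arithmetic an SLO encodes is simple, and seeing it motivates the config-as-code approach: the target in the file determines the error budget teams manage against. A minimal sketch (illustrative only, not Nobl9's implementation):

```python
# An availability SLO converts directly into an "error budget": the
# amount of failure the target permits over the evaluation window.
# For example, 99.9% over 30 days leaves roughly 43 minutes of downtime.

def error_budget_minutes(slo_target: float, window_days: int) -> float:
    """Minutes of failure allowed by an availability SLO over a window."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo_target) * total_minutes

budget = error_budget_minutes(0.999, 30)
print(f"30-day budget at 99.9%: {budget:.1f} minutes")  # 43.2
```

Because the target lives in a YAML file under version control, changing the budget becomes a reviewed code change rather than a dashboard tweak — the core appeal of SLOs-as-code.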


Data Management: Light At The End Of The Tunnel?

Bansal explains that the answer to the question about the technologies firms can use to manage their data better from inception to final consumption and analysis should be prefaced by a clear understanding of the type of operating model they are looking to establish—centralized, regional or local. It should also be answered, he says, in the context of the data value chain, the first step of which entails cataloging data with the business definitions at inception to ensure it is centrally located and discoverable to anyone across the business. Firms also need to use the right data standards to ensure standardization across the firm and their business units. Then, with respect to distribution, they need to establish a clear understanding of how they will send that data to the various market participants they interact with, as well as internal consumers or even regulators. “That’s where application programming interfaces [APIs] come in, but it’s not a one-size-fits-all,” he says. “It’s a common understanding that APIs are the most feasible way of transferring data, but APIs without common data standards do not work.”


Three Soft Skills Leaders Should Look For When They're Recruiting

Adaptability and flexibility have always been valuable soft skills. But given the experiences we’ve all lived through over the last two-odd years, I think it’s safe to say they’ve never been more important. The continued uncertainty of today’s economic environment makes them two competencies you definitely want your employees to possess. Given what the business world can dish out, you should assess whether a candidate has what it takes to adapt to changing situations before you hire them. Adaptability and flexibility are essential components of problem-solving, time management, critical thinking and leadership.  To gauge whether an interviewee has these two qualities, have them tell you about changes they made during the pandemic. What kind of direction and support did they receive from their employers? What did they think they needed but didn’t get? And how did they continue to work without whatever they lacked? Explore the candidate’s response to learning new technologies, handling last-minute client changes and sticking to a project timeline when something’s handed off to them late.


5 Benefits of Enterprise Architecture Tools

Enterprise architecture tools provide valuable insights for strategic planning and decision-making. By capturing and analyzing data on IT assets, processes, and interdependencies, these tools enable organizations to assess the impact of proposed changes or investments on their IT landscape. This helps identify potential risks and opportunities, evaluate different scenarios, and make informed decisions about IT investments, initiatives, and resource allocations. With improved strategic planning, organizations can prioritize IT projects, optimize their technology investments, and align their IT roadmap with their business objectives. Collaboration is crucial for effective IT landscape management, and enterprise architecture tools facilitate collaboration among different stakeholders, including IT teams, business units, and executives. These tools provide a centralized platform for sharing and accessing IT-related information, documentation, and visualizations. This promotes cross-functional collaboration, enables effective communication, and ensures that all stakeholders are on the same page when it comes to the organization’s IT landscape.


Underutilized SaaS Apps Costing Businesses Millions

Managing a SaaS portfolio is truly a team sport that requires stakeholders from across the organization, with IT and finance teams most directly involved in driving the charge, Pippenger said. "For both of these teams, having a complete picture of all SaaS applications in use is crucial," he said. "It provides IT the information they need to mitigate risks, strengthen the organization's security posture, and maximize adoption." Improved visibility provides finance teams with the information they need to properly forecast and budget for software in the future and to identify opportunities for cost savings. "Both of these groups need to partner with business units and employees who are purchasing software to understand use cases, ensure that the software being purchased is necessary, and align to the organization's holistic application strategy," he said. ... Another proven method to reduce software bloat is to rationalize the SaaS portfolio, Pippenger said. "We see a lot of redundancy, especially in categories like online training, team collaboration, project management, file storage and sharing," he said.


Elevate Your Decision-Making: The Impact of Observability on Business Success

In the business world, as complexity grows, answering “why” becomes ever more important. In the world of computing, observability is about answering why something is happening the way it is. Advanced observability tools act as the heartbeat of your environment, giving you enough context to debug issues and, when paired with best practices, prevent outages that disrupt the business. But how can observability serve as a catalyst for your organization’s growth? According to The Observability Market report and Gartner, enterprises will increase their adoption of observability tools by 30% by 2024. In recent years, the emergence of technologies such as big data, artificial intelligence, and machine learning has accelerated the adoption of observability practices in organizations. Harnessing these advanced tools (to name a few leading open-source options: Prometheus, Grafana, OpenTelemetry) empowers organizations to become more agile and responsive in their decision-making processes.
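The kind of instrumentation that tools like Prometheus or OpenTelemetry provide can be sketched with a toy metrics registry: count events and record latencies, then summarize them to answer the "why" questions. This is a simplified, hypothetical sketch using only the standard library, not the API of any real client library.

```python
import time
from collections import defaultdict

# Toy metrics registry illustrating the counter/latency pattern that real
# observability clients (Prometheus, OpenTelemetry) implement at scale.
class Metrics:
    def __init__(self):
        self.counters = defaultdict(int)     # metric name -> event count
        self.latencies = defaultdict(list)   # metric name -> latency samples

    def observe(self, name: str, seconds: float) -> None:
        self.counters[name] += 1
        self.latencies[name].append(seconds)

    def summary(self, name: str) -> dict:
        samples = self.latencies[name]
        return {
            "count": self.counters[name],
            "avg_seconds": sum(samples) / len(samples) if samples else 0.0,
        }

metrics = Metrics()

def handle_request():
    start = time.perf_counter()
    # ... real work would happen here ...
    metrics.observe("checkout_request", time.perf_counter() - start)
```

In a production setup, a real client library would export these measurements to a backend such as Prometheus, where Grafana dashboards and alerting rules turn them into the business context the article describes.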


How Cybersecurity Leaders Can Capitalize on Cloud and Data Governance Synergy

In today’s modern organizations, an explosive volume of digital information is being used to drive business decisions and activities. However, both organizations and individuals may lack the tools and resources to effectively carry out data governance at scale. I’ve experienced this scenario in both large private and public sector organizations: trying to wrangle data in complex environments with multiple stakeholders, systems, and settings. It often leads to incomplete inventories of systems and their data, along with who has access to it and why. Cloud-native services, automation, and innovation enable organizations to address these challenges as part of their broader data governance strategies and under the auspices of cloud governance and security. Many IaaS hyperscale cloud service providers offer native services to enable activities such as data loss prevention (DLP). For example, Amazon Macie automates the discovery of sensitive data, provides cost-efficient visibility, and helps mitigate the threats of unauthorized data access and exfiltration.
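The core idea behind automated sensitive-data discovery can be shown with a tiny pattern-based scan. This is a deliberately simplified illustration of the kind of classification a managed service such as Amazon Macie automates at cloud scale; the patterns below are toy examples, not production-grade detectors or Macie's actual logic.

```python
import re

# Simplified detectors for two sensitive-data categories. Real DLP services
# use far more robust detection (checksums, context, ML-based classifiers).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_text(text: str) -> dict:
    """Return matches per sensitive-data category found in the text."""
    findings = {}
    for label, pattern in PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            findings[label] = matches
    return findings
```

Run against every object in a storage bucket, even a crude scan like this produces the inventory of "what sensitive data lives where" that the article identifies as the missing piece of large-scale data governance.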


How Not to Use the DORA Metrics to Measure DevOps Performance

Part of the beauty of DevOps is that it doesn't pit velocity and resilience against each other but makes them mutually beneficial. For example, frequent small releases with incremental improvements can more easily be rolled back if there's an error. Or, if a bug is easy to identify and fix, your team can roll forward and remediate it quickly. Yet again, we can see that the DORA metrics are complementary; success in one area typically correlates with success across others. However, driving success with this metric can be an anti-pattern: it can unhelpfully conceal other problems. For example, if your strategy to recover a service is always to roll back, then you’ll be taking the value of your latest release away from your users, even those who don’t encounter your newfound issue. While your mean time to recover will be low, your lead time figure may now be skewed and not account for this rollback strategy, giving you a false sense of agility. Perhaps looking at what it would take to always be able to roll forward is the next step on your journey to refine your software delivery process.
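The skew the article warns about can be made concrete with a small calculation. In this hypothetical sketch, mean time to recover is measured from detection to restoration, while the "true" lead time of a change is measured to the release that actually stays in production rather than to a first deployment that was rolled back. The event records and field names are illustrative.

```python
from datetime import datetime, timedelta

def mean_time_to_recover(incidents: list) -> timedelta:
    """Average duration from failure detection to service restoration."""
    durations = [i["restored"] - i["detected"] for i in incidents]
    return sum(durations, timedelta()) / len(durations)

def true_lead_time(commit_time: datetime, final_release: datetime) -> timedelta:
    """Lead time to the release that stays in production.

    Measuring only to the first (rolled-back) deployment would make the
    figure look shorter than the time users actually waited for the change.
    """
    return final_release - commit_time
```

If a team always recovers by rolling back, `mean_time_to_recover` can stay impressively low while `true_lead_time`, which includes the wait for the fixed re-release, keeps growing, which is exactly the false sense of agility described above.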



Quote for the day:

"A little more persistence, a little more effort, and what seemed hopeless failure may turn to glorious success." -- Elbert Hubbard