Daily Tech Digest - April 30, 2023

AI for security is here. Now we need security for AI

As the mass adoption and application of AI are still fairly new, the security of AI is not yet well understood. In March 2023, the European Union Agency for Cybersecurity (ENISA) published a document titled Cybersecurity of AI and Standardisation with the intent to “provide an overview of standards (existing, being drafted, under consideration and planned) related to the cybersecurity of AI, assess their coverage and identify gaps” in standardization. Because the EU likes compliance, the focus of this document is on standards and regulations, not on practical recommendations for security leaders and practitioners. There is a lot about the problem of AI security online, although it looks significantly less compared to the topic of using AI for cyber defense and offense. Many might argue that AI security can be tackled by getting people and tools from several disciplines including data, software and cloud security to work together, but there is a strong case to be made for a distinct specialization. When it comes to the vendor landscape, I would categorize AI/ML security as an emerging field. The summary that follows provides a brief overview of vendors in this space.

Enterprises Die for Domain Expertise Over New Technologies

Domain expertise is important to build a complete ecosystem that can scale. This can help businesses leverage relevant knowledge and datasets to develop custom solutions. This is why enterprises look for enablers that can bring in the domain expertise for particular use cases. ... One of the challenges that companies encounter today is how to utilise data effectively as per their business needs. According to a global survey conducted by Oracle and Seth Stephens-Davidowitz, 91% of respondents in India reported a ten-fold increase in the number of decisions they make every day over the past three years. As individuals attempt to navigate this increased decision-making, 90% reported being inundated with more data from various sources than ever before. “Some interesting findings we came across was that respondents who wanted technological assistance also said that the technology should know its workflow and what it is trying to accomplish,” Joey Fitts, vice president, Analytics Product Strategy, Oracle told ET.

Amazon’s quiet open source revolution

Let’s remember that the open source spadework is not done. For example, AWS makes a lot of money from its Kubernetes service but still barely scrapes into the top 10 contributors for the past year. The same is true for other banner open source projects that AWS has managed services for, such as OpenTelemetry, or projects its customers depend on, such as Knative (AWS comes in at #12). What about Apache Hadoop, the foundation for AWS Elastic MapReduce? AWS has just one committer. For Apache Airflow, the numbers are better. This is glass-half-empty thinking, anyway. The fact that AWS has any committers to these projects is an important indicator that the company is changing. A few years back, there would have been zero committers to these projects. Now there are one or many. All of this signals a different destination for AWS. The company has always been great at running open source projects as services for its customers. As I found while working there, most customers just want something that works. But getting it to “just work” in the way customers want requires that AWS get its hands dirty in the development of the project.

Response and resilience in operational-risk events

The findings have several urgent implications for leaders as they think about the overall resilience of their institutions, how to minimize the risk of such events occurring, and how to respond when crises do hit. The findings strongly suggest that broad market forces and industry dynamics can magnify adverse effects. Effective crisis and mitigation planning has to take account of these factors. Experience supports this view. In the not-so-distant past, especially before the financial crisis of 2008–09, many companies approached operational-risk measures from a regulatory perspective, with an economy of effort, if not formalistically. Incurring costs and paying fines for unforeseen breaches and events were accordingly counted as the cost of doing business. Amid crises, furthermore, communications were sometimes aimed at minimizing true losses—an approach that risked a damaging cycle of upward revisions. The present environment, however, is unforgiving of such approaches. An accelerated pace of change, especially in digitization and social media, magnifies the negative effects of missteps in the aftermath of crisis events. 

Developers Need a Community of Practice — and Wikis Still Work

This subject has flattened out a bit since the pandemic, after which fewer developers worked next to each other and keeping remote members connected is more the norm. A good Community of Practice should just look like a private Stack Overflow, with discussions on topics of concern to devs across the organization. This applies to most organizations that have siloed teams. If you are part of a one-team company, then a CoP should not be something you need right now — just be ready to be proactive when you are part of a bigger setup. The first seeds are usually sown when “best practice” is discussed, and managers realize that there is no point in having just one team getting things right. This is the time to establish a developer CoP, before something awkward gets imposed from above. The topics are often the complications that an organization stubbornly brings to existing tech; like understanding arcane branching policies, or working with an old version of software because it is the only sanctioned version, etc. 

Five Leadership Mindsets For Navigating Organizational Complexity: Rethinking Chaos And Opportunity

The world is unlikely to suddenly settle down. With that in mind, the context around chaotic moments changes. It’s no longer about just dealing with what’s in front of you; it’s about writing the script for the team to respond to future disruptions. So don’t just deal with it as a leader. Start viewing disruptions as valuable learning experiences that build resilience and adaptability within your organization. And once you have navigated through, take a moment to create a playbook for the future. Use retrospection with your team to find out the specific things that worked and the things that didn’t. ... “I don’t deal well with change” is a bad personal strategy, and I recommend that you drop any ideas that adaptability is an innate trait possessed only by a select few. With that said, I've found that learning requires experience. Social and business safety nets are key, so employees can learn with less fear. Encourage your employees to challenge their comfort zones, experiment with new approaches and learn from setbacks to develop the skills and strategies necessary for navigating change effectively.

Why Don’t We Have 128-Bit Computers Yet?

It’s practically impossible to predict the future of computing, but there are a few reasons why 128-bit computers may never be needed:Diminishing returns: As a processor’s bit size increases, the performance and capabilities improvements tend to become less significant. In other words, the improvement from 64- to 128- bits isn’t anywhere as dramatic as going from 8-bit to 16-bit CPUs, for example. Alternative solutions: There may be alternative ways to address the need for increased processing power and memory addressability, such as using multiple processors or specialized hardware rather than a single, large processor with a high bit size. Physical limitations: It may turn out to be impossible to create a complex modern 128-bit processor due to technological or material constraints. Cost and resources: Developing and manufacturing 128-bit processors could be cost-prohibitive and resource-intensive, making mass production unprofitable. While it’s true that the benefits of moving from 64-bit to 128-bit might not be worth it today, new applications or technologies might emerge in the future that could push the development of 128-bit processors.

Secret CSO: Rani Kehat, Radiflow

The success of your cybersecurity is difficult to measure. For example, many believe that if you haven’t been hacked, your cybersecurity efforts must be working. This isn’t the case – it may well be that you just haven’t been hacked yet. Thankfully, there are methods to measure how well security practices are working; effectiveness of controls, corporate awareness and reporting of suspicious events, and mitigation RPO are among the most helpful here. ... API security is the best. APIs have become integral to programming web-based interactions, which means hackers have zeroed in on them as a key target. Zero Trust, on the other hand, has become a buzzword that in theory should reduce vulnerabilities but in reality is not practical to implement, slows down application performance, and hampers productivity. ... To get formal professional certifications. Not only have these helped advance my career at every stage, but they have also ensured that my security knowledge remains up to date against constantly developing hacker tactics and techniques.

How blockchain technology is paving the way for a new era of cloud computing

“Blockchain itself can be used within a private ‘walled garden’ as well,” Ian Foley, chief business officer at data storage blockchain firm Arweave, told VentureBeat. “It is a technology structure that brings immutability and maintains data provenance. Centralized cloud vendors are also developing blockchain solutions, but they lack the benefits of decentralization. Decentralized cloud infrastructures are always independent of centralized environments, enabling enterprises and individuals to access everything they’ve stored without going through a specific application.” Decentralized storage platforms use the power of blockchain technology to offer transparency and verifiable proof for data storage, consumption and reliability through cryptography. This eliminates the need for a centralized provider and gives users greater control over their data. With decentralized storage, data is stored in a wide peer-to-peer (P2P) network, offering transfer speeds that are generally faster than traditional centralized storage systems.

A True Leader Doesn't Just Talk the Talk — They Walk the Walk. Here's How to Lead from the Front.

So, what does "walking the walk" look like? Depending on their position within the company, it looks different for everyone. Let's say you're the CEO of a company. To provide valuable insights and opinions, you need to be proficient in the product you're selling and stay current on industry trends and news. If you were managing a customer service team, what would you do? Participating in difficult conversations can help your team understand what's expected of them. As a leader, it is imperative that you set an excellent example for your team members and achieve results. Leaders must lead by example and practice what they preach. Talking about honesty, integrity and accountability is easy, but it's much harder to embody them daily. Regarding work-life balance, taking time off and setting boundaries are essential. You must cultivate a culture of listening to your team to foster a culture of open communication.

Quote for the day:

"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard

Daily Tech Digest - April 29, 2023

When will the fascination with cloud computing fade?

This is a fair question considering other technology trends in the IT industry, such as client/server, enterprise application integration, business-to-business, distributed objects, service-oriented architecture, and then cloud computing. I gladly rode most of these waves. All these concepts still exist, perhaps in larger scale than when public interest was hot. However, they are not discussed as much these days since other technology trends grab more headlines, such as cloud computing and artificial intelligence. So, should the future hold more interest in cloud computing, less interest, or about the same? On one hand, cloud computing is becoming more standardized, and, dare I say, commoditized, with most cloud providers offering similar services and capabilities as their competitors. This means businesses no longer need to spend as much time and resources evaluating different providers. Back in the day, I spent a good deal of time talking about the advantages of cloud storage on one provider over another. 

Gen Z mental health: The impact of tech and social media

Younger generations tend to engage with social media regularly, in both active and passive ways. Almost half of both millennial and Gen Z respondents check social media multiple times a day. Over one-third of Gen Z respondents say they spend more than two hours each day on social media sites; however, millennials are the most active social media users, with 32 percent stating they post either daily or multiple times a day. Whether less active social media use by Gen Z respondents could be related to greater caution and self-awareness among youth, reluctance to commit, or more comfort with passive social media use remains up for debate. ... Across generations, there are more positive than negative impacts reported by respondents; however, the reported negative impact is higher for Gen Z. Respondents from high-income countries (as defined by World Bank) were twice as likely to report a negative impact of social media on their lives than respondents from lower-middle-income countries.

Implementing SLOs-as-Code: A Case Study

Procore’s Observability team has designed our SLO-as-code approach to scale with Procore’s growing number of teams and services. Choosing YAML as the source of truth allows Procore a scalable approach for the company through centralized automation. Following the examples put forth by openslo.com and embracing a ubiquitous language like YAML helps avoid adding the complexities of Terraform for development teams and is easier to embed in every team’s directories. We used a GitOps approach to infrastructure-as-code (IaC) to create and maintain our Nobl9 resources. The Nobl9 resources can be defined as YAML configuration (config) files. In particular, one can declaratively define a resource’s properties (in the config file) and have a tool read and process that into a live and running resource. It’s important to draw a distinction between the resource and its configuration, as we’ll be discussing both throughout this article. All resources, from projects (the primary grouping of resources in Nobl9) to SLOs, can be defined through YAML. 

Data Management: Light At The End Of The Tunnel?

Bansal explains that the answer to the question about the technologies firms can use to manage their data better from inception to final consumption and analysis should be prefaced by a clear understanding of the type of operating model they are looking to establish—centralized, regional or local. It should also be answered, he says, in the context of the data value chain, the first step of which entails cataloging data with the business definitions at inception to ensure it is centrally located and discoverable to anyone across the business. Firms also need to use the right data standards to ensure standardization across the firm and their business units. Then, with respect to distribution, they need to establish a clear understanding of how they will send that data to the various market participants they interact with, as well as internal consumers or even regulators. “That’s where application programming interfaces [APIs] come in, but it’s not a one-size-fits-all,” he says. “It’s a common understanding that APIs are the most feasible way of transferring data, but APIs without common data standards do not work.”

Three Soft Skills Leaders Should Look For When They're Recruiting

Adaptability and flexibility have always been valuable soft skills. But given the experiences we’ve all lived through over the last two-odd years, I think it’s safe to say they’ve never been more important. The continued uncertainty of today’s economic environment makes them two competencies you definitely want your employees to possess. Given what the business world can dish out, you should assess whether a candidate has what it takes to adapt to changing situations before you hire them. Adaptability and flexibility are essential components of problem-solving, time management, critical thinking and leadership.  To gauge whether an interviewee has these two qualities, have them tell you about changes they made during the pandemic. What kind of direction and support did they receive from their employers? What did they think they needed but didn’t get? And how did they continue to work without whatever they lacked? Explore the candidate’s response to learning new technologies, handling last-minute client changes and sticking to a project timeline when something’s handed off to them late.

5 Benefits of Enterprise Architecture Tools

Enterprise architecture tools provide valuable insights for strategic planning and decision-making. By capturing and analyzing data on IT assets, processes, and interdependencies, these tools enable organizations to assess the impact of proposed changes or investments on their IT landscape. This helps in identifying potential risks and opportunities, evaluating different scenarios, and making informed decisions about IT investments, initiatives, and resource allocations. With improved strategic planning, organizations can prioritize IT projects, optimize their technology investments, and align their IT roadmap with their business objectives. Collaboration is crucial for effective IT landscape management, and enterprise architecture tools facilitate collaboration among different stakeholders, including IT teams, business units, and executives. These tools provide a centralized platform for sharing and accessing IT-related information, documentation, and visualizations. This promotes cross-functional collaboration, enables effective communication, and ensures that all stakeholders are on the same page when it comes to the organization’s IT landscape. 

Underutilized SaaS Apps Costing Businesses Millions

Managing a SaaS portfolio is truly a team sport and requires stakeholders from across the organization — but specifically IT and finance teams are most directly involved, driving the charge, Pippenger said. "For both of these teams, having a complete picture of all SaaS applications in use is crucial," he said. "It provides IT the information they need to mitigate risks, strengthen the organization's security posture, and maximize adoption." Improved visibility provides finance teams with the information they need to properly forecast and budget for software in the future and identify opportunities for cost savings. "Both of these groups need to partner with business units and employees who are purchasing software to understand use cases, ensure that the software being purchased is necessary, and align to the organization's holistic application strategy," he said. ... Another proven method to reduce software bloat is to rationalize the SaaS portfolio, Pippenger said. "We see a lot of redundancy, especially in categories like online training, team collaboration, project management, file storage and sharing," he said.

Elevate Your Decision-Making: The Impact of Observability on Business Success

In the business world, as complexity grows, finding the answer to “why” becomes important. And in the world of computing, observability is about answering “why something is happening this way.” The advanced tools of observability act as the heart of your environment, giving you enough context to debug the issues and prevent the systems from business outages, if followed with the best practices. But how can observability serve as the catalyst for your organization’s growth? According to The Observability Market and Gartner’s report, enterprises will increase their adoption rate of observability tools by 30% by 2024. In recent years, the emergence of technologies such as big data, artificial intelligence, and machine learning has accelerated the adoption of observability practices in organizations. Harnessing these advanced tools (to name a few top open-source observability tools – Prometheus, Grafana, OpenTelemetry) empowers organizations to become more agile and responsive in their decision-making processes. 

How Cybersecurity Leaders Can Capitalize on Cloud and Data Governance Synergy

In today’s modern organizations, explosive amounts of digital information are being used to drive business decisions and activities. However, both organizations and individuals may not have the necessary tools and resources to effectively carry out data governance at a large scale. I’ve experienced this scenario in both large private and public sector organizations: trying to wrangle data in complex environments with multiple stakeholders, systems, and settings. It often leads to incomplete inventories of systems and their data, along with who has access to it and why. Cloud-native services, automation, and innovation enable organizations to address these challenges as part of their broader data governance strategies and under the auspices of cloud governance and security. Many IaaS hyperscale cloud service providers offer native services to enable activities such as data loss protection (DLP). For example, AWS Macie automates the discovery of sensitive data, provides cost-efficient visibility, and helps mitigate the threats of unauthorized data access and exfiltration.

How Not to Use the DORA Metrics to Measure DevOps Performance

Part of the beauty of DevOps is that it doesn't pit velocity and resilience against each other but makes them mutually beneficial. For example, frequent small releases with incremental improvements can more easily be rolled back if there's an error. Or, if a bug is easy to identify and fix, your team can roll forward and remediate it quickly. Yet again, we can see that the DORA metrics are complementary; success in one area typically correlates with success across others. However, driving success with this metric can be an anti-pattern - it can unhelpfully conceal other problems. For example, if your strategy to recover a service is always to roll back, then you’ll be taking value from your latest release away from your users, even those that don’t encounter your new-found issue. While your mean time to recover will be low, your lead time figure may now be skewed and not account for this rollback strategy, giving you a false sense of agility. Perhaps looking at what it would take to always be able to roll forward is the next step on your journey to refine your software delivery process.

Quote for the day:

"A little more persistence, a little more effort, and what seemed hopeless failure may turn to glorious success." -- Elbert Hubbard

Daily Tech Digest - April 28, 2023

CISOs Rethink Data Security With Info-Centric Framework

"Data is on a logarithmic curve; for every amount of data that I have next year, it's probably 2.5 times more than the amount of data I had this year," he says. "We're data hoarders, for lack of a better term; no one wants to get rid of people's information who have signed up to websites and forums and everything else, so we have this enormous data sprawl. That, in turn, leaves behind security blind spots." Further adding to the challenge is the fact that some data is of course more sensitive than other information, and some information doesn't need protecting at all, Rushing points out. And there's dynamism in terms of defining appropriate security levels as data ages. He uses a product launch to illustrate his point. "With a product release, we start off with a situation where no one knows about it, everything's embargoed, and you're protecting this important intellectual property," he explains. "And the next thing you know, it's released for public consumption. And it's suddenly not top secret anymore, in fact, you want the whole world to know about it."

How ‘Data Clean Rooms’ Evolved From Marketing Software To Critical Infrastructure

Data clean rooms as we know them today represent the first phase in leveraging “clean data.” User privacy is protected, while advertisers retain access to the necessary information. This model is now being extended and expanded upon in the enterprise. It is no longer about just protecting personal data. Companies need to act fast on data-derived insights, and therefore cannot compromise efficiency and collaborative abilities. They need truly comprehensive and dynamic data-sharing capabilities that can be quickly configured with little code and setup. ... As one of the key reasons for data clean rooms is the expanding IoT, businesses increasingly find themselves needing to demonstrate the provenance and veracity of their IoT data for business transactions or regulatory requirements. A data clean room must provide a single pane of glass for the trust and protection of IoT devices, the data they transmit and their data operations. This will require the need to authenticate IoT devices, protect the data as it travels from the device to the cloud and back to the device, and provide additional data points for audits.

ACID Transactions Change the Game for Cassandra Developers

For years, Apache Cassandra has been solving big data challenges such as horizontal scaling and geolocation for some of the most demanding use cases. But one area, distributed transactions, has proven particularly challenging for a variety of reasons. It’s an issue that the Cassandra community has been hard at work to solve, and the solution is finally here. With the release of Apache Cassandra version 5.0, which is expected later in 2023, Cassandra will offer ACID transactions. ACID transactions will be a big help for developers, who have been calling for more SQL-like functionality in Cassandra. This means that developers can avoid a bunch of complex code that they used for applying changes to multiple rows in the past. ... The advantage of ACID transactions is that multiple operations can be grouped together and essentially treated as a single operation. For instance, if you’re updating several points of data that depend on a specific event or action, you don’t want to risk some of those points being updated while others aren’t. ACID transactions enable you to do that.

Corporate boards pressure CISOs to step up risk mitigation efforts

The report also found that general misunderstandings in common cyber risk terminology could be a deterrent in developing effective strategies and communicating risk to company leadership. Cyberattacks have been increasing for several years now and resulting data breaches cost businesses an average of $4.35 million in 2022, according to an IBM report. Given the financial and reputational consequences of cyberattacks, corporate board rooms are putting pressure on CISOs to identify and mitigate cyber/IT risk. Yet, despite the new emphasis on risk management, business leaders still don’t have a firm grasp on how cyber risk can impact different business initiatives—or that it could be used as a strategic asset and core business differentiator. To better understand the current cybersecurity and IT risk challenges companies are facing, as well as steps executives are taking to combat risk, RiskOptics fielded a survey of 261 U.S. InfoSec and GRC leaders. Respondents varied in job level from manager to the C-Suite and worked across various industries.

What is the Spotify model in agile?

The Spotify model is just the autonomous scaling of agile, as hinted at in the paper’s name. It’s based on agile principles and unique features specific to Spotify’s organizational structure. This framework became wildly popular and was dubbed the “Spotify model,” with Henrik Kniberg credited as the inventor. ... Every other company wanted to adopt this framework for themselves. Spotify enjoyed a reputation for being innovative, and people assumed that if this framework worked so well for Spotify, it must also work great for them. Companies began to feel as if this framework was perfect, but nothing is perfect Spotify has changed its practices and ways of working over time — adapting its strategies and methodologies to changes in the market, user preferences, and more. The Spotify model itself was built with the company’s culture, values, and organizational structure in mind, with the ultimate goal of promoting cross-collaboration and innovation. As a result, it’s not a one-size-fits-all — the Spotify model was built around a foundation the company had already laid out.

Embracing zero-trust: a look at the NSA’s recommended IAM best practices for administrators

Knowing that credentials are a key target for malicious actors, utilizing techniques such as identity federation and single sign-on can mitigate the potential for identity sprawl, local accounts, and a lack of identity governance. This may involve extending SSO across internal systems and also externally to other systems and business partners. SSO also brings the benefit of reducing the cognitive load and burden on users by allowing them to use a single set of credentials across systems in the enterprise, rather than needing to create and remember disparate credentials. Failing to implement identity federation and SSO inevitably leads to credential sprawl with disparate local credentials that generally aren’t maintained or governed and represent ripe targets for bad actors. SSO is generally facilitated by protocols such as SAML or Open ID Connect (OIDC). These protocols help exchange authentication and autorization data between entities such as Identity Providers (IdP)’s and service providers. 

10 habits of people who are always learning new things

They’re the ones who are infinitely curious about the world around them – those who take things apart to find out how they work, or go on nature walks and prod everything with a stick, or do science experiments… Reading helps them stay informed about the world, learn from others’ experiences, and develop new perspectives. Whether it’s books, articles, blogs, or even social media, they make a habit of consuming content that feeds their mind and broadens their horizons. And they don’t just read books because they have to. No, they WANT to; they read just for pleasure and personal growth, across various genres and subjects. That’s why they have a well-rounded knowledge base and are very open-minded about other people’s perspectives! Just because you’ve gotten goal-setting down to an art doesn’t mean it’s all smooth sailing. Of course not. You’ll definitely be making mistakes. But mistakes don’t have to get you down. In fact, mistakes are perfect vehicles for learning, but only if you have a growth mindset.

How Security Leaders Should Approach a Challenging Budgeting Environment

Organizations need to understand that cybercriminals don’t care about the scope of the security controls. CIOs and CISOs cannot continue to operate in the dark without confidence about how well processes work; they need an understanding of what needs to be protected beyond the classical understanding of cybersecurity coverage. That means addressing cybersecurity from a business perspective. CISOs and CIOs can gain complete insight into the security posture and performance by converging tools like SIEM, SOAR, UEBA and business-critical security solutions, expanding the visibility beyond the IT infrastructure and into business-critical applications that contain invaluable information. A converged security solution can turn unqualified alerts into real, actionable intelligence by adding contextual information and automating responses. Another important thing to be mindful of is the pricing model for security solutions. Many are based on data volumes, which means the pricing is continuously increasing and unpredictable.

Is Web3 tech in search of a business model?

Changing their business models. I have worked with various startups in the distributed ledger technology space and what they were doing may have been revolutionising industries but all they were doing was replacing one technology with a new one without replacing the business model. The biggest challenge is how to use this new democratic power with its lower transaction costs and improved security to create new business models. The question is how to come up with such a commercial model? How can you monetise the system? In every other system, you monetise the system by creating a middleman. But the ultimate benefit of DLT is to do away with all middlemen. My fear is that all businesses will do if find a new way of creating new intermediaries using this technology. By definition, a business makes profits by adding value and that value is created through economies of scale and adding some form of brokerage in the process. The irony is that the whole purpose of this technology is to do away with the concept of the transaction, which is what capitalism is based on!

Defending Against the Evolving Infostealer Malware Threat

Flatley says employee education is very important in this space, as helping employees understand why following security policies is important will encourage compliance. “People are more apt to follow the rules if they understand the consequences. However, no amount of training will reduce this risk enough,” he says. That means security policies need to be enforced by technical means that are designed to prevent accidental or intentional non-compliance. “Even more important, we must understand that no amount of training or technical defenses will entirely stop this threat,” he says. Organizations must not only instrument a network to detect malicious activity and craft formal plans for remediating stolen identity information well in advance, but they must also practice them well so an attack can be acted upon quickly. Hazam points to education training tools such as simulated phishing attacks, which can help employees recognize and respond to real phishing emails, and gamified training programs, which can make the training more engaging and enjoyable for employees.

Quote for the day:

"Coaching isn't an addition to a leader's job, it's an integral part of it." -- George S. Odiorne

Daily Tech Digest - April 27, 2023

How can we build engagement in our organization’s data governance efforts?

The first thing to recognize is that establishing a data governance initiative is a change program—not a one-off project. Successful data governance programs change behaviors around how data is used, and changing behaviors takes time. Top-down impositions of data governance based on theory and text-heavy policies often fail to build engagement because they are detached from organizational context. The most successful transformations we have seen are the result of an organic development of data governance from organization and culture. This requires intentional communication, iteration, and open feedback based on listening to stakeholders and users. Communicate the benefits of data governance by emphasizing the positive impact the program can have on your organization’s ability to achieve its strategic objectives, such as improving decision-making, enhancing data quality, and ensuring regulatory compliance. Organizations must be willing to accept that there will be challenges and pushback to the program. 

The State of Organizations 2023: Ten shifts transforming organizations

‘True hybrid’: The new balance of in-person and remote work. Since the COVID-19 pandemic, about 90 percent of organizations have embraced a range of hybrid work models that allow employees to work from off-site locations for some or much of the time. It’s important that organizations provide structure and support around the activities best done in person or remotely. ... Closing the capability chasm. Companies often announce technological or digital elements in their strategies without having the right capabilities to integrate them. To achieve a competitive advantage, organizations need to build institutional capabilities—an integrated set of people, processes, and technology that enables them to do something consistently better than competitors do. ... Walking the talent tightrope. Business leaders have long walked a talent tightrope—carefully balancing budgets while retaining key people. In today’s uncertain economic climate, they need to focus more on matching top talent to the highest-value roles. McKinsey research shows that, in many organizations, between 20 and 30 percent of critical roles aren’t filled by the most appropriate people.

How prompt injection can hijack autonomous AI agents like Auto-GPT

A new security vulnerability could allow malicious actors to hijack large language models (LLMs) and autonomous AI agents. In a disturbing demonstration last week, Simon Willison, creator of the open-source tool datasette, detailed in a blog post how attackers could link GPT-4 and other LLMs to agents like Auto-GPT to conduct automated prompt injection attacks. Willison’s analysis comes just weeks after the launch and quick rise of open-source autonomous AI agents including Auto-GPT, BabyAGI and AgentGPT, and as the security community is beginning to come to terms with the risks presented by these rapidly emerging solutions. In his blog post, not only did Willison demonstrate a prompt injection “guaranteed to work 100% of the time,” but more significantly, he highlighted how autonomous agents that integrate with these models, such as Auto-GPT, could be manipulated to trigger additional malicious actions via API requests, searches and generated code executions. Prompt injection attacks exploit the fact that many AI applications rely on hard-coded prompts to instruct LLMs such as GPT-4 to perform certain tasks. 

Agility and Architecture

When making architectural decisions, teams balance two different constraints:If the work they do is based on assumptions that later turn out to be wrong, they will have more work to do: the work needed to undo the prior work, and the new work related to the new decision. They need to build things and deliver them to customers in order to test their assumptions, not just about the architecture, but also about the problems that customers experience and the suitability of different solutions to solve those problems. No matter what, teams will have to do some rework. Minimizing rework while maximizing feedback is the central concern of the agile team. The challenge they face in each release is that they need to run experiments and validate both their understanding of what customers need but also the viability of their evolving answer to those needs. If they spend too much time focused just on the customer needs, they may find their solution is not sustainable, but if they spend too much time assessing the sustainability of the solution they may lose customers who lose patience waiting for their needs to be met.

Beginning of the End of OpenAI

Maybe OpenAI was not anticipating its success with ChatGPT technology back then. Now, the explanation for the trademark application can be just so that no one clones the company makes the most sense currently. Or maybe not. Maybe the Sam Altman led company has bigger plans. The company had already registered with AI.com to redirect it to ChatGPT — a pretty strong statement. Well, now that the AI arms race is in full glory, there might be something that Google can do as well to catch up. Up until now, Google made strides by improving its technology, but it might have another trick up its sleeve. If OpenAI files for a trademark on ‘GPT’, which is more than just a product name, but a name of technology, and the USPTO accepts it or even considers it, the application will be moved for an ‘opposition period’. ... OpenAI may be getting a bit too possessive about their products. GPT stands for Generative Pre-trained Transformers and interestingly, ‘Transformer’ was introduced by Google in 2017 as a neural network architecture, for which the company has also filed a patent.

Macro trends in the tech industry

Managing tech debt and maintaining system health are essential for the long-term success of any product or system. Tech debt has beenin the news cycle over the last six months, but it’s certainly not a new concept. We’re happy that it’s being discussed, but ultimately managing tech debt is not rocket science: good product managers and tech leads should already be considering cross-functional requirements, including tech debt management. Fitness functions can identify and measure important quality characteristics, and we can describe tech debt in terms of how it may improve those characteristics. ... As low-code and no-code platforms continue to evolve and mature — and especially because these tools are likely to be augmented with AI enabling them to produce applications faster or for less expert users — we decided to reiterate our advice around bounded low-code platforms. We remain skeptical because the vendor claims around these tools are, basically, dangerously optimistic. There are no silver bullets and a low-code platform should always be evaluated in context as a potential solution, not used as a default option.

7 venial sins of IT management

First of all, comparing the two, being a business person is easier. Second of all, unless you think the company’s CFO should be a business person, not a finance person, and that the chief marketing officer should be a business person and not a marketeer, the whole thing just isn’t worth your time and attention. But since I have your attention anyway, here’s the bad news about the good news: CIOs who try to be business people instead of technology people are like the high school outcasts who are desperately trying to join the Cool Kids Club. They’ll still be excluded, only now they’ve added being pathetic to their coolness deficit. ... Product management is the business discipline of managing the evolution of one of a company’s products or product lines to maintain and enhance its marketplace appeal. IT product management comes out of the agile world, and has at best a loose connection to business product management. Because while there is some limited point in enhancing the appeal of some chunk of a business’s technology or applications portfolio, that isn’t what IT product management is about.

UK government introduces Digital Markets Bill to Parliament

CMA chief executive Sarah Cardell welcomed the Bill and the powers it granted to the competition regulator. “This has the potential to be a watershed moment in the way we protect consumers in the UK and the way we ensure digital markets work for the UK economy, supporting economic growth, investment and innovation,” she said. “Digital markets offer huge benefits, but only if competition enables businesses of all shapes and sizes the opportunity to succeed,” said Cardell. “This Bill is a legal framework fit for the digital age. It will establish a tailored, evidenced-based and proportionate approach to regulating the largest and most powerful digital firms to ensure effective competition that benefits everyone.” She added that the CMA will support the Bill through the legislative process, and that it stands ready to use these powers once it has been approved by Parliament. Baroness Stowell, chair of the House of Lords Communications and Digital Committee, which called for the creation of a new digital regulator like the DMU in March 2019, said the Bill is about ensuring a level playing field in digital markets.

Spring Cleaning the Tech Stack

As a company matures, part of the natural process is accumulating a plethora of applications along the way, which then requires IT to routinely evaluate to eliminate waste. Richard Capatosto, IT manager at Backblaze, explains IT spends a lot of time and energy tracking down, identifying, and operationalizing these “rogue” applications. “They are typically very inefficient to support for several reasons,” he says. “First, they are sometimes one-off apps which were purchased outside of our enterprise applications stack and may not have enterprise-level security.” Usually in those instances, they’ve been purchased outside of normal processes (e.g., on credit cards), which creates further downline work. “Second, these applications often do not support enterprise SSO and provisioning, which is key to maintaining efficient and secure IT operations,” he says. Eliminating or upgrading these applications reduces unnecessary spend, conforms to security best practices, and lets the IT team provide guidance about better tech-based workflows based on existing and potential applications.

Generative AI and security: Balancing performance and risk

From a security perspective, it’s both appealing and daunting to imagine an ultra-smart, cloud-hosted, security-specific AI beyond anything available today. In particular, the sheer speed offered by an AI-powered response to security events is appealing. And the potential for catastrophic mistakes and their business consequences is daunting. As an industry observer, I often see this stark dichotomy reflected in marketing, like that of the recently-launched Microsoft Security Copilot. One notices Microsoft’s velocity-driven pitch – “triage signals at machine speed” and “respond to incidents in minutes, instead of hours or days.” But one also notices the cautious conservatism of the product name: it’s not a pilot, it’s merely a copilot. Microsoft doesn’t want people getting the idea that this tech can, all by itself, handle the complex job of creating and executing a company’s cybersecurity strategy. That, it seems to me, is the approach we should all be taking to these tools, while carefully considering what type of data can and should be fed to these algorithms. 

Quote for the day:

"Time is neutral and does not change things. With courage and initiative, leaders change things." -- Jesse Jackson

Daily Tech Digest - April 26, 2023

How to vet your vendors: Ensuring data privacy and security compliance

Equally as important is ensuring that the vendors actually adhere to regulatory requirements and checking what data privacy infrastructure and security measures they have in place. Do they employ permission and user access controls, employee security awareness, patch management, system configuration management and periodic penetration testing? How do they handle data subject concerns? Do they notify new data subjects? Is there an opt-in/opt-out feature? Are databases accurate, and are they updated regularly based on customer feedback and privacy requests? ... Finally, ask about the organization’s overall mindset and handling of data security and privacy. Have they made it a priority across their organization? Do ALL employees receive data and privacy-related training, even if the entire team doesn’t work on those issues directly? A third-party partner that goes above and beyond in this capacity will make for a more reliable and proactive partner across the board.

Z Energy’s CDO: ‘First trust, then transform’

My view on transformation—digital transformation, in particular—is we’re moving toward an endpoint. Lots of people will say it’s ever-changing, and I agree that, from a technology point, it is. But to me, the endpoint is an agile organization, and I don’t mean agile as in the way we think about doing work, but a nimble organization. If you can transform your organization to the point where it’s able to rapidly respond to whatever happens, then that’s the transformation. So, is there an endpoint to that? There are always tweaks along the way, but you can see organizations move from being static to being able to deal with whatever comes at them. That’s relevant to us at Z, because you could say, “In 40 years’ time, there’s no future in hydrocarbons.” That might happen in 10 years or 100 years. I have no idea which of those is true, and I have to be ready for all of them. We also don’t know what the replacements are going to be. Are we looking at electricity, hydrogen? What’s the role of biofuels here? All of those things are rapidly changing. The Prime Minister actually just announced that the biofuels mandate is now going to be cancelled, so how do we respond to that?

Can this new prototype put an end to cyberattacks?

The new prototype, called the Arm Morello Evaluation Board, aims to put an end to this. It is based on the CHERI (capability hardware enhanced RISC instructions) instruction set architecture, which was developed by Cambridge University and SRI International. It is compartmentalized to ensure that any breaches remain confined to a particular aspect, rather than spreading throughout the whole system. This is just one of the scenarios where CHERI's memory-safe features come in handy. Access to the technology was facilitated by the Digital Security by Design (DSbD), a government-backed initiative that aims to improve the safety of the UK's digital landscape. Although it is still in the research phase, the prototype is claimed to have the potential to help protect industries and firms. already, the programme has racked up over a thousand days in development work wot other 13 million lines of code being experimented with. There will also be a new round of experiments starting from May 25, which will explore porting the Morello platform, as well as how the CHERI architecture can secure applications against memory flaws and whether code can be improved by highlighting errors and vulnerabilities.

Don’t Let Time Series Data Break Your Relational Database

Time series is all about understanding the current picture of the world and offering immediate insight and action. Relational databases can perform basic data manipulation, but they can’t execute advanced calculations and analytics on multiple observations. Because time series data workloads are so large, they need a database that can work with large datasets easily. Apache Arrow is specifically designed to move large amounts of columnar data. Building a database on Arrow gives developers more options to effectively operate on their data by way of advanced data analysis and the implementation of machine learning and artificial intelligence tools such as Pandas. Some may be tempted to simply use Arrow as an external tool for a current solution. However, this approach isn’t workable because if the database doesn’t return data in Arrow format right from the source, the production application will struggle to ensure there’s enough memory to work with large datasets. The code source will also lack the compression Arrow provides. 

When cloud pros fumble office politics

The adoption of cloud services can create tension between early adopters and those who are resistant to change. Early adopters may feel frustrated by the resistance of others, while those who are resistant may feel excluded from decision-making processes and overwhelmed by the pace of change. The fix here is education and empathy. I’m often in the middle between factions that both feel threatened by the pace of cloud adoption. One group believes that it’s too fast; the other believes it’s too slow. Both sides need to hear each other out and adapt a pace that seems reasonable—and more importantly, that returns the most value back to the business. ... Cloud services can raise concerns about security and privacy, particularly in industries that store sensitive data. Employees may be worried about the security of their personal data, while IT departments may be stressed about the security of company data stored in the cloud. Of course, cloud-based security has been better than traditional security for some time now. But that’s not the perception, and you’re dealing with perceptions, not realities.

Where did Microservices go

One of the most significant hurdles is conducting transactions across multiple services. Although there are several methods for handling distributed transactions, such as the two-phase commit protocol, compensating transactions, event-driven architectures, and conflict-free replicated data types, none of them can provide the same simplicity that developers enjoy in a monolithic architecture with a database that offers transaction functionality. When things go wrong in a distributed system, data inconsistency can arise, which is perhaps the worst problem a developer wants to deal with. ... Serverless computing is actually an evolution of Microservices architecture instead of a replacement. Both approaches share the same goal of breaking down monolithic applications into smaller, more manageable components. However, while microservices typically involve deploying each service to a separate container or instance, serverless computing allows developers to focus solely on the code for individual functions, without worrying about the underlying infrastructure.

How AI Can Transform The Software Engineering Process

Architecture definition - As far as app architecture goes, AI cannot evaluate the trade-offs between different architectural decisions. So it will still rely on the intuition and experience of a senior developer for the most part. Nevertheless, AI can drill down the architecture by suggesting relevant services from public cloud providers or calculating the TCO of the target architecture. Coding - Writing code is one of the areas that will definitely benefit from AI. For example, when using Bing AI, the role of senior engineers will be to verify and polish the code since the tool still makes mistakes. A new method for developing code will be applied widely: prompt engineering. It will be used for generating code snippets based on given prompts, facilitating prototyping and iterating on different ideas. Unit tests. Since unit tests are typically automated, they are one of the areas where AI will be most useful. For example, CodeWhisperer does an excellent job at automating unit tests.

Welcome to the postmodern enterprise architecture era

Postmodern enterprise architecture is geared toward the computer science world as we understand it today. The talent pool has greatly expanded, and while there are still talent shortages, the ability to build and retain a high-performing team is within any company's grasp. The software and hardware building blocks have greatly matured; computing environments can be set up or resized in minutes, and complex user experiences can be built out of commodity parts. The wall between the business and engineers is crumbling, with cross-functional agile teams working together to incrementally improve with each (anytime you need to) release. Instead of systems, we are thinking more and more about platforms that both architects and our business partners can adapt for use in the latest customer experience. In this postmodern world, we need an enterprise architecture function that is built for today. Good news: We don't have to start from scratch. We have developed many great practices and utilities on the journey to modern enterprise architecture, and now we must consider how to use those tools cost-effectively.

Clocking out: Millennials and the workforce

In perhaps the finest section of Saving Time, Odell comes across an “embarrassingly spot-on characterization” of her own life in an academic paper. The sociologist Hartmut Rosa sketches out the life and habits of a fictitious professor named Linda. Linda has a job and some means, but she feels she is chronically busy, “always falling short and running behind” her various commitments. It is possible to be genuinely ensnared by a lack of time—there are those who have to work multiple jobs to pay the rent while also raising children—but Rosa argues that Linda’s predicament is self-generated. According to Odell’s analysis, Linda sees herself as “controlled and surveilled” by society’s expectation that she be busy and productive at all times, by what Rosa neatly calls the “logic of expansion.” This concept has been so thoroughly ingrained that it has been adopted even by those with plenty of agency. This analysis is squeezed into the barnstorming first half of Saving Time. 

9 Questions for IT Leaders to Ask About Cloud Cybersecurity

Visibility and context are two of the top challenges in cloud cybersecurity, according to Rick McElroy, principal cybersecurity strategist at cloud computing company VMware. “Who is logging in to what and when? Who is uploading private documents to public file shares? How can I follow an identity around a multi-cloud environment to determine if it is doing something malicious? Is this PowerShell script something my system administrators are using or is it part of a ransomware attack?” he asks. “These are all hard questions to answer for teams today.” Amit Shaked, co-founder and CEO of multi-cloud data security platform Laminar, warns about the increase in unknown or “shadow data.” “Data scientists and developers can now proliferate data in just a few clicks with agile cloud services,” he explains. “As a result, it's become easier than ever before for IT and security teams to lose sight of this data.” Bringing together teams that have historically worked in siloes can help to increase cloud visibility and teams’ ability act on security needs.

Quote for the day:

"You either have to be first, best, or different." -- Loretta Lynn

Daily Tech Digest - April 24, 2023

Is Strategic Thinking Dead? 5 Ways To Think Better For An Uncertain Future

Strategic thinking is distinguished from tactical thinking because it takes a longer view rather than reacting to events as they happen. It pushes you to be proactive in your actions, rather than reactive. And even in the addressing the immediate, strategic thinking can actually increase your effectiveness—because your advanced planning will have given you the opportunity to explore potential situations, assess responses and judge outcomes—and these can prepare you for how you react when you have less runway. ... One of the hallmarks of a strategic thinker is clarity of purpose. Be sure you’re clear about where you want to go—as an individual, a team or a business. Know your true north because it will help you choose wisely among multiple options. The language you choose to describe where you want to be (or how you understand a challenge) will constrain or create possibilities, so also be careful about how you describe your intentions. If your purpose is to unleash human potential for students, that will likely take you farther than a goal to simply provide great classroom experiences. 

Why Backing up SaaS Is Necessary

Looking at the possibilities to protect their data on those SaaS-platforms, organisations started to quickly realise that their SaaS solutions were not as protected as their other applications run in their own datacentre or their private cloud. Companies that did know that fact had to put up with it as the product forced them to use it as it was. Users had to learn the hard way that most SaaS solutions have a shared responsibility model where the customer is responsible for his or her own data. ... Even more critically, it’s important to ensure backups are stored in an independent cloud dedicated to data protection and not dependent on one of the large hyperscalers. A third-party cloud gives total control over backed up data and can easily ensure three to four copies are always made and reside in multiple locations. By retaining SaaS data in an independent backup-focused cloud, customers can also avoid the egress charges that come part and parcel with the public cloud. These extra charges often result in surprise bills after data restores and make it difficult to budget.

7 steps to take before developing digital twins

Leaders in any emerging technology area look for stories to inspire adoption. Some should be inspirational and help illustrate the art of the possible, while others must be pragmatic and demonstrate business outcomes to entice supporters. If your business’s direct competitors have successfully deployed digital twins, highlighting their use cases often creates a sense of urgency. ... Harry Powell, head of industry solutions at TigerGraph, says, “When creating a digital twin of a moderately sized organization, you will need millions of data points and relationships. To query that data, it will require traversing or hopping across dozens of links to understand the relationships between thousands of objects.” Many data management platforms support real-time analytics and large-scale machine learning models. But digital twins used to simulate the behavior across thousands or more entities, such as manufacturing components or smart buildings, will need a data model that enables querying on entities and their relationships

Enterprise Architecture Management (EAM) in digital transformation

The point is to accompany these “things” throughout their entire life cycle on the basis of a coherent technology vision, to recognise innovation potential, to identify technology risks, to derive a technology strategy. And often EAM already fails because of this corporate language, because the mostly abstract orders, including abstract or economic business language, come directly from the board and “have to be implemented”. There is usually no budgeting, because “everyone has to participate”. This is the reality, and EAM is ground between the board and development and operations. ... What could be a benefit of EAM? You always have to think about this question in the context of your own company! A TOGAF copy of the EAM goals or principles is not helpful, e.g. “The primary goal of EAM is cost reduction”. Has never worked. Yes, it may be that costs can be reduced. But EAM always brings more quality, and the cost savings are not accounting, they always go straight into new methods or procedures: a better overview of the applications enables projects to start faster, the time gained and the less effort is immediately put into sensible other efforts.

Online Safety Bill could pose risk to encryption technology used by Ukraine

The Online Safety Bill will give the regulator, Ofcom, powers to require communications companies to install technology, known as client-side scanning (CSS), to analyse the content of messages for child sexual abuse and terrorism content before they are encrypted. The Home Office maintains that client-side scanning, which uses software installed on a user’s phone or computer, is able to maintain communications privacy while policing messages for criminal content. But Hodgson told Computer Weekly that Element would have no choice but to withdraw its encrypted mobile phone communications app from the UK if the Online Safety Bill passed into law in its current form. Element supplies encrypted communications to governments, including the UK, France, Germany, Sweden and Ukraine. “There is no way on Earth that any of our customers would every consider that setup [client-side scanning], so obviously we wouldn’t put that into the enterprise product,” he said. “But it would also mean that we wouldn’t be able to supply a consumer secure messaging app in the UK. ...” he added.

The biggest data security blind spot: Authorization

When authorization is overlooked, companies have little to no visibility into who is accessing what. This makes it challenging to track access, identify unusual behavior, or detect potential threats. It also leads to having “overprivileged” users – a leading cause of data breaches according to many industry reports. Authorization oversight is critical when employees leave a company or change roles within the organization, as they might retain access to sensitive data they no longer need. If access rights never expire, unauthorized users have access to sensitive data. And with layoffs, the risk of data theft increases. The lack of proper authorization also puts companies at risk of non-compliance with privacy laws like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), which can result in significant penalties and reputational damage. Most organizations store sensitive data in the cloud, and the majority do so without any kind of encryption, making proper authorization all the more necessary.

AI can write your emails, reports, and essays. But can it express your emotions? Should it?

What do we lose when we outsource expressing our emotions to an AI chatbot? We've all heard that sitting on our emotions and feeling them is how we process them and get the intensity to pass. Speaking from the heart about a complex, heavy topic is one way we can feel true catharsis. AI can't do that processing for us. There's a common theme during periods of technological innovation that technology is supposed to do the mundane, annoying, dangerous, or insufferable tasks that humans hate doing. Many of us would sometimes prefer to avoid emotional processing. But experiencing complex emotions is what makes us human. And it's one of the few things an AI model as advanced as ChatGPT can't do. If you think of expressing emotions as less of an experience and more of a task, it might seem clever to automate them. But you can't conquer human emotions by passing the unsavory parts of them to a language model. Emotions are critical to the human experience, and denying them their place within yourself can lead to unhealthy coping mechanisms and poor physical health.

Benefits of data mesh might not be worth the cost

Data mesh might be a good framework for businesses that acquire companies but don't consolidate with them, thus wanting a decentralized approach to most or even all of the individual companies' data, Thanaraj said. It might also be a good option for large organizations that operate in multiple countries. These organizations' leaders might want to -- and are sometimes required to -- maintain local data autonomy. "That's where I see data mesh being a much more appropriate data architecture to apply," Thanaraj said. Still, questions remain about the long-term value of data mesh. In fact, Gartner labeled data mesh as "obsolete before plateau" in its 2022 "Hype Cycle for Data Management." Moreover, organizations could more readily use other better-defined and more easily implemented approaches to improve their data programs, Aiken said. Organizations have DataOps, existing data management frameworks and data governance practices at their disposal. If a data program doesn't follow best data management practices, data mesh won't improve it. "Those improvements could be achieved by other practices that don't have a buzz around them like data mesh," he said.

Do the productivity gains from generative AI outweigh the security risks?

In short, using generative AI to code is dangerous, but its efficiencies are so great that it will be extremely tempting for corporate executives to use it anyway. Bratin Saha, vice president for AI and ML Services at AWS, argues the decision doesn’t have to be one or the other. How so? Saha maintains that the efficiency benefits of coding with generative AI are so sky-high that there will be plenty of dollars in the budget for post-development repairs. That could mean enough dollars to pay for extensive security and functionality testing in a sandbox — both with automated software and expensive human talent — and the very attractive spreadsheet ROI. Software development can be executed 57% more efficiently with generative AI — at least the AWS flavor — but that efficiency gets even better if it replaces les experienced coders, Saha said in a Computerworld interview. “We have trained it on lots of high-quality code, but the efficiency depends on the task you are doing and the proficiency level,” Saha said, adding that a coder “who has just started programming won’t know the libraries and the coding.”

The staying power of shadow IT, and how to combat risks related to it

The problem, when it comes to uncovering shadow IT, is that information about what applications exist and who has access to them is spread across a company, in many different silos. It lives in the files of sometimes hundreds of business application owners – end-users in marketing, sales, customer service, finance, HR, product development, legal and other departments who acquired the applications. How do most organizations go about finding this data? They send emails, Microsoft Teams or Slack messages to employees asking them to notify IT if they have purchased or signed up for a free app, and who they’ve given access to (and hope everyone will respond). Then IT manually inputs any information they get into a spreadsheet. ... The data must be automatically and continuously collected and normalized. It must be made available to all SaaS management stakeholders, from the people who own and must therefore take responsibility for managing their apps, to IT leaders and admins, IT security teams, procurement managers, and more.

Quote for the day:

“Unless we are willing to go through that initial awkward stage of becoming a leader, we can’t grow.” -- Claudio Toyama