Daily Tech Digest - January 14, 2025

Why Your Business May Want to Shift to an Industry Cloud Platform

Industry cloud services typically embed the data model, processes, templates, accelerators, security constructs, and governance controls required by the adopter's industry, says Shriram Natarajan, a director at technology research and advisory firm ISG, in an online interview. "This [approach] allows faster development of new functionality, better security and governance, and an enhanced user/stakeholder experience." ... Enterprises spanning many industries can benefit significantly by moving to an industry cloud platform, Campbell says. "Businesses that are faced with many regulations and operational requirements can especially benefit from the specialized services of industry cloud platforms," he notes, adding that many industry cloud platforms are preconfigured to meet specific needs, which can help accelerate time to value. Many enterprises have a blinkered view of verticalized solutions, Natarajan says. "They tend to see the platforms they already have in-house and look for solutions that these platforms provide." He believes that enterprise IT and business teams can both benefit from looking at the landscape of verticalized industry cloud platforms.


FRAML Reality Check: Is Full Integration Really Practical?

While integration between AML and fraud teams is a desirable goal, experts say it should not automatically be viewed as the best solution. Paul Dunlop, insider risk consultant at a financial services firm, stressed the importance of collaboration over integration. "I am against the oversimplification of fraud and AML integration. Banking risks are multifaceted, involving not just fraud and AML but also cybersecurity, privacy and other domains," Dunlop said. "[The] integration decision should be assessed based on the bank's maturity level, regulatory environment and unique operational needs." "Cost should not be the sole factor behind this decision. One must assess operational and risk management trade-offs," he said. Meng Liu, senior analyst at Forrester, said that despite AML and fraud being two distinct functions at present, the trend toward more consolidated and integrated financial crime management is real. ... Despite the differences between fraud and AML teams, some use cases, such as scams, human trafficking and child exploitation, cry out for better collaboration, Mitchell said. "These require shared data and aligned strategies." But high-volume fraud detection such as check and card fraud is less suited for joint efforts due to operational complexity.


Ransomware abuses Amazon AWS feature to encrypt S3 buckets

In the attacks by Codefinger, the threat actors used compromised AWS credentials to locate victims' keys with 's3:GetObject' and 's3:PutObject' privileges, which allow these accounts to encrypt objects in S3 buckets through SSE-C. The attacker then generates an encryption key locally to encrypt the target's data. Since AWS doesn't store these encryption keys, data recovery without the attacker's key is impossible, even if the victim reports unauthorized activity to Amazon. "By utilizing AWS native services, they achieve encryption in a way that is both secure and unrecoverable without their cooperation," explains Halcyon. Next, the attacker sets a seven-day file deletion policy using the S3 Object Lifecycle Management API and drops ransom notes in all affected directories instructing the victim to pay a ransom to a given Bitcoin address in exchange for the custom AES-256 key. ... Halcyon also suggests that AWS customers set restrictive policies that prevent the use of SSE-C on their S3 buckets. Concerning AWS keys, unused keys should be disabled, active ones should be rotated frequently, and account permissions should be kept at the minimum level required.
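One such restrictive policy can be sketched as a bucket policy that denies any 's3:PutObject' request carrying an SSE-C header, using the documented 's3:x-amz-server-side-encryption-customer-algorithm' condition key. The Python sketch below builds such a policy document; the bucket name is a placeholder, and applying it via boto3 (shown in a comment) assumes suitable AWS credentials.

```python
import json

def deny_sse_c_policy(bucket_name: str) -> str:
    """Build a bucket policy that rejects any PutObject request using
    SSE-C (customer-provided keys), the mechanism abused in these attacks."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenySSECustomerKeys",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
                "Condition": {
                    # "Null": "false" means "this header IS present", so the
                    # Deny fires on any upload specifying a customer-supplied
                    # encryption algorithm.
                    "Null": {
                        "s3:x-amz-server-side-encryption-customer-algorithm": "false"
                    }
                },
            }
        ],
    }
    return json.dumps(policy, indent=2)

# The resulting document would be applied with, for example:
#   boto3.client("s3").put_bucket_policy(
#       Bucket="example-bucket",
#       Policy=deny_sse_c_policy("example-bucket"))
print(deny_sse_c_policy("example-bucket"))
```

Legitimate workloads that rely on SSE-C would obviously need an exception, so the deny should be scoped to buckets where customer-supplied keys are never expected.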


How AI and ML are transforming digital banking security

By continuously learning from new data, ML improves over time, adapting to the organization’s needs and the ever-evolving fraud tactics. This supports reducing false positives, ensuring legitimate transactions proceed smoothly while maintaining security. Predictive analytics also help identify potential threats before they materialize, and fraud scoring prioritizes high-risk activities for action. AI/ML-powered systems are scalable and effective against sophisticated threats, such as synthetic identity fraud and account takeovers, and can monitor multiple banking channels simultaneously. They automate detection, lowering operational costs and providing seamless customer experiences, thereby enhancing trust. However, nothing is a silver bullet: concerns such as algorithmic bias, data privacy, and the need for explainable models persist. Still, despite these potential hurdles, AI and ML are reshaping digital banking security, equipping financial institutions with proactive tools to counter fraud while safeguarding customer trust and regulatory compliance. ... Advanced technologies like AI and ML are helping institutions monitor transactions in real time, detecting anomalies and preventing fraud without directly involving users. Meanwhile, encryption and tokenization protect sensitive data, ensuring transactions remain secure in the background.
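Production fraud systems rely on trained ML models, but the core idea of fraud scoring can be illustrated with a toy sketch: score a new transaction by how far it deviates from a customer's historical spending pattern, then prioritize the highest scores for review. All numbers here are made up for illustration.

```python
from statistics import mean, stdev

def fraud_score(history: list[float], amount: float) -> float:
    """Toy anomaly score: how many standard deviations a new
    transaction amount sits from the customer's historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0 if amount == mu else float("inf")
    return abs(amount - mu) / sigma

history = [42.0, 38.5, 55.0, 47.2, 40.1, 51.3]  # past transaction amounts
print(round(fraud_score(history, 48.0), 2))   # typical purchase: low score
print(round(fraud_score(history, 900.0), 2))  # outlier: high score, flag for review
```

A real system would replace this single statistic with a model trained on many behavioural features (device, location, merchant category, velocity), which is what lets it keep false positives low while catching subtler patterns.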


The Evolution of Business Systems in the Digital Era

Systems of Record (SORs) serve as the foundation of organizational infrastructure, storing essential data such as customer information, financial transactions, and operational processes. These systems are designed to maintain structured and reliable records, ensuring data integrity, compliance, and security. They play a critical role in regulatory reporting, audits, and operational consistency. ... Systems of Engagement (SOEs) are the digital front doors of modern businesses, facilitating seamless and interactive communication with customers and employees. They go beyond simple data storage and retrieval, focusing on creating dynamic and personalized experiences across various channels. SOEs prioritize customer-centric approaches to delivering this communication. ... Systems of Intelligence (SOIs) represent the pinnacle of data-driven decision making. Built upon the foundation of Systems of Record (SORs) and Systems of Engagement (SOEs), SOIs leverage the power of artificial intelligence (AI) and machine learning (ML) to transform raw data into actionable insights. Unlike their predecessors, SOIs go beyond simply identifying patterns and trends. They possess the ability to predict future outcomes and even prescribe optimal courses of action.


Gen AI strategies put CISOs in a stressful bind

One of the most problematic gen AI issues CISOs face is how casual many gen AI vendors are being when selecting the data used to train their models, Townsend said. “That creates a security risk for the organization.” ... generative AI’s penetration into SaaS solutions makes this more problematic. “The attack surface for gen AI has changed. It used to be enterprise users using foundation models provided by the biggest providers. Today, hundreds of SaaS applications have embedded LLMs that are in use across the enterprise,” said Routh, who today serves as chief trust officer at security vendor Saviynt. “Software engineers have more than 1 million open source LLMs at their disposal on HuggingFace.com.” ... All this can take a psychological toll on CISOs, Townsend surmised. “When they feel overwhelmed, they shut down,” he said. “They do what they feel they can, and they will ignore what they feel that they can’t control.” ... “The bad actors are feverishly working to exploit these new technologies in malicious ways, so the CISOs are right to be concerned about how these new gen AI solutions and systems can be exploited,” Taylor said. 


How Enterprises and Startups Can Master AI With Smarter Data Practices

For enterprises, however, supplying AI systems with the data they need to thrive is several orders of magnitude more complicated. There are two main reasons for this: First, enterprises don’t have the information aggregation ability that exists in the consumer AI world. Consumer AI companies can use any public data on the web to train their AI models; think of it as an entire continent of information to which they have unfettered access. On the other hand, enterprise data exists within small, disparate, and oftentimes disconnected information archipelagos. Additionally, enterprises are working with many types of data, including relational data from operational systems, decades of poorly organized folders of documents, and audio and numeric data from payroll and financial systems. Further, enterprises must contend with additional layers of regulatory complexity regarding handling personal and private data. To build impactful AI tools, an enterprise’s algorithms must be fed or trained on specific data sets that span multiple sources, including the company’s human resources, finance, customer relationship management, supply chain management, and other systems.


Yes, you should use AI coding assistants—but not like that

AI is a must for software developers, but not because it removes work. Rather, it changes how developers should work. For those who just entrust their coding to a machine, well, the results are dire. ... Use AI wrong and things get worse, not better. Stanford researcher Yegor Denisov-Blanch notes that his team has found that AI increases both the amount of code delivered and the amount of code that needs reworking, which means that “actual ‘useful delivered code’ doesn’t always increase” with AI. In short, “some people manage to be less productive with AI.” So how do you ensure you get more done with coding assistants, not less? ... Here’s the solution: If you want to use AI coding assistants, don’t use them as an excuse not to learn to code. The robots aren’t going to do it for you. The engineers who will get the most out of AI assistants are those who know software best. They’ll know when to give control to the coding assistant and how to constrain that assistance (perhaps to narrow the scope of the problem they allow it to work on). Less-experienced engineers run the risk of moving fast but then getting stuck or not recognizing the bugs that the AI has created. ... AI can’t replace good programming, because it really doesn’t do good programming.


AI Tools Amplify API Security Threats Worldwide

The financial implications of API breaches prove substantial. According to Kong's report, 55% of organizations experienced an API security incident in the past year. Among those affected, 47% reported remediation costs exceeding $100,000, while 20% faced expenses surpassing $500,000. Gartner's research underscores this urgency, highlighting that API breaches typically result in ten times more leaked data than other types of security incidents. ... While AI technologies, particularly LLMs, drive unprecedented innovation, they introduce new vulnerabilities. These advanced tools enable attackers to exploit shadow APIs, bypass traditional defenses and manipulate API traffic in unexpected ways. The survey indicates that 84% of leaders predict AI and LLMs will increase the complexity of securing APIs over the next two to three years, emphasizing the need for immediate action. Despite 92% of organizations implementing measures to secure their APIs, 40% of leaders remain skeptical about whether their investments will adequately counter AI-driven risks. The regional disparity in preparedness stands out: 13% of U.S. organizations acknowledge taking no specific measures against AI threats, compared to 4% in the U.K.


From AI Assistants to Swarms of Thousands of Collaborating AI Agents: Is Your Architecture Ready?

Agentic AI is likely to create more issues in some areas than in others. The Agentic Architecture Framework identifies seven areas that will require more support in the form of new or updated frameworks, tools and techniques to support Agentic AI capability-building and architecture development. ... Agentic AI Strategy begins with defining a clear target state across the Agentic AI maturity dimensions and levels. This step establishes the organization’s AI aspirations and provides a benchmark for future transformation. Once the target state is identified, the next step involves conducting a gap analysis to determine the differences between the current capabilities assessed in the previous step and the organization’s ambition. With these gaps clarified, organizations can then focus on identifying and quantifying high-impact AI use cases that align with business objectives and support progression toward the target state. ... The Agentic AI Operating Model defines how AI systems, people, and processes work together to deliver value. It focuses on integrating AI into the organization’s core operations, ensuring that AI agents operate seamlessly within new and existing workflows and alongside human teams.



Quote for the day:

"No person can be a great leader unless he takes genuine joy in the successes of those under him." -- W. A. Nance

Daily Tech Digest - January 13, 2025

Artificial intelligence is optimising the entire M&A lifecycle by providing data-driven insights at every stage to enable informed decisions. Companies considering a merger or acquisition can use AI to understand market trends, performance of past deals, and other events of relevance to decide the way forward. On the potential candidates, big data, analytics and AI algorithms help process vast corporate information from a variety of sources – financial statements, analyst briefings, media reports, and more – to identify acquisition targets meeting their requirements. AI augments experts in due diligence, performing complex financial modelling or reviewing extensive legal documents and conducting risk analysis with higher accuracy in a fraction of the time compared to existing methods. ... When replacing a legacy enterprise system with a cloud-based solution, organisations can become operational within six to fourteen months, depending on size – much faster than in a traditional on-premise scenario. ... Differences in the merging companies’ technology architectures, tools and configurations make it extremely challenging to ascertain M&A security posture accurately, completely, and on time, even if the organisations are already on the same cloud.


Time for a change: Elevating developers’ security skills

With detection and remediation tools trivializing code security in the same environments they trained with, it’s not unreasonable to think that junior engineers could maintain the ability to perform this basic task as well as an understanding of the risks and consequences of the vulnerabilities they create as they draft code. For mid-level engineers, given the increased security proficiency earlier in their careers, it can now be expected that it is their responsibility to enforce code security within their teams before code is even reviewed by senior developers. ... For this effort, developers get a substantial boost to their skill set from this deepened security knowledge, which can be very valuable given the current state of cybersecurity hiring: a dearth of available talent, growing backlogs, and cybersecurity risks increasing in number and scope. Most importantly, they can achieve it without sacrificing productivity – detecting and remediating vulnerabilities can be done as easily as spellcheck finds spelling errors, and training can be short and tailored to what they’re working on, all within the integrated development environment (IDE) they work in every day. ... In addition, organizations can finally achieve the vision of a true shift-left by integrating security into every level of the SDLC and adopting the culture of security they’ve rightly been clamoring for.


How Your Digital Footprint Fuels Cyberattacks — and What to Do About It

If you are like most of us, you have been using digital services for years not realizing that you have been giving hackers access to the details of your personal life. On social media, we voluntarily share PII about who we are and where we are, using the location check-in features. ... Reducing your digital footprint doesn’t have to mean going off the grid. Here are some practical steps you can take:

- Use separate emails for different accounts: Don’t rely on one email for everything. This minimizes the damage if one account is hacked — it won’t lead hackers to all your other services.
- Review privacy settings regularly: Many apps have default settings that overshare your information. For instance, on apps like Strava or Telegram, you can turn off location tracking and limit who can contact you or add you to conversations. A quick check of these settings can significantly reduce your exposure.
- Avoid saving passwords in web browsers: Browsers prioritize convenience, not security. Instead, use a password manager. These tools securely store your passwords and can generate strong, unique ones for each account. This reduces the risk of malware or phishing attacks stealing your credentials directly from your browser.
- Think before you post: Share less on social media, especially in real time. This will make you harder to track and target.
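As a small illustration of what a password manager does under the hood, here is a sketch using Python's standard `secrets` module to generate a strong, unique password per account (the account names are placeholders):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a strong random password the way a password manager would,
    drawing from a cryptographically secure source of randomness."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A distinct password per account, never reused:
for account in ("mail", "banking", "social"):
    print(account, generate_password())
```

The point is not to roll your own manager, but that unique, machine-generated credentials per account are cheap to produce, so there is no reason to reuse one password across services.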


What is career catfishing, the Gen Z strategy to irk ghosting corporates?

After slogging through the exhausting process of job hunting — submitting countless applications, enduring endless rounds of interviews, and anxiously waiting for updates from unresponsive hiring managers — Gen Z workers have found a way to reclaim the balance of power. The rising trend, dubbed “career catfishing,” involves Gen Zs (those aged 27 and under) accepting job offers only to never show up on their first day. According to a survey by CV Genius, which polled 1,000 UK employees across generations, approximately 34 per cent of Zoomers admitted to engaging in career catfishing. ... Gen Z alone cannot shoulder the blame for the rise of such behaviours. Office ghosting — where one party cuts off communication without notice — is now a common phenomenon. ... Managers and owners identified entitlement, lack of motivation, lack of effort, and low productivity as reasons for terminating Gen Z employees. Some even referred to them as the snowflake generation and claimed they were too easily offended, which further justified their dismissal. The practice of career catfishing could further reinforce these stereotypes, making it even harder for young professionals to build trust with potential employers.


The next AI wave — agents — should come with warning labels

AI agents that use unclean data can introduce errors, inconsistencies, or missing values that make it difficult for the model to make accurate predictions or decisions. If the dataset has missing values for certain features, for instance, the model might incorrectly assume relationships or fail to generalize well to new data. An agent could also draw data from individuals without consent or use data that’s not anonymized properly, potentially exposing personally identifiable information. Large datasets with missing or poorly formatted data can also slow model training and cause it to consume more resources, making it difficult to scale the system. In addition, while AI agents must also comply with the European Union’s AI Act and similar regulations, innovation will quickly outpace those rules. Businesses must not only ensure compliance but also manage various risks, such as misrepresentation, policy overrides, misinterpretation, and unexpected behavior. “These risks will influence AI adoption, as companies must assess their risk tolerance and invest in proper monitoring and oversight,” according to a Forrester Research report — “The State Of AI Agents” — published in October. 


Euro-cloud Anexia moves 12,000 VMs off VMware to homebrew KVM platform

“We used to pay for VMware software one month in arrears,” he said. “With Broadcom we had to pay a year in advance with a two-year contract.” That arrangement, the CEO said, would have created extreme stress on company cashflow. “We would not be able to compete with the market,” he said. “We had customers on contracts, and they would not pay for a price increase.” Windbichler considered legal action, but felt the fight would have been slow and expensive. Anexia therefore resolved to migrate, a choice made easier by its ownership of another hosting business called Netcup that ran on a KVM-based platform. Another factor in the company’s favour was that it disguised the fact it ran VMware with an abstraction layer it called “Anexia Engine” that meant customers never saw Virtzilla’s wares and instead worked in a different interface to manage their VM fleets. ... The CEO thinks more companies will move from VMware. “I do not believe Broadcom will be successful,” he told The Register. “They lost all the trust. I have talked to so many VMware customers and they say they cannot work with a company like that.” Regulators are also interested in Broadcom’s practices, he said.


Preparing for AI regulation: The EU AI Act

Among the uses of AI that are banned under Article 5 are AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques. Article 5 also prohibits the use of AI systems that exploit any of the vulnerabilities of a person or a specific group of people due to their age, disability, or a specific social or economic situation. Systems that analyse social behaviours and then use this information in a detrimental way are also prohibited under Article 5 if their use goes beyond the original intent of the data collection. Other areas covered by Article 5 include the use of AI systems in law enforcement and biometrics. Industry observers describe the act as a “risk-based” approach to regulating artificial intelligence. ... Organisations operating in the EU will also need to take into account the Corporate Sustainability Reporting Directive (CSRD). Given the power-hungry nature of machine learning and AI inference, the extent to which AI is used may well be influenced by such regulations going forward. While it builds on existing regulations, as Mélanie Gornet and Winston Maxwell note in the HAL Open Science paper The European approach to regulating AI through technical standards, the AI Act takes a different route from these. Their observation is that the EU AI Act draws inspiration from European product safety rules.


Enterprise Data Architecture: A Decade of Transformation and Innovation

Privacy and compliance drive architectural decisions. The One Identity Graph we developed manages complex customer relationships while ensuring CCPA and GDPR compliance. This graph-based solution has prevented data breaches and reduced regulatory risks by implementing automated data lineage tracking, consent management, and real-time data masking. These features reinforce customer trust through transparent data handling and granular access controls. The business impact proves substantial. The platform’s real-time fraud detection analyzes transaction patterns across multiple channels, preventing fraudulent activities before completion. It optimizes inventory dynamically across thousands of locations by simultaneously processing point-of-sale data, supply chain updates, and external market factors. Supply chain disruptions trigger immediate alerts through a sophisticated event correlation engine, enabling preventive action before customer impact. Edge computing represents the next frontier. Processing data closer to its source minimizes latency, critical for IoT applications and real-time decisions. Our implementation reduces data transfer costs by 40% while improving response times for customer-facing applications. 
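The real-time masking and consent management described above can be sketched in miniature (this is a hypothetical illustration, not the actual One Identity Graph implementation): fields a customer has not consented to expose are redacted before a record leaves the platform, keeping just enough shape, such as the last four digits, for operational use.

```python
import re

def mask_record(record: dict, consented_fields: set) -> dict:
    """Toy real-time masking: values for fields outside the customer's
    consent set are redacted before the record is released downstream."""
    def redact(value: str) -> str:
        # Replace every character that has at least four word characters
        # after it, leaving a hint of the value's shape (e.g. last 4 digits).
        return re.sub(r"\w(?=\w{4})", "*", value)
    return {k: (v if k in consented_fields else redact(v))
            for k, v in record.items()}

customer = {"name": "Avery Smith", "card": "4111111111111111", "city": "Lyon"}
print(mask_record(customer, consented_fields={"city"}))
```

A production system would pair this with data lineage tracking, so that every downstream copy of a record can be traced back to the consent decision that shaped it.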


AI is set to transform education — what enterprise leaders can learn from this development

While AI tools show immense promise in addressing resource constraints, their adoption raises broader questions about the role of human connection in learning. Which brings us back to Unbound Academy. Students will spend two hours online each school morning working through AI-driven lessons in math, reading, and science. Tools like Khanmigo and IXL will personalize the instruction and analyze progress, adjusting the difficulty and content in real time to optimize learning outcomes. The charter application asserts that “this ensures that each student is consistently challenged at their optimal level, preventing boredom or frustration.” Unbound Academy’s model significantly reduces the role of human teachers. Instead, human “guides” provide emotional support and motivation while also leading workshops on life skills. What will students lose by spending most of their learning time with AI instead of human instructors, and how might this model reshape the teaching profession? The Unbound Academy model is already used in several private schools, and the results obtained there are cited to substantiate its claimed advantages. ... For any of this to happen, the industry needs action that matches the rhetoric.


6 ways continuous learning can advance your career

Joys said thinking critically is about learning how a new idea or innovation might be translated into the current organizational context. "At the end of the day, the company is writing a paycheck for you," he said. "Think about how new stuff provides business value." Joys said professionals also need to ensure the benefits of the things they introduce through their learning processes are tracked and traced. "That's about measuring those efforts to ensure you can say, 'Here's a new piece of technology. Here's how we'll measure how this technology lines up with our corporate strategy and vision.'" ... Worsley told ZDNET he likes to learn on the job rather than acquire new knowledge in the classroom. "I'm not a bookish person. I don't go out and read. I recognize that I need to learn specific things because I've got a problem to solve," he said. "I'll learn about it, get the right people talking, and get the solutions underway. Tell me something's impossible and I'll tell you it's not." ... Keith Woolley, chief digital and information officer at the University of Bristol, said the great thing about his job is that it's like a hobby. "I'm naturally interested in what I do. So, I read things around me without realizing I'm consuming other information," he said. "If you're excited about what you do, learning comes naturally because it's a genuine interest. Then learning happens when you don't expect it."



Quote for the day:

"Doing what you love is the cornerstone of having abundance in your life." -- Wayne Dyer

Daily Tech Digest - January 12, 2025

Data Architecture Trends in 2025

While unstructured data makes up the lion’s share of data in most companies (typically about 80%), structured data does its part to bulk up business’ storage needs. Sixty-four percent of organizations manage at least one petabyte of data, and 41% of organizations have at least 500 petabytes of data, according to the AI & Information Management Report. By 2028, global data creation is projected to grow to more than 394 zettabytes – and clearly enterprises will have more than their fair share of that. Time to open the door to the data lakehouse, which combines the capabilities of data lakes and data warehouses, simplifying data architecture and analytics with unified storage and processing of structured, unstructured, and semi-structured data. “Businesses are increasingly investing in data lakehouses to stay competitive,” according to MarketResearch, which sees the market growing at a 22.9% CAGR to more than $66 billion by 2033. ... “Through 2026, two-thirds of enterprises will invest in initiatives to improve trust in data through automated data observability tools addressing the detection, resolution, and prevention of data reliability issues,” according to Matt Aslett.


How Does a vCISO Leverage AI?

CISOs design and inform policy that shapes security at a company. They inform the priorities of their organizations’ cyberdefense deployment and design, develop, or otherwise acquire the tools needed to achieve the goals they set up. They implement tools and protections, monitor effectiveness, make adjustments, and generally ensure that security functions as desired. However, all that responsibility comes at immense costs, and CISOs are in high demand. It can be challenging to recruit and retain top-level talent for the role, and many smaller or growing organizations—and even some larger older ones—do not employ a traditional, full-time CISO. Instead, they often turn to vCISOs. This is far from a compromise, as vCISOs offer all of the same functionality as their traditional counterparts through an entire team of dedicated service providers rather than a single employee. Since vCISOs are available on a fractional basis, organizations only pay for specific services they need. ... As with all technological breakthroughs, AI is not without its risks and drawbacks. Thankfully, working with a vCISO allows organizations to take advantage of all the benefits of AI while also minimizing its potential downsides. A capable vCISO team doesn’t use AI or any other tool just for the sake of novelty or appearances; their choices are always strategic and risk-informed.


The Transformative Benefits of Enterprise Architecture

Enterprise Architecture review or development is essential for managing complexity, particularly when changes involve multiple systems with intricate interdependencies. ... Enterprise Architecture provides a structured approach to handle these complexities effectively. Often, key stakeholders, such as department heads, project managers, or IT leaders, identify areas of change required to meet new business goals. For example, an IT leader may highlight the need for system upgrades to support a new product launch or a department head might identify process inefficiencies impacting customer satisfaction. These stakeholders are integral to the change process, and the role of the architect is to: Identify and refine the requirements of the stakeholders; Develop architectural views that address concerns and requirements; Highlight trade-offs needed to reconcile conflicting concerns among stakeholders. Without Enterprise Architecture, it is highly unlikely that all stakeholder concerns and requirements will be comprehensively addressed. This can lead to missed opportunities, unanticipated risks, and inefficiencies, such as misaligned systems, redundant processes, or overlooked security vulnerabilities, all of which can undermine business goals and stakeholder trust.


Listen to your technology users — they have led to the most disruptive innovations in history

First, create a culture of open innovation that values insights from outside the organization. While the technical geniuses in your R&D department are experts in how to build something new, they aren’t the only authorities on what it is you should build. Our research suggests that it’s especially important to seek out user-generated disruption at times when customer needs are changing rapidly. Talk to your customers and create channels for dialogue and engagement. Most companies regularly survey users and conduct focus groups. But to identify truly disruptive ideas, you need to go beyond reactions to existing products and plumb unmet needs and pain points. Customer complaints also offer insight into how existing solutions fall short. AI tools make it easier to monitor user communities online and analyze customer feedback, reviews, and complaints. Keep your pulse on social media and online user communities where people share innovative ways to adapt existing products and wish lists for new functionalities. ... Lastly, explore co-creation initiatives that foster direct collaboration with user innovators. For instance, run a contest where customers submit ideas for new products or features, some of which could turn out to be truly disruptive. Or sponsor hackathons that bring together users with needs and technical experts to design solutions.


Guide to Data Observability

Data observability is critical for modern data operations because it keeps systems running efficiently by detecting anomalies, finding root causes, and proactively addressing data issues before they can impact business outcomes. Unlike traditional monitoring, which focuses only on system health or performance metrics, observability provides insight into why something is wrong and allows teams to understand their systems more deeply. In the digital age, where companies rely heavily on data-driven decisions, data observability isn’t only an operational concern but a critical business function. ... When we talk about data observability, we’re focusing on monitoring the data that flows through systems. This includes ensuring data integrity, reliability, and freshness across the lifecycle of the data. It’s distinct from database observability, which focuses more on the health and performance of the databases themselves. ... On the other hand, database observability is specifically concerned with monitoring the performance, health, and operations of a database system—for example, a SQL or MongoDB server. This includes monitoring query performance, connection pools, memory usage, disk I/O, and other technical aspects, ensuring the database is running optimally and serving requests efficiently.
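The freshness dimension described above is often implemented as a simple SLA check against each dataset's last-refresh timestamp. A minimal sketch in Python (the function name and the 60-minute SLA are illustrative assumptions, not from the guide):

```python
import datetime as dt

def check_freshness(last_updated, max_age_minutes, now=None):
    """Return True if the dataset was refreshed within its allowed window."""
    now = now or dt.datetime.now(dt.timezone.utc)
    return now - last_updated <= dt.timedelta(minutes=max_age_minutes)

# Example: a table last refreshed 90 minutes ago, against a 60-minute SLA
last = dt.datetime.now(dt.timezone.utc) - dt.timedelta(minutes=90)
print(check_freshness(last, max_age_minutes=60))  # False -> raise a freshness alert
```

In practice a check like this would run per table in a scheduler, alongside similar probes for row counts and schema drift, feeding the alerting layer that separates observability from passive monitoring.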


Data maturity and the squeezed middle – the challenge of going from good to great

Breaking through this stagnation does not require a complete overhaul. Instead, businesses can take small but decisive steps. First, they must shift their mindset from seeing data collection as an end in itself, to viewing it as a tool for creating meaningful customer interactions. This means moving beyond static metrics and broad segmentations to dynamic, real-time personalisation. The use of artificial intelligence (AI) can be transformative in this regard. Modern AI tools can analyse customer behaviour in real time, enabling businesses to respond with tailored content, promotions, and experiences. For instance, rather than relying on broad-brush email campaigns, companies can use AI-driven insights to craft (truly) hyper-personalised messages based on individual customer journeys. Such efforts not only improve conversion rates, but also build deeper customer loyalty. ... It’s important to never lose sight of the fact that data maturity is about people and culture as much as tech. Organisations need to foster a culture that values experimentation, learning, and continuous improvement. Behaviourally, this can be uncomfortable for slow-moving or cautious businesses and requires breaking down silos and encouraging cross-functional collaboration. 


Finding a Delicate Balance with AI Regulation and Innovation

The first focus needs to be on protecting individuals and diverse groups from the misuse of AI. We need to ensure transparency when AI is used, which in turn will limit the number of mistakes and biased outcomes, and when errors are still made, transparency will help rectify the situation. It is also essential that regulation tries to prevent AI from being used for illegal activity, including fraud, discrimination, faking documents and creating deepfake images and videos. It should be a requirement for companies of a certain size to have an AI policy in place that is publicly available for anyone to consult. The second focus should be protecting the environment. Due to the amount of energy needed to train the AI, store the data and deploy the technology once it’s ready for market, AI innovation comes at a great cost to the environment. It shouldn’t be a zero-sum game, and legislation should nudge companies to create AI that is respectful to our planet. The third and final key focus is data protection. Thankfully there is strong regulation around data privacy and management: the Data Protection Act in the UK and GDPR in the EU are good examples. AI regulation should work alongside existing data regulation and protect the huge steps that have already been taken.


Quantum Machine Learning for Large-Scale Data-Intensive Applications

Quantum machine learning (QML) represents a novel interdisciplinary field that merges principles of quantum computing with machine learning techniques. The foundation of quantum computing lies in the principles of quantum mechanics, which govern the behavior of subatomic particles and introduce phenomena such as superposition and entanglement. These quantum properties enable quantum computers to perform computations probabilistically, offering potential advantages over classical systems in specific computational tasks ... Integrating quantum machine learning (QML) with traditional machine learning (ML) models is an area of active research, aiming to leverage the advantages of both quantum and classical systems. One of the primary challenges in this integration is the necessity for seamless interaction between quantum algorithms and existing classical infrastructure, which currently dominates the ML landscape. Despite the resource-intensive nature of classical machine learning, which necessitates high-speed computer hardware to train state-of-the-art models, researchers are increasingly exploring the potential benefits of quantum computing to optimize and expedite these processes.


Generative Architecture Twins (GAT): The Next Frontier of LLM-Driven Enterprise Architecture

A Generative Architecture Twin (GAT) is a virtual, LLM-coordinated environment that mirrors — and continuously evolves with — your actual production architecture. ... Despite the challenges, Generative Architecture Twins represent an ambitious leap forward. They propose a world where architectural decisions are no longer static but evolve with real-time feedback loops; where compliance, security, and performance are integrated from day one rather than tacked on later; where EA documentation isn’t a dusty PDF but a living blueprint that changes as the system scales; and where enterprises can experiment with high-risk changes in a safe, cost-controlled manner, guided by autonomous AI that learns from every iteration. As we refine these concepts, expect to see the first prototypes of GAT in innovative startups or advanced R&D divisions of large tech enterprises. A decade from now, GAT may well be as ubiquitous as DevOps pipelines are today. Generative Architecture Twins (GAT) go beyond today’s piecemeal LLM usage and envision a closed-loop, AI-driven approach to continuous architectural design and validation. By combining digital twins, neuro-symbolic reasoning, and ephemeral simulation environments, GAT addresses long-standing EA challenges like stale documentation, repetitive compliance overhead, and costly rework.


Is 2025 the year of (less cloud) on-premises IT?

For an external view here outside of OWC, Vadim Tkachenko, technology fellow and co-founder at Percona thinks that whether or not we’ll see a massive wave of data repatriation take place in 2025 is still hard to say. “However, I am confident that it will almost certainly mark a turning point for the trend. Yes, people have been talking about repatriation off and on and in various contexts for quite some time. I firmly believe that we are facing a real inflection point for repatriation where the right combination of factors will come together to nudge organisations towards bringing their data back in-house to either on-premises or private cloud environments which they control, rather than public cloud or as-a-Service options,” he said. Tkachenko further states that companies across the private sector (and tech in particular) are tightening their purse strings considerably. “We’re also seeing more work on enhanced usability, ease of deployment, and of course, automation. The easier it becomes to deploy and manage databases on your own, the more organizations will have the confidence and capabilities needed to reclaim their data and a sizeable chunk of their budgets,” said the Percona man. It turns out then, cloud is still here and on-premises is still here and… actually, a hybrid world is typically the most prudent route to go down.



Quote for the day:

"The greatest leaders mobilize others by coalescing people around a shared vision." -- Ken Blanchard

Daily Tech Digest - January 11, 2025

Managing Third-Party Risks in the Software Supply Chain

Third-party risks such as compromised or faulty software updates, insecure hardware or software components, and insufficient security practices expand the attack surface of the organization. A security breach in one such third-party entity can ripple through and potentially lead to significant operational disruptions, financial losses and reputational damage to the organization. In view of this, securing not just their own organizations, but also the intricate web of suppliers, vendors and partners that make up their cyber supply chain, is not just an option but a necessity. Needless to say, managing third-party risks is becoming a big challenge for Chief Information Security Officers. Moreover, it may not be enough to manage third-party risks alone; fourth-party risks must be addressed as well. ... Mapping your most critical third-party relationships can identify weak links across your extended enterprise. But to be effective, it needs to go beyond third parties. In many cases, risks are buried within complex subcontracting arrangements and other relationships, within both your supply chain and vendor partnerships. Illuminating your extended network to see beyond third parties is critical to assessing, mitigating and monitoring the risks posed by sub-tier suppliers.


6G, AI and Quantum: Shaping the Future of Connectivity, Computing and Security

Beyond 6G, another transformative technology that will reshape industries in 2025 is quantum computing. This isn’t just about faster processing; it’s about tackling problems that are currently intractable for even the most powerful conventional systems. Think of the implications for AI training itself – imagine feeding massive, complex datasets into quantum-powered algorithms. The potential for breakthroughs in AI research and development is immense. This next-gen computational power is expected to solve complex problems that were previously deemed unsolvable, ushering in a new era of innovation and efficiency. The impact of these developments will be felt in a range of industries such as pharmaceuticals, cryptography and supply chains. For instance, in the pharmaceutical sector, quantum computing is set to speed up drug discovery. ... The rise of distributed cloud models and edge computing will also speed up services and provide value and innovation – placing cloud technology at the centre of every organisation’s strategic roadmap. Leveraging cloud infrastructure allows businesses to rapidly scale AI models, process enormous volumes of data in real-time, and generate actionable insights that facilitate intelligent decision-making. 


Advancing Platform Accountability: The Promise and Perils of DSA Risk Assessments

Multiple risk assessments fail to meaningfully consider risks related to problematic and harmful use and the design or functioning of their service and systems. Facebook’s 2024 risk assessment assesses physical and mental wellbeing in a crosscutting way but does not meaningfully consider risks related to excessive use or addiction. Other assessments more centrally consider physical and mental well-being risks. ... Snap’s risk assessment devotes seven pages to physical and mental well-being risks, but the assessment fails to consider how platform design could contribute to physical and mental well-being risks by incentivizing problematic or harmful use. Snap’s assessment is broadly focused on risks related to harmful content. The assessment describes mitigations to reduce the prevalence of such content that could impact physical and mental well-being – including auto-moderating for abusive content or ensuring recommender systems do not recommend violative content. This, of course, is important. However, the risk assessment and review of mitigations place almost no emphasis on risks of excessive use actually driven by Snap’s design. Snap’s focus on ephemeral content is presented as only a benefit – “conversations on Snapchat delete by default to reflect real-life conversations.”


Hard and Soft Skills Go Hand-in-Hand — These Are the Ones You Need to Sharpen This Year

To most effectively harness the power of AI in 2025, leaders need to understand it. DataCamp's Matt Crabtree describes AI literacy, at its most basic, as having the skills and competencies required to use AI technologies and applications effectively. But it's much more than that: Crabtree points out that AI literacy is also about enabling people to make informed decisions about how they're using AI, understand the implications of those uses and navigate the ethical considerations they present. For leaders, that means understanding biases that remain embedded in AI systems, privacy concerns, and the need for transparency and accountability. Say you're looking to integrate AI into your hiring process, as we have at my company, Jotform. It's important to understand that while it can be used for tasks like scheduling interviews, screening resumes for objective criteria or helping to organize candidate information, it should not be making hiring decisions for you. AI still has a significant bias problem, in addition to the many other ways in which it lacks the soft skills required for certain, human-only tasks. AI literacy is about understanding its shortcomings and navigating them in a way that is fair and equitable.


The Tech Blanket: Building a Seamless Tech Ecosystem

The days of disconnected platforms are over. In 2025, businesses will embrace platform interoperability to ensure that knowledge and data flow seamlessly across departments. Think of your organization’s technology as a woven blanket—each tool and system represents a thread that, when tightly interwoven, creates a strong, cohesive layer of support that covers your entire company. ... Building a seamless ecosystem begins with establishing a framework for managing distributed information. By creating a Knowledge Asset Center of Excellence, organizations can define norms for how data and knowledge are shared and governed. This approach fosters collaboration while allowing teams the flexibility to work in ways that suit their unique needs. ... As platforms become more interconnected, ensuring robust security becomes critical. Data breaches or inaccuracies in one tool can ripple across the ecosystem, creating significant risks. Leaders must prioritize tools with advanced security features, such as encryption and role-based access controls, to protect sensitive information while maintaining seamless interoperability. Strong data governance policies are also essential. By continuously monitoring data flow and usage, organizations can safeguard the integrity of their knowledge assets while promoting responsible collaboration.


WebAssembly and Containers’ Love Affair on Kubernetes

WebAssembly is showing promise on Kubernetes thanks to the fact that WebAssembly now meets the OCI registry standard as OCI artifacts. This enables Wasm to meet the Kubernetes standard and the OCI standard for containerization, specifically the OCI artifact format. It also involves compatibility with Kubernetes pods, storage interfaces and more. In that respect, it’s one step toward using Wasm as an alternative to containers. Additionally, through containerd, WebAssembly components can be distributed side by side with containers in Kubernetes environments. Zhou likened this to a drop-in replacement for the unit’s containers, integrating with tools such as Istio, Dapr and OpenTelemetry Collector. ... When running applications through WebAssembly as sidecars in a cluster, the two main challenges involve distribution and deployment, as Zhou outlined. A naive approach bundles the Wasm runtime into a container, but a better method offloads the Wasm runtime into the shim process in containerd. This approach allows Kubernetes orchestration of Wasm workloads. The OCI artifact format for WebAssembly, enabling Wasm components to use the same distribution mechanisms as containers, is responsible for the distribution part, Zhou said.


Training Employees for the Future with Digital Humans

Digital humans leverage a host of advanced technologies, large language models, retrieval-augmented generation, and intelligent AI orchestrators, among them. They also use unique techniques like kinesthetic learning, or “learning by doing,” alongside on-screen visuals to better illustrate more complicated topics. Note that digital humans are not like traditional chatbots that follow structured dialog trees. Instead, they can respond dynamically to the employee's inputs to ensure interactions are as lifelike as possible. ... By allowing employees to apply their training in real-world scenarios, digital humans help them retain more information in a shorter amount of time, reducing traditional training timelines significantly. As a result, businesses will spend less money and time reskilling personnel. The training possibilities with digital humans are vast, helping employees learn to use new technologies and systems. In a sales setting, personnel can practice using new generative AI-powered customer service tools while a digital human pretends to be a customer. Digital humans could also help engineers in the automotive space learn how to use machine-learning solutions or operate 3D printing machines.


From Silos to Synergy: Transforming Threat Intelligence Sharing in 2025

Put simply, organizations must break down the silos between ALL teams involved in security. This is not just about understanding the organization’s cyber hygiene; it is also about understanding the layers an attacker would have to get through to exploit the business and conduct potentially nefarious activities within it. Once this insight is gained, teams can work through requirements and align the CTI program for specific stakeholders. This means that both offense and defense teams work together, mapping out the attack path and gaining a better understanding of defense. Doing this also provides a better understanding of offense, as teams scout for what could be effective, going to the next layer to consider what might be vulnerable and whether mitigating controls are in place to provide additional prevention. ... In the past, teams working on-site together would document their work on a whiteboard. Now, with the advent of remote working, there are fewer opportunities to share in person, and a plethora of communication channels leads to knowledge fragmentation as different people use different tools such as Slack or other messaging platforms, or just share intelligence one-on-one.


Explained: The Multifaceted Nature of Digital Twins

Beyond operational improvements, digital twins also drive innovation at scale. Large enterprises with multiple R&D hubs can test new designs or processes in a virtual environment before deploying them globally. For example, an automotive company developing an electric vehicle can simulate how it will perform under different driving conditions, regulatory frameworks and consumer preferences in diverse markets - all within a digital twin. ... Building and maintaining a digital twin requires significant investment in IoT infrastructure, cloud computing, AI and skilled personnel. For many companies, particularly small and medium-sized enterprises, these costs can be prohibitive. A McKinsey study highlights that digital maturity - the ability to effectively integrate and utilize advanced technologies - is often a key barrier. Seventy-five percent of companies that have adopted digital-twin technologies are those that have achieved at least medium levels of complexity. Large enterprises can justify the cost of digital twins by applying them across multiple facilities or product lines, but for smaller companies, the benefits may not scale as effectively, making it harder to achieve a return on investment.


Design Patterns for Building Resilient Systems

You may have some parts of your system that are degrading performance and may be affecting cascading failures everywhere. So that means that when your client requests a specific part that’s working fine, it’s great, but you want to stop immediately what’s causing the fire. That way, you have different load balancing rules that I’ve defined here to say, okay, this part of our system is degrading performance; it’s starting to affect everything else, and it’s cascading failures. We’re just going to stop it so you can’t even make a request to this route because it’s the one causing all the issues. Having your clients handle that failure to that request gracefully can be incredibly important because then the rest of your system can still work. Maybe some particular routes you’re defining aren’t going to work; some parts of your system will just be unavailable, but it’s not taking down the entire thing. Ultimately, what I’m talking about there is bulkheads. ... Now, while the CrowdStrike incident didn’t directly affect me, it sure did indirectly because I knew about it right away from the alarms based on metrics. When used correctly within context, design patterns allow you to build a resilient system. Now, everything we had in place for resilience helped; they worked. But as always, when something like this happens, it makes you re-evaluate specific individual contexts. 
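The "stop making requests to the route that's causing the fire" behaviour described above is commonly implemented as a circuit breaker working alongside bulkheads. A minimal sketch in Python (the class shape and thresholds are illustrative assumptions, not from the talk):

```python
import time

class CircuitBreaker:
    """Stop sending requests to a route that keeps failing, so one degraded
    dependency cannot cascade failures through the rest of the system."""

    def __init__(self, failure_threshold=3, reset_after=30.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after  # seconds before allowing a trial request
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed (healthy)

    def allow_request(self):
        if self.opened_at is None:
            return True
        # After a cool-down, let one trial request through ("half-open")
        return time.monotonic() - self.opened_at >= self.reset_after

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()
```

Clients wrap each call to the degraded route with `allow_request()` and fail fast when it returns False, which is exactly the graceful failure handling that keeps the rest of the system available.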



Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe

Daily Tech Digest - January 10, 2025

Meta puts the ‘Dead Internet Theory’ into practice

In the old days, when Meta was called Facebook, the company wrapped every new initiative in the warm metaphorical blanket of “human connection”—connecting people to each other. Now, it appears Meta wants users to engage with anyone or anything—real or fake doesn’t matter, as long as they’re “engaging,” which is to say spending time on the platforms and money on the advertised products and services. In other words, Meta has so many users that the only way to continue its previous rapid growth is to build users out of AI. The good news is that Meta’s “Dead Internet” projects are not going well. ... Meta is testing a program called “Creator AI,” which enables influencers to create AI-generated bot versions of themselves. These bots would be designed to look, act, sound, and write like the influencers who made them, and would be trained on the wording of their posts. The influencer bots would engage in interactive direct messages and respond to comments on posts, fueling the unhealthy parasocial relationships millions already have with celebrities and influencers on Meta platforms. The other “benefit” is that the influencers could “outsource” fan engagement to a bot. ... “We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do,” Connor Hayes, vice president of product for generative AI at Meta, said


Experts Highlight Flaws within Government’s Data Request Mandate Under DPDP Rules 2025

Tech Lawyer Varun Sen Bahl also points out the absence of an appellate mechanism for such ‘calls for information’ by the Central government, explaining that such an appeal process only extends against orders of the Data Protection Board. He explains, “This is problematic because it leaves Data Fiduciaries and Data Principals with no clear recourse against excessive data collection requests made under Section 36 read with Rule 22“. Bahl also notes that the provision lacks specific mention of guardrails like the European Union’s data minimisation principle under the General Data Protection Regulation (GDPR) while furnishing such information requests. ... Roy argues that the compliance burdens on Data Fiduciaries will increase and aggravate through sweeping requests and by invoking the non-disclosure clause. To explain, he cites the case of the Razorpay-AltNews situation in 2022, when the Government accessed the names and transaction details of the news platform’s donors via Razorpay ... To ensure that government officers and agencies don’t abuse this provision, Roy explains that “Fiduciaries must [as part of corporate governance] give periodic reports of the number of such demands”. Similarly, law enforcement and other agencies should also submit periodic reports of such requests to the Data Protection Board comprising details of cases where the non-disclosure clause is invoked.


How Edge Computing can Give OEMs a Competitive Advantage

Latency matters in warehouse automation too. Performing predictive maintenance on a shoe sorter, for example, could require real-time monitoring of actuators that do diversions every 40 milliseconds. Component-level computing power allows the system to respond to changing conditions with speed and efficiency levels that simply wouldn’t be possible with a cloud-based system. ... Edge components can also communicate with a system’s programmable logic controllers (PLCs), making their data immediately available to end users. Supporting software on the customer’s local network interprets this information, enabling predictive maintenance and other real-time insights while tracking historical trends over time. ... Edge technology enables you to build assets that deliver higher utilization to your customers. Much of this benefit comes from the greater efficiencies of predictive maintenance. Users have less downtime because unnecessary service is reduced or eliminated, and many problems can be resolved before they cause unplanned shutdowns. Smart components can also deliver more process consistency. Ordinarily, parts degrade over time, gradually losing speed and/or power. With edge capabilities, they can continuously adapt to changing conditions, including varying parcel weights and normal wear.
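A component-level check like the 40-millisecond shoe-sorter example above can be as simple as watching the actuator's average cycle time for drift, computed locally with no cloud round-trip. A hypothetical sketch in Python (the function name and 5% drift threshold are assumptions, not from the article):

```python
from statistics import mean

def needs_maintenance(cycle_times_ms, expected_ms=40.0, max_drift_pct=5.0):
    """Flag an actuator whose average diversion cycle has drifted more than
    max_drift_pct from its expected interval -- a simple edge-side signal
    for predictive maintenance."""
    drift_pct = abs(mean(cycle_times_ms) - expected_ms) / expected_ms * 100
    return drift_pct > max_drift_pct

print(needs_maintenance([40.1, 39.8, 40.3, 40.0]))  # healthy actuator -> False
print(needs_maintenance([43.0, 44.1, 43.5, 44.0]))  # slowing down -> True
```

Running checks like this on the component itself is what lets the system react within a few cycles, rather than waiting on a round trip to a cloud service.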


Have we reached the end of ‘too expensive’ for enterprise software?

LLMs are now changing the way companies approach problems that are difficult or impossible to solve algorithmically, although the term “language” in Large Language Models is misleading. ... GenAI enables a variety of features that were previously too complex, too expensive, or completely out of reach for most organizations because they required investments in customized ML solutions or complex algorithms. ... Companies need to recognize generative AI for what it is: a general-purpose technology that touches everything. It will become part of the standard software development stack, as well as an integral enabler of new or existing features. Ensuring the future viability of your software development requires not only acquiring AI tools for software development but also preparing infrastructure, design patterns and operations for the growing influence of AI. As this happens, the role of software architects, developers, and product designers will also evolve. They will need to develop new skills and strategies for designing AI features, handling non-deterministic outputs, and integrating seamlessly with various enterprise systems. Soft skills and collaboration between technical and non-technical roles will become more important than ever, as pure hard skills become cheaper and more automatable.


Is prompt engineering a 'fad' hindering AI progress?

Motivated by the belief that "a well-crafted prompt is essential for obtaining accurate and relevant outputs from LLMs," aggressive AI users -- such as ride-sharing service Uber -- have created whole disciplines around the topic. And yet, there is a reasoned argument to be made that prompts are the wrong interface for most users of gen AI, including experts. "It is my professional opinion that prompting is a poor user interface for generative AI systems, which should be phased out as quickly as possible," writes Meredith Ringel Morris, principal scientist for Human-AI Interaction for Google's DeepMind research unit, in the December issue of computer science journal Communications of the ACM. Prompts are not really "natural language interfaces," Morris points out. They are "pseudo" natural language, in that much of what makes them work is unnatural. ... In place of prompting, Morris suggests a variety of approaches. These include more constrained user interfaces with familiar buttons to give average users predictable results; "true" natural language interfaces; or a variety of other "high-bandwidth" approaches such as "gesture interfaces, affective interfaces (that is, mediated by emotional states), direct-manipulation interfaces


Building Resilience Into Cyber-Physical Systems Has Never Been This Mission-Critical

In our quest for cyber resilience, we sometimes—mistakenly—fixate on hypothetical doomsday scenarios. While this apocalyptic and fear-based thinking can be an instinctual response to the threats we face, it is not realistic or helpful. Instead, we must champion the progress, even incremental, that is achievable through focused, pragmatic measures—like cyber insurance. By reframing discussions around tangible outcomes such as financial stability and public safety, we can cultivate a clearer sense of priorities. Regulatory frameworks may eventually align incentives towards better cybersecurity practices, but in the interim, transferring risk via a measure like cyber insurance offers a potent mechanism to enhance visibility into risk mitigation strategies and implement better cyber hygiene accordingly. By quantifying potential losses and incentivizing proactive security measures, cyber insurance can catalyze a necessary, and overdue cultural shift towards resilience-oriented practices—and a safer world. We stand at a pivotal moment in American critical infrastructure cybersecurity. As hackers threaten to sabotage our vital systems for ransom, the financial damages ensued from incidents like Halliburton oblige us to stay alert and act proactively. 


Don't Fall Into the 'Microservices Are Cool' Trap and Know When to Stick to Monolith Instead

Over time, as monolith applications become less and less maintainable, some teams decide that the only way to solve the problem is to start refactoring by breaking their application into microservices. Other teams make this decision just because "microservices are cool." This process takes a lot of time and sometimes brings even more maintenance overhead. Before going down this path, it's crucial to carefully weigh the pros and cons and ensure you've truly reached the limits of your current monolith architecture. And remember, it is easier to break than to build. ... As you can see, the modular monolith is the way to get the best of both worlds. It is like running independent microservices inside a single monolith while avoiding the collateral microservices overhead. One limitation is that you cannot scale different modules independently: you will have as many monolith instances as required by the most loaded module, which may lead to excessive resource consumption. The other drawback is the limitation on using different technologies. ... When running a monolith application, you can usually maintain a simpler infrastructure. Options like virtual machines or PaaS solutions (such as AWS EC2) will suffice. Also, you can handle much of the scaling, configuration, upgrades, and monitoring manually or with simple tools.
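The "independent microservices inside a single monolith" idea boils down to modules that interact only through narrow, explicit interfaces, so any one of them could later be carved out into a real service. A hypothetical Python sketch (module names are illustrative, not from the article):

```python
# Each "module" exposes a narrow public interface; modules never reach into
# each other's internals, so any module could later be split into a service.

class BillingModule:
    def charge(self, order_id, amount):
        # In a real system this would call a payment provider.
        return f"charged {amount} for {order_id}"

class OrdersModule:
    def __init__(self, billing):
        self._billing = billing  # depends on the interface, not the internals
        self._orders = {}

    def place_order(self, order_id, amount):
        self._orders[order_id] = amount
        return self._billing.charge(order_id, amount)

billing = BillingModule()
orders = OrdersModule(billing)
print(orders.place_order("A-1", 99))  # an in-process call, no network hop
```

Because the call between modules is an in-process function call rather than a network request, you keep monolith simplicity while preserving the option to extract a module later.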


SEC rule confusion continues to put CISOs in a bind a year after a major revision

“There is so much fear out there right now because there is a lack of clarity,” Sullivan told CSO. “The government is regulating through enforcement actions, and we get incomplete information about each case, which leads to rampant speculation.” As things stand, CISOs and their colleagues must chart a tricky course in meeting reporting requirements in the event of a cyber security incident or breach, Shusko says. That means anticipating the need to deal with reporting requirements by making compliance preparation part of any incident response plan, Shusko says. If they must make a cyber incident disclosure, companies should attempt to be compliant and forthcoming while seeking to avoid releasing information that could inadvertently point towards unresolved security shortcomings that future attackers might be able to exploit. ... Given that clarity around disclosure isn’t always straightforward, there is no real substitute for preparedness, and that makes it essential to practise situations that would require disclosure through tabletops and other exercises, according to Simon Edwards, chief exec of security testing firm SE Labs. “Speaking as someone who is invested heavily in the security of my company, I’d say that the most obvious and valuable thing a CISO can do is roleplay through an incident.”


How adding capacity to a network could reduce IT costs

Have you heard the phrase “bandwidth economy of scale”? It’s a sophisticated way of saying that the cost per bit to move a lot of bits is less than the cost to move a few. Over the decades in which information technology evolved from punched cards to PCs and mobile devices, we’ve taken advantage of this principle by concentrating traffic from the access edge inward onto fast trunks. ... Higher capacity throughout the network means less congestion. It’s old-think, they say, to assume that if you have faster LAN connections to users and servers, you’ll admit more traffic and congest trunks. “Applications determine traffic,” one CIO pointed out. “The network doesn’t suck data into it at the interface. Applications push it.” Faster connections mean less congestion, which means fewer complaints, and more alternate paths that avoid traffic delay and loss, which also reduces complaints. In fact, anything that creates packet loss, outages, or even latency creates complaints, and addressing complaints is a big source of opex. The complexity comes in because network speed affects user and application quality of experience in multiple ways, beyond the obvious congestion impacts. When a data packet passes through a switch or router, it’s exposed to two things that can delay it.
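One of the per-packet delays the excerpt alludes to, serialization delay (the time to clock a packet onto the wire), can be computed directly and shows why faster links reduce delay independent of congestion. A quick back-of-the-envelope sketch, assuming a standard 1500-byte Ethernet frame; the link speeds are illustrative:

```python
# Serialization delay shrinks linearly with link speed: the same packet
# takes a tenth of the time to put on a 10x faster wire.

def serialization_delay_us(packet_bytes: int, link_bps: float) -> float:
    """Microseconds needed to transmit one packet at a given link rate."""
    return packet_bytes * 8 / link_bps * 1e6

PACKET = 1500  # a typical full-size Ethernet frame, in bytes
for gbps in (1, 10, 100):
    delay = serialization_delay_us(PACKET, gbps * 1e9)
    print(f"{gbps:>3} Gbps: {delay:.2f} us per packet")
```

At 1 Gbps a 1500-byte frame takes 12 microseconds to serialize; at 100 Gbps, 0.12. Multiplied across every hop and every packet, this is one of the non-congestion ways higher capacity improves quality of experience.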


Ephemeral environments in cloud-native development

An emerging trend in cloud computing is using ephemeral environments for development and testing. Ephemeral environments are temporary, isolated spaces created for specific projects. They allow developers to swiftly spin up an environment, conduct testing, and then dismantle it once the task is complete. ... At first, ephemeral environments sound ideal. The capacity for rapid provisioning aligns seamlessly with modern agile development philosophies. However, deploying these spaces is fraught with complexities that require thorough consideration before wholeheartedly embracing them. ... The initial setup and ongoing management of ephemeral environments can still incur considerable costs, especially in organizations that lack effective automation practices. If one must spend significant time and resources establishing these environments and maintaining their life cycle, the expected savings can quickly diminish. Automation isn’t merely a buzzword; it requires investment in tools, training, and sometimes a cultural shift within the organization. Many enterprises may still be tethered to operational costs that can potentially undermine the presumed benefits. This seems to be a systemic issue with cloud-native anything.
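The spin-up/test/dismantle lifecycle described above maps naturally onto a context-manager shape. A minimal sketch of that pattern: a real implementation would provision cloud resources (via Terraform, a platform API, or similar) rather than a temporary directory, which stands in here purely to illustrate the create/use/destroy guarantee.

```python
# Ephemeral-environment lifecycle modeled as a context manager:
# provision on entry, always dismantle on exit — even if tests fail.
import shutil
import tempfile
from contextlib import contextmanager
from pathlib import Path


@contextmanager
def ephemeral_environment(name: str):
    """Spin up an isolated workspace, yield it, and tear it down."""
    root = Path(tempfile.mkdtemp(prefix=f"env-{name}-"))
    try:
        yield root            # the environment exists only inside this block
    finally:
        shutil.rmtree(root)   # teardown runs even when the body raises


with ephemeral_environment("feature-x") as env:
    (env / "config.yaml").write_text("debug: true\n")
    assert (env / "config.yaml").exists()
# After the block, the environment is gone.
```

The point the article makes still applies: the context manager is the easy part. The investment lies in automating what happens inside `ephemeral_environment` for real infrastructure, and that is where the presumed savings can erode.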



Quote for the day:

"The best leader brings out the best in those he has stewardship over." -- J. Richard Clarke