Daily Tech Digest - January 13, 2025

Artificial intelligence is optimising the entire M&A lifecycle by providing data-driven insights at every stage to enable informed decisions. Companies considering a merger or acquisition can use AI to understand market trends, the performance of past deals, and other relevant events before deciding the way forward. When evaluating potential candidates, big data, analytics and AI algorithms help process vast amounts of corporate information from a variety of sources – financial statements, analyst briefings, media reports, and more – to identify acquisition targets that meet their requirements. AI augments experts in due diligence, performing complex financial modelling, reviewing extensive legal documents, and conducting risk analysis with higher accuracy in a fraction of the time compared to existing methods. ... When replacing a legacy enterprise system with a cloud-based solution, organisations can become operational within six to fourteen months, depending on size, which is much faster than a traditional on-premise deployment. ... Differences in the merging companies’ technology architectures, tools and configurations make it extremely challenging to ascertain M&A security posture accurately, completely, and on time, even if the organisations are already on the same cloud.


Time for a change: Elevating developers’ security skills

With detection and remediation tools making code security routine in the same environments where they trained, it’s not unreasonable to expect junior engineers to perform this basic task reliably while understanding the risks and consequences of the vulnerabilities they might introduce as they draft code. For mid-level engineers, given the security proficiency gained earlier in their careers, it can now be expected that they take responsibility for code security within their teams before code is even reviewed by senior developers. ... In return for this effort, developers get a substantial boost to their skill set from this deepened security knowledge, which is especially valuable given the current state of cybersecurity hiring: a dearth of available talent, growing backlogs, and cybersecurity risks that are increasing in both number and scope. Most importantly, they can achieve it without sacrificing productivity – detecting and remediating vulnerabilities can be as easy as spellcheck finding spelling errors, and training can be short and tailored to what they’re working on, all within the integrated development environment (IDE) they use every day. ... In addition, organizations can finally achieve the vision of true shift-left by integrating security into every level of the SDLC and adopting the culture of security they’ve rightly been clamoring for.
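As a concrete illustration of the kind of flaw such in-IDE tools surface, here is a hypothetical Python sketch of a SQL injection bug and its parameterized-query remediation (the table and function names are invented for the example):

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # so input like "x' OR '1'='1" changes the query's meaning.
    cur = conn.execute(f"SELECT id, name FROM users WHERE name = '{username}'")
    return cur.fetchall()

def find_user_safe(conn, username):
    # Remediated: a parameterized query treats the input purely as data.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")
    # The injected input dumps every row via the unsafe query...
    print(len(find_user_unsafe(conn, "x' OR '1'='1")))  # 2
    # ...but matches nothing once parameterized.
    print(len(find_user_safe(conn, "x' OR '1'='1")))    # 0
```

The fix is a one-line change, which is exactly the spellcheck-like remediation experience the article describes.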


How Your Digital Footprint Fuels Cyberattacks — and What to Do About It

If you are like most of us, you have been using digital services for years without realizing that you have been giving hackers access to the details of your personal life. On social media, we voluntarily share PII about who we are and where we are, using location check-in features. ... Reducing your digital footprint doesn’t have to mean going off the grid. Here are some practical steps you can take:
- Use separate emails for different accounts: Don’t rely on one email for everything. This minimizes the damage if one account is hacked – it won’t lead hackers to all your other services.
- Review privacy settings regularly: Many apps have default settings that overshare your information. For instance, on apps like Strava or Telegram, you can turn off location tracking and limit who can contact you or add you to conversations. A quick check of these settings can significantly reduce your exposure.
- Avoid saving passwords in web browsers: Browsers prioritize convenience, not security. Instead, use a password manager. These tools securely store your passwords and can generate strong, unique ones for each account. This reduces the risk of malware or phishing attacks stealing your credentials directly from your browser.
- Think before you post: Share less on social media, especially in real time. This will make you harder to track and target.
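On the password-manager advice, the strong, unique passwords such tools generate come from a cryptographically secure random source; a minimal Python sketch of the same idea:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    # Draw each character from letters, digits, and punctuation using a
    # CSPRNG (secrets), which is essentially what a password manager's
    # generator does under the hood.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())    # e.g. 'k#9Qv!...' (different every run)
```

Unlike the `random` module, `secrets` is designed for security-sensitive use, so the output is not predictable from previous outputs.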


What is career catfishing, the Gen Z strategy to irk ghosting corporates?

After slogging through the exhausting process of job hunting — submitting countless applications, enduring endless rounds of interviews, and anxiously waiting for updates from unresponsive hiring managers — Gen Z workers have found a way to reclaim the balance of power. The rising trend, dubbed “career catfishing,” involves Gen Zs (those aged 27 and under) accepting job offers only to never show up on their first day. According to a survey by CV Genius, which polled 1,000 UK employees across generations, approximately 34 per cent of Zoomers admitted to engaging in career catfishing. ... Gen Z alone cannot shoulder the blame for the rise of such behaviours. Office ghosting — where one party cuts off communication without notice — is now a common phenomenon. ... Managers and owners identified a sense of entitlement, lack of motivation and effort, and poor productivity as reasons for terminating Gen Z employees. Some even referred to them as the snowflake generation and claimed they were too easily offended, which further justified their dismissal. The practice of career catfishing could further reinforce these stereotypes, making it even harder for young professionals to build trust with potential employers.


The next AI wave — agents — should come with warning labels

AI agents that use unclean data can introduce errors, inconsistencies, or missing values that make it difficult for the model to make accurate predictions or decisions. If the dataset has missing values for certain features, for instance, the model might incorrectly assume relationships or fail to generalize well to new data. An agent could also draw data from individuals without consent or use data that’s not anonymized properly, potentially exposing personally identifiable information. Large datasets with missing or poorly formatted data can also slow model training and cause it to consume more resources, making it difficult to scale the system. In addition, while AI agents must comply with the European Union’s AI Act and similar regulations, innovation will quickly outpace those rules. Businesses must not only ensure compliance but also manage various risks, such as misrepresentation, policy overrides, misinterpretation, and unexpected behavior. “These risks will influence AI adoption, as companies must assess their risk tolerance and invest in proper monitoring and oversight,” according to a Forrester Research report — “The State Of AI Agents” — published in October.
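A pre-flight missing-value check of the kind this paragraph implies is straightforward to sketch; the field names below are invented for illustration:

```python
def missing_value_report(records, fields):
    """Report the fraction of missing (None) values per field, a common
    sanity check before feeding data to a model or agent."""
    report = {}
    for field in fields:
        missing = sum(1 for r in records if r.get(field) is None)
        report[field] = missing / len(records)
    return report

rows = [
    {"age": 34,   "income": 72000},
    {"age": None, "income": 58000},
    {"age": 29,   "income": None},
    {"age": 41,   "income": 61000},
]
print(missing_value_report(rows, ["age", "income"]))
# {'age': 0.25, 'income': 0.25}
```

A report like this lets a pipeline reject or impute fields whose missing-value rate exceeds a threshold before training ever starts.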


Euro-cloud Anexia moves 12,000 VMs off VMware to homebrew KVM platform

“We used to pay for VMware software one month in arrears,” he said. “With Broadcom we had to pay a year in advance with a two-year contract.” That arrangement, the CEO said, would have created extreme stress on company cashflow. “We would not be able to compete with the market,” he said. “We had customers on contracts, and they would not pay for a price increase.” Windbichler considered legal action, but felt the fight would have been slow and expensive. Anexia therefore resolved to migrate, a choice made easier by its ownership of another hosting business called Netcup that ran on a KVM-based platform. Another factor in the company’s favour was an abstraction layer it called “Anexia Engine”, which disguised the fact it ran VMware: customers never saw Virtzilla’s wares and instead managed their VM fleets through a different interface. ... The CEO thinks more companies will move from VMware. “I do not believe Broadcom will be successful,” he told The Register. “They lost all the trust. I have talked to so many VMware customers and they say they cannot work with a company like that.” Regulators are also interested in Broadcom’s practices, he said.


Preparing for AI regulation: The EU AI Act

Among the uses of AI that are banned under Article 5 are AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques. Article 5 also prohibits the use of AI systems that exploit any of the vulnerabilities of a person or a specific group of people due to their age, disability, or a specific social or economic situation. Systems that analyse social behaviours and then use this information in a detrimental way are also prohibited under Article 5 if their use goes beyond the original intent of the data collection. Other areas covered by Article 5 include the use of AI systems in law enforcement and biometrics. Industry observers describe the act as a “risk-based” approach to regulating artificial intelligence. ... Organisations operating in the EU will also need to take the Corporate Sustainability Reporting Directive (CSRD) into account. Given the power-hungry nature of machine learning and AI inference, the extent to which AI is used may well be influenced by such regulations going forward. While the AI Act builds on existing regulations, it takes a different route from them, as Mélanie Gornet and Winston Maxwell note in the HAL open science paper The European approach to regulating AI through technical standards. Their observation is that the EU AI Act draws inspiration from European product safety rules.


Enterprise Data Architecture: A Decade of Transformation and Innovation

Privacy and compliance drive architectural decisions. The One Identity Graph we developed manages complex customer relationships while ensuring CCPA and GDPR compliance. This graph-based solution has prevented data breaches and reduced regulatory risks by implementing automated data lineage tracking, consent management, and real-time data masking. These features reinforce customer trust through transparent data handling and granular access controls. The business impact proves substantial. The platform’s real-time fraud detection analyzes transaction patterns across multiple channels, preventing fraudulent activities before completion. It optimizes inventory dynamically across thousands of locations by simultaneously processing point-of-sale data, supply chain updates, and external market factors. Supply chain disruptions trigger immediate alerts through a sophisticated event correlation engine, enabling preventive action before customer impact. Edge computing represents the next frontier. Processing data closer to its source minimizes latency, critical for IoT applications and real-time decisions. Our implementation reduces data transfer costs by 40% while improving response times for customer-facing applications. 
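The article does not show how its real-time data masking works, but a common pattern is deterministic tokenization of PII fields; a hypothetical sketch (the email format and token scheme here are assumptions for illustration, not the One Identity Graph's actual method):

```python
import hashlib

def mask_email(email: str) -> str:
    # Replace the local part with a deterministic token derived from a
    # hash, so masked records can still be joined on the same key,
    # while keeping the domain for aggregate analytics.
    local, _, domain = email.partition("@")
    token = hashlib.sha256(local.encode()).hexdigest()[:8]
    return f"user_{token}@{domain}"

print(mask_email("jane.doe@example.com"))
```

Because the token is derived from a hash rather than stored in a lookup table, the same input always masks to the same value, which is what makes consent-aware analytics possible on masked data. A production system would add a secret salt so the hash cannot be reversed by brute force.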


AI is set to transform education — what enterprise leaders can learn from this development

While AI tools show immense promise in addressing resource constraints, their adoption raises broader questions about the role of human connection in learning. Which brings us back to Unbound Academy. Students will spend two hours online each school morning working through AI-driven lessons in math, reading, and science. Tools like Khanmigo and IXL will personalize the instruction and analyze progress, adjusting the difficulty and content in real time to optimize learning outcomes. The school’s charter application asserts that “this ensures that each student is consistently challenged at their optimal level, preventing boredom or frustration.” Unbound Academy’s model significantly reduces the role of human teachers. Instead, human “guides” provide emotional support and motivation while also leading workshops on life skills. What will students lose by spending most of their learning time with AI instead of human instructors, and how might this model reshape the teaching profession? The Unbound Academy model is already used in several private schools, and the results obtained there are cited to substantiate its claimed advantages. ... For any of this to happen, the industry needs action that matches the rhetoric.


6 ways continuous learning can advance your career

Joys said thinking critically is about learning how a new idea or innovation might be translated into the current organizational context. "At the end of the day, the company is writing a paycheck for you," he said. "Think about how new stuff provides business value." Joys said professionals also need to ensure the benefits of the things they introduce through their learning processes are tracked and traced. "That's about measuring those efforts to ensure you can say, 'Here's a new piece of technology. Here's how we'll measure how this technology lines up with our corporate strategy and vision.'" ... Worsley told ZDNET he likes to learn on the job rather than acquire new knowledge in the classroom. "I'm not a bookish person. I don't go out and read. I recognize that I need to learn specific things because I've got a problem to solve," he said. "I'll learn about it, get the right people talking, and get the solutions underway. Tell me something's impossible and I'll tell you it's not." ... Keith Woolley, chief digital and information officer at the University of Bristol, said the great thing about his job is that it's like a hobby. "I'm naturally interested in what I do. So, I read things around me without realizing I'm consuming other information," he said. "If you're excited about what you do, learning comes naturally because it's a genuine interest. Then learning happens when you don't expect it."



Quote for the day:

"Doing what you love is the cornerstone of having abundance in your life." -- Wayne Dyer

Daily Tech Digest - January 12, 2025

Data Architecture Trends in 2025

While unstructured data makes up the lion’s share of data in most companies (typically about 80%), structured data does its part to bulk up businesses’ storage needs. Sixty-four percent of organizations manage at least one petabyte of data, and 41% of organizations have at least 500 petabytes of data, according to the AI & Information Management Report. By 2028, global data creation is projected to grow to more than 394 zettabytes – and clearly enterprises will have more than their fair share of that. Time to open the door to the data lakehouse, which combines the capabilities of data lakes and data warehouses, simplifying data architecture and analytics with unified storage and processing of structured, unstructured, and semi-structured data. “Businesses are increasingly investing in data lakehouses to stay competitive,” according to MarketResearch, which sees the market growing at a 22.9% CAGR to more than $66 billion by 2033. ... “Through 2026, two-thirds of enterprises will invest in initiatives to improve trust in data through automated data observability tools addressing the detection, resolution, and prevention of data reliability issues,” according to Matt Aslett.


How Does a vCISO Leverage AI?

CISOs design and inform policy that shapes security at a company. They inform the priorities of their organizations’ cyberdefense deployment and design, develop, or otherwise acquire the tools needed to achieve the goals they set. They implement tools and protections, monitor effectiveness, make adjustments, and generally ensure that security functions as desired. However, all that responsibility comes at immense cost, and CISOs are in high demand. It can be challenging to recruit and retain top-level talent for the role, and many smaller or growing organizations—and even some larger, older ones—do not employ a traditional, full-time CISO. Instead, they often turn to vCISOs. This is far from a compromise, as vCISOs offer all of the same functionality as their traditional counterparts through an entire team of dedicated service providers rather than a single employee. Since vCISOs are available on a fractional basis, organizations only pay for the specific services they need. ... As with all technological breakthroughs, AI is not without its risks and drawbacks. Thankfully, working with a vCISO allows organizations to take advantage of all the benefits of AI while minimizing its potential downsides. A capable vCISO team doesn’t use AI or any other tool just for the sake of novelty or appearances; their choices are always strategic and risk-informed.


The Transformative Benefits of Enterprise Architecture

Enterprise Architecture review or development is essential for managing complexity, particularly when changes involve multiple systems with intricate interdependencies. ... Enterprise Architecture provides a structured approach to handle these complexities effectively. Often, key stakeholders, such as department heads, project managers, or IT leaders, identify areas of change required to meet new business goals. For example, an IT leader may highlight the need for system upgrades to support a new product launch, or a department head might identify process inefficiencies impacting customer satisfaction. These stakeholders are integral to the change process, and the role of the architect is to:
- Identify and refine the requirements of the stakeholders;
- Develop architectural views that address concerns and requirements;
- Highlight trade-offs needed to reconcile conflicting concerns among stakeholders.
Without Enterprise Architecture, it is highly unlikely that all stakeholder concerns and requirements will be comprehensively addressed. This can lead to missed opportunities, unanticipated risks, and inefficiencies, such as misaligned systems, redundant processes, or overlooked security vulnerabilities, all of which can undermine business goals and stakeholder trust.


Listen to your technology users — they have led to the most disruptive innovations in history

First, create a culture of open innovation that values insights from outside the organization. While the technical geniuses in your R&D department are experts in how to build something new, they aren’t the only authorities on what it is you should build. Our research suggests that it’s especially important to seek out user-generated disruption at times when customer needs are changing rapidly. Talk to your customers and create channels for dialogue and engagement. Most companies regularly survey users and conduct focus groups. But to identify truly disruptive ideas, you need to go beyond reactions to existing products and plumb unmet needs and pain points. Customer complaints also offer insight into how existing solutions fall short. AI tools make it easier to monitor user communities online and analyze customer feedback, reviews, and complaints. Keep a finger on the pulse of social media and online user communities where people share innovative ways to adapt existing products and wish lists for new functionalities. ... Lastly, explore co-creation initiatives that foster direct collaboration with user innovators. For instance, run a contest where customers submit ideas for new products or features, some of which could turn out to be truly disruptive. Or sponsor hackathons that bring together users with needs and technical experts to design solutions.


Guide to Data Observability

Data observability is critical for modern data operations because it ensures systems are running efficiently, detecting anomalies, finding root causes, and actively addressing data issues before they can impact business outcomes. Unlike traditional monitoring, which focuses only on system health or performance metrics, observability provides insights into why something is wrong and allows teams to understand their systems in a more efficient way. In the digital age, where companies rely heavily on data-driven decisions, data observability isn’t only an operational concern but a critical business function. ... When we talk about data observability, we’re focusing on monitoring the data that flows through systems. This includes ensuring data integrity, reliability, and freshness across the lifecycle of the data. It’s distinct from database observability, which focuses more on the health and performance of the databases themselves. ... On the other hand, database observability is specifically concerned with monitoring the performance, health, and operations of a database system—for example, an SQL or MongoDB server. This includes monitoring query performance, connection pools, memory usage, disk I/O, and other technical aspects, ensuring the database is running optimally and serving requests efficiently.
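As a minimal illustration of one signal data observability tools track, here is a sketch of a freshness check (the one-hour threshold is an arbitrary example):

```python
from datetime import datetime, timedelta, timezone

def check_freshness(last_loaded_at, max_age=timedelta(hours=1)):
    """Flag a dataset as stale when its latest load is older than max_age,
    one of the basic reliability signals an observability tool monitors."""
    age = datetime.now(timezone.utc) - last_loaded_at
    return {"age_seconds": age.total_seconds(), "stale": age > max_age}

recent = datetime.now(timezone.utc) - timedelta(minutes=5)
old = datetime.now(timezone.utc) - timedelta(hours=3)
print(check_freshness(recent)["stale"])  # False
print(check_freshness(old)["stale"])     # True
```

Real platforms layer many such checks (freshness, volume, schema, null rates) and, crucially, correlate failures across pipelines to point at a root cause rather than just raising alarms.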


Data maturity and the squeezed middle – the challenge of going from good to great

Breaking through this stagnation does not require a complete overhaul. Instead, businesses can take small but decisive steps. First, they must shift their mindset from seeing data collection as an end in itself, to viewing it as a tool for creating meaningful customer interactions. This means moving beyond static metrics and broad segmentations to dynamic, real-time personalisation. The use of artificial intelligence (AI) can be transformative in this regard. Modern AI tools can analyse customer behaviour in real time, enabling businesses to respond with tailored content, promotions, and experiences. For instance, rather than relying on broad-brush email campaigns, companies can use AI-driven insights to craft (truly) hyper-personalised messages based on individual customer journeys. Such efforts not only improve conversion rates, but also build deeper customer loyalty. ... It’s important to never lose sight of the fact that data maturity is about people and culture as much as tech. Organisations need to foster a culture that values experimentation, learning, and continuous improvement. Behaviourally, this can be uncomfortable for slow-moving or cautious businesses and requires breaking down silos and encouraging cross-functional collaboration. 


Finding a Delicate Balance with AI Regulation and Innovation

The first focus needs to be on protecting individuals and diverse groups from the misuse of AI. We need to ensure transparency when AI is used, which in turn will limit the number of mistakes and biased outcomes, and when errors are still made, transparency will help rectify the situation. It is also essential that regulation tries to prevent AI from being used for illegal activity, including fraud, discrimination, faking documents, and creating deepfake images and videos. It should be a requirement for companies of a certain size to have an AI policy in place that is publicly available for anyone to consult. The second focus should be protecting the environment. Due to the amount of energy needed to train AI, store the data and deploy the technology once it’s ready for market, AI innovation comes at a great cost to the environment. It shouldn’t be a zero-sum game, and legislation should nudge companies to create AI that is respectful of our planet. The third and final key focus is data protection. Thankfully there is strong regulation around data privacy and management: the Data Protection Act in the UK and GDPR in the EU are good examples. AI regulation should work alongside existing data regulation and protect the huge steps that have already been taken.


Quantum Machine Learning for Large-Scale Data-Intensive Applications

Quantum machine learning (QML) represents a novel interdisciplinary field that merges principles of quantum computing with machine learning techniques. The foundation of quantum computing lies in the principles of quantum mechanics, which govern the behavior of subatomic particles and introduce phenomena such as superposition and entanglement. These quantum properties enable quantum computers to perform computations probabilistically, offering potential advantages over classical systems in specific computational tasks ... Integrating quantum machine learning (QML) with traditional machine learning (ML) models is an area of active research, aiming to leverage the advantages of both quantum and classical systems. One of the primary challenges in this integration is the necessity for seamless interaction between quantum algorithms and existing classical infrastructure, which currently dominates the ML landscape. Despite the resource-intensive nature of classical machine learning, which necessitates high-speed computer hardware to train state-of-the-art models, researchers are increasingly exploring the potential benefits of quantum computing to optimize and expedite these processes.
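The superposition principle QML builds on can be illustrated with a toy single-qubit statevector simulation in plain Python (a pedagogical sketch, not how real quantum hardware or QML frameworks operate):

```python
import math

# A single-qubit state is a pair of amplitudes [amp0, amp1]; |0> is [1, 0].
def hadamard(state):
    """Apply the Hadamard gate, which puts a basis state into an equal
    superposition of |0> and |1>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def measure_probs(state):
    """Born rule: the probability of each outcome is the squared
    magnitude of its amplitude."""
    return [abs(amp) ** 2 for amp in state]

state = hadamard([1.0, 0.0])   # |0> -> (|0> + |1>) / sqrt(2)
print(measure_probs(state))    # ~[0.5, 0.5], up to float rounding
```

A classical simulation like this needs memory exponential in the number of qubits, which is precisely why native quantum hardware is attractive for certain computations.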


Generative Architecture Twins (GAT): The Next Frontier of LLM-Driven Enterprise Architecture

A Generative Architecture Twin (GAT) is a virtual, LLM-coordinated environment that mirrors — and continuously evolves with — your actual production architecture. ... Despite the challenges, Generative Architecture Twins represent an ambitious leap forward. They propose a world where:
- Architectural decisions are no longer static but evolve with real-time feedback loops.
- Compliance, security, and performance are integrated from day one rather than tacked on later.
- EA documentation isn’t a dusty PDF but a living blueprint that changes as the system scales.
- Enterprises can experiment with high-risk changes in a safe, cost-controlled manner, guided by autonomous AI that learns from every iteration.
As we refine these concepts, expect to see the first prototypes of GAT in innovative startups or advanced R&D divisions of large tech enterprises. A decade from now, GAT may well be as ubiquitous as DevOps pipelines are today. Generative Architecture Twins (GAT) go beyond today’s piecemeal LLM usage and envision a closed-loop, AI-driven approach to continuous architectural design and validation. By combining digital twins, neuro-symbolic reasoning, and ephemeral simulation environments, GAT addresses long-standing EA challenges like stale documentation, repetitive compliance overhead, and costly rework.


Is 2025 the year of (less cloud) on-premises IT?

For an external view here outside of OWC, Vadim Tkachenko, technology fellow and co-founder at Percona thinks that whether or not we’ll see a massive wave of data repatriation take place in 2025 is still hard to say. “However, I am confident that it will almost certainly mark a turning point for the trend. Yes, people have been talking about repatriation off and on and in various contexts for quite some time. I firmly believe that we are facing a real inflection point for repatriation where the right combination of factors will come together to nudge organisations towards bringing their data back in-house to either on-premises or private cloud environments which they control, rather than public cloud or as-a-Service options,” he said. Tkachenko further states that companies across the private sector (and tech in particular) are tightening their purse strings considerably. “We’re also seeing more work on enhanced usability, ease of deployment, and of course, automation. The easier it becomes to deploy and manage databases on your own, the more organizations will have the confidence and capabilities needed to reclaim their data and a sizeable chunk of their budgets,” said the Percona man. It turns out then, cloud is still here and on-premises is still here and… actually, a hybrid world is typically the most prudent route to go down.



Quote for the day:

"The greatest leaders mobilize others by coalescing people around a shared vision." -- Ken Blanchard

Daily Tech Digest - January 11, 2025

Managing Third-Party Risks in the Software Supply Chain

The myriad of third-party risks, such as compromised or faulty software updates, insecure hardware or software components, and insufficient security practices, expands the attack surface of the organization. A security breach in one such third-party entity can ripple through and potentially lead to significant operational disruptions, financial losses and reputational damage to the organization. In view of this, securing not just their own organizations, but also the intricate web of suppliers, vendors and partners that make up their cyber supply chain, is not just an option but a necessity. Needless to say, managing third-party risks is becoming a big challenge for Chief Information Security Officers. Moreover, it may not be enough to manage third-party risks alone; fourth-party risks must be addressed as well. ... Mapping your most critical third-party relationships can identify weak links across your extended enterprise. But to be effective, it needs to go beyond third parties. In many cases, risks are buried within complex subcontracting arrangements and other relationships, within both your supply chain and vendor partnerships. Illuminating your extended network to see beyond third parties is critical to assessing, mitigating and monitoring the risks posed by sub-tier suppliers.


6G, AI and Quantum: Shaping the Future of Connectivity, Computing and Security

Beyond 6G, another transformative technology that will reshape industries in 2025 is quantum computing. This isn’t just about faster processing; it’s about tackling problems that are currently intractable for even the most powerful conventional systems. Think of the implications for AI training itself – imagine feeding massive, complex datasets into quantum-powered algorithms. The potential for breakthroughs in AI research and development is immense. This next-gen computational power is expected to solve complex problems that were previously deemed unsolvable, ushering in a new era of innovation and efficiency. The impact of these developments will be felt in a range of industries such as pharmaceuticals, cryptography and supply chains. For instance, in the pharmaceutical sector, quantum computing is set to speed up drug discovery. ... The rise of distributed cloud models and edge computing will also speed up services and provide value and innovation – placing cloud technology at the centre of every organisation’s strategic roadmap. Leveraging cloud infrastructure allows businesses to rapidly scale AI models, process enormous volumes of data in real-time, and generate actionable insights that facilitate intelligent decision-making. 


Advancing Platform Accountability: The Promise and Perils of DSA Risk Assessments

Multiple risk assessments fail to meaningfully consider risks related to problematic and harmful use and the design or functioning of their service and systems. Facebook’s 2024 risk assessment assesses physical and mental wellbeing in a crosscutting way but does not meaningfully consider risks related to excessive use or addiction. Other assessments more centrally consider physical and mental well-being risks. ... Snap’s risk assessment devotes seven pages to physical and mental well-being risks, but the assessment fails to consider how platform design could contribute to physical and mental well-being risks by incentivizing problematic or harmful use. Snap’s assessment is broadly focused on risks related to harmful content. The assessment describes mitigations to reduce the prevalence of such content that could impact physical and mental well-being – including auto-moderating for abusive content or ensuring recommender systems do not recommend violative content. This, of course, is important. However, the risk assessment and review of mitigations place almost no emphasis on risks of excessive use actually driven by Snap’s design. Snap’s focus on ephemeral content is presented as only a benefit – “conversations on Snapchat delete by default to reflect real-life conversations.”


Hard and Soft Skills Go Hand-in-Hand — These Are the Ones You Need to Sharpen This Year

To most effectively harness the power of AI in 2025, leaders need to understand it. DataCamp's Matt Crabtree describes AI literacy, at its most basic, as having the skills and competencies required to use AI technologies and applications effectively. But it's much more than that: Crabtree points out that AI literacy is also about enabling people to make informed decisions about how they're using AI, understand the implications of those uses and navigate the ethical considerations they present. For leaders, that means understanding biases that remain embedded in AI systems, privacy concerns, and the need for transparency and accountability. Say you're looking to integrate AI into your hiring process, as we have at my company, Jotform. It's important to understand that while it can be used for tasks like scheduling interviews, screening resumes for objective criteria or helping to organize candidate information, it should not be making hiring decisions for you. AI still has a significant bias problem, in addition to the many other ways in which it lacks the soft skills required for certain, human-only tasks. AI literacy is about understanding its shortcomings and navigating them in a way that is fair and equitable.


The Tech Blanket: Building a Seamless Tech Ecosystem

The days of disconnected platforms are over. In 2025, businesses will embrace platform interoperability to ensure that knowledge and data flow seamlessly across departments. Think of your organization’s technology as a woven blanket—each tool and system represents a thread that, when tightly interwoven, creates a strong, cohesive layer of support that covers your entire company. ... Building a seamless ecosystem begins with establishing a framework for managing distributed information. By creating a Knowledge Asset Center of Excellence, organizations can define norms for how data and knowledge are shared and governed. This approach fosters collaboration while allowing teams the flexibility to work in ways that suit their unique needs. ... As platforms become more interconnected, ensuring robust security becomes critical. Data breaches or inaccuracies in one tool can ripple across the ecosystem, creating significant risks. Leaders must prioritize tools with advanced security features, such as encryption and role-based access controls, to protect sensitive information while maintaining seamless interoperability. Strong data governance policies are also essential. By continuously monitoring data flow and usage, organizations can safeguard the integrity of their knowledge assets while promoting responsible collaboration.


WebAssembly and Containers’ Love Affair on Kubernetes

WebAssembly is showing promise on Kubernetes now that Wasm modules can be packaged as OCI artifacts, which makes them distributable through standard OCI registries and usable by Kubernetes tooling built around the OCI artifact format. It also involves compatibility with Kubernetes pods, storage interfaces and more. In that respect, it’s one step toward using Wasm as an alternative to containers. Additionally, through containerd, WebAssembly components can be distributed side by side with containers in Kubernetes environments. Zhou likened this to a drop-in replacement for a pod’s containers, integrating with tools such as Istio, Dapr and OpenTelemetry Collector. ... When running applications through WebAssembly as sidecars in a cluster, the two main challenges involve distribution and deployment, as Zhou outlined. A naive approach bundles the Wasm runtime into a container, but a better method offloads the Wasm runtime into the shim process in containerd. This approach allows Kubernetes to orchestrate Wasm workloads. The OCI artifact format for WebAssembly, which lets Wasm components use the same distribution mechanisms as containers, handles the distribution part, Zhou said.
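Wiring a Wasm shim into Kubernetes is mostly a matter of declaring a RuntimeClass and pointing pods at it. Here is a minimal sketch of the two manifests as Python dicts, assuming a containerd Wasm shim (for example, runwasi's wasmtime shim) is installed on the nodes; the handler name and image reference are illustrative, not taken from Zhou's talk:

```python
# Minimal sketch: a RuntimeClass routes pods to a containerd Wasm shim.
# Assumptions: a shim is registered in containerd under the "wasmtime"
# handler name; the image reference is a placeholder.
runtime_class = {
    "apiVersion": "node.k8s.io/v1",
    "kind": "RuntimeClass",
    "metadata": {"name": "wasm"},
    # "handler" must match the runtime name configured in containerd.
    "handler": "wasmtime",
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "wasm-sidecar-demo"},
    "spec": {
        # Schedules this pod onto the Wasm-capable runtime above.
        "runtimeClassName": "wasm",
        "containers": [
            # A Wasm component pushed to an OCI registry as an artifact
            # is referenced like any other image.
            {"name": "wasm-app", "image": "registry.example.com/app:wasm"},
        ],
    },
}
```

Because the Wasm component travels as an OCI artifact, the pull, cache and distribution machinery is exactly the same as for regular container images.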


Training Employees for the Future with Digital Humans

Digital humans leverage a host of advanced technologies, large language models, retrieval-augmented generation, and intelligent AI orchestrators, among them. They also use unique techniques like kinesthetic learning, or “learning by doing,” alongside on-screen visuals to better illustrate more complicated topics. Note that digital humans are not like traditional chatbots that follow structured dialog trees. Instead, they can respond dynamically to the employee's inputs to ensure interactions are as lifelike as possible. ... By allowing employees to apply their training in real-world scenarios, digital humans help them retain more information in a shorter amount of time, reducing traditional training timelines significantly. As a result, businesses will spend less money and time reskilling personnel. The training possibilities with digital humans are vast, helping employees learn to use new technologies and systems. In a sales setting, personnel can practice using new generative AI-powered customer service tools while a digital human pretends to be a customer. Digital humans could also help engineers in the automotive space learn how to use machine-learning solutions or operate 3D printing machines.


From Silos to Synergy: Transforming Threat Intelligence Sharing in 2025

Put simply, organizations must break down the silos between ALL teams involved in security. This is not just about understanding the organization’s cyber hygiene; it is also about understanding the layers an attacker would have to get through to exploit the business and conduct potentially nefarious activities within it. Once this insight is gained, teams can work through requirements and align the CTI program for specific stakeholders. This means that both offense and defense teams are working together, mapping out the attack path and gaining a better understanding of defense. Doing this also sharpens the offensive picture: teams scout what could be effective for an attacker, then go a layer deeper to consider what might be vulnerable and whether mitigating controls are in place to provide additional prevention. ... In the past, teams working on-site together would document their work on a whiteboard. Now, with the advent of remote working, there are fewer opportunities to share in person, and a plethora of communication channels that lead to knowledge fragmentation as different people use different tools such as Slack or other messaging platforms, or would just share intelligence one-on-one.


Explained: The Multifaceted Nature of Digital Twins

Beyond operational improvements, digital twins also drive innovation at scale. Large enterprises with multiple R&D hubs can test new designs or processes in a virtual environment before deploying them globally. For example, an automotive company developing an electric vehicle can simulate how it will perform under different driving conditions, regulatory frameworks and consumer preferences in diverse markets - all within a digital twin. ... Building and maintaining a digital twin requires significant investment in IoT infrastructure, cloud computing, AI and skilled personnel. For many companies, particularly small and medium-sized enterprises, these costs can be prohibitive. A McKinsey study highlights that digital maturity - the ability to effectively integrate and utilize advanced technologies - is often a key barrier. Seventy-five percent of companies that have adopted digital-twin technologies are those that have achieved at least medium levels of complexity. Large enterprises can justify the cost of digital twins by applying them across multiple facilities or product lines, but for smaller companies, the benefits may not scale as effectively, making it harder to achieve a return on investment.


Design Patterns for Building Resilient Systems

You may have some parts of your system that are degrading in performance and causing cascading failures everywhere. So that means that when your client requests a specific part that’s working fine, it’s great, but you want to stop immediately what’s causing the fire. That way, you have different load balancing rules, like the ones I’ve defined here, that say: okay, this part of our system is degrading in performance; it’s starting to affect everything else, and failures are cascading. We’re just going to stop it so you can’t even make a request to this route, because it’s the one causing all the issues. Having your clients handle that failed request gracefully can be incredibly important, because then the rest of your system can still work. Maybe some particular routes you’re defining aren’t going to work; some parts of your system will just be unavailable, but it’s not taking down the entire thing. Ultimately, what I’m talking about there is bulkheads. ... Now, while the CrowdStrike incident didn’t directly affect me, it sure did indirectly, because I knew about it right away from the alarms based on metrics. When used correctly within context, design patterns allow you to build a resilient system. Now, everything we had in place for resilience helped; they worked. But as always, when something like this happens, it makes you re-evaluate specific individual contexts.
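The bulkhead idea, capping how much of your capacity any one dependency can consume so its failure stays contained, can be sketched in a few lines. This is a minimal illustration of the pattern, not the speaker's implementation:

```python
import threading

class Bulkhead:
    """Caps concurrent calls to one dependency so a degraded
    component cannot exhaust capacity for the whole system."""

    def __init__(self, max_concurrent: int):
        self._slots = threading.BoundedSemaphore(max_concurrent)

    def call(self, fn, *args):
        # Fail fast instead of queuing: a full bulkhead means the
        # dependency is saturated, so the caller should degrade
        # gracefully rather than pile up waiting requests.
        if not self._slots.acquire(blocking=False):
            raise RuntimeError("bulkhead full - degrade gracefully")
        try:
            return fn(*args)
        finally:
            self._slots.release()

bh = Bulkhead(max_concurrent=2)
result = bh.call(lambda x: x * 2, 21)  # → 42
```

When the bulkhead is full, callers get an immediate, handleable error for that one route while the rest of the system keeps serving traffic.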



Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe

Daily Tech Digest - January 10, 2025

Meta puts the ‘Dead Internet Theory’ into practice

In the old days, when Meta was called Facebook, the company wrapped every new initiative in the warm metaphorical blanket of “human connection”—connecting people to each other. Now, it appears Meta wants users to engage with anyone or anything—real or fake doesn’t matter, as long as they’re “engaging,” which is to say spending time on the platforms and money on the advertised products and services. In other words, Meta has so many users that the only way to continue its previous rapid growth is to build users out of AI. The good news is that Meta’s “Dead Internet” projects are not going well. ... Meta is testing a program called “Creator AI,” which enables influencers to create AI-generated bot versions of themselves. These bots would be designed to look, act, sound, and write like the influencers who made them, and would be trained on the wording of their posts. The influencer bots would engage in interactive direct messages and respond to comments on posts, fueling the unhealthy parasocial relationships millions already have with celebrities and influencers on Meta platforms. The other “benefit” is that the influencers could “outsource” fan engagement to a bot. ... “We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do,” Connor Hayes, vice president of product for generative AI at Meta, said.


Experts Highlight Flaws within Government’s Data Request Mandate Under DPDP Rules 2025

Tech Lawyer Varun Sen Bahl also points out the absence of an appellate mechanism for such ‘calls for information’ by the Central government, explaining that such an appeal process only extends against orders of the Data Protection Board. He explains, “This is problematic because it leaves Data Fiduciaries and Data Principals with no clear recourse against excessive data collection requests made under Section 36 read with Rule 22“. Bahl also notes that the provision lacks specific mention of guardrails like the European Union’s data minimisation principle under the General Data Protection Regulation (GDPR) while furnishing such information requests. ... Roy argues that the compliance burdens on Data Fiduciaries will increase and aggravate through sweeping requests and by invoking the non-disclosure clause. To explain, he cites the case of the Razorpay-AltNews situation in 2022, when the Government accessed the names and transaction details of the news platform’s donors via Razorpay ... To ensure that government officers and agencies don’t abuse this provision, Roy explains that “Fiduciaries must [as part of corporate governance] give periodic reports of the number of such demands”. Similarly, law enforcement and other agencies should also submit periodic reports of such requests to the Data Protection Board comprising details of cases where the non-disclosure clause is invoked.


How Edge Computing can Give OEMs a Competitive Advantage

Latency matters in warehouse automation too. Performing predictive maintenance on a shoe sorter, for example, could require real-time monitoring of actuators that perform diverts every 40 milliseconds. Component-level computing power allows the system to respond to changing conditions with speed and efficiency levels that simply wouldn’t be possible with a cloud-based system. ... Edge components can also communicate with a system’s programmable logic controllers (PLCs), making their data immediately available to end users. Supporting software on the customer’s local network interprets this information, enabling predictive maintenance and other real-time insights while tracking historical trends over time. ... Edge technology enables you to build assets that deliver higher utilization to your customers. Much of this benefit comes from the greater efficiencies of predictive maintenance. Users have less downtime because unnecessary service is reduced or eliminated, and many problems can be resolved before they cause unplanned shutdowns. Smart components can also deliver more process consistency. Ordinarily, parts degrade over time, gradually losing speed and/or power. With edge capabilities, they can continuously adapt to changing conditions, including varying parcel weights and normal wear.
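The component-level predictive-maintenance check described above can be sketched as a rolling-window monitor that flags an actuator whose divert cycle times are drifting upward, a typical early sign of wear. All thresholds here are invented numbers for illustration:

```python
from collections import deque
from statistics import mean

class ActuatorMonitor:
    """Edge-side check: flags a sorter actuator whose divert cycle
    times drift above a limit. Thresholds are made-up numbers."""

    def __init__(self, nominal_ms: float = 40.0, tolerance: float = 0.15,
                 window: int = 50):
        self.limit_ms = nominal_ms * (1 + tolerance)   # 46 ms alert line
        self.samples = deque(maxlen=window)            # rolling window

    def record(self, cycle_ms: float) -> bool:
        """Record one divert cycle; True means schedule maintenance."""
        self.samples.append(cycle_ms)
        return mean(self.samples) > self.limit_ms

monitor = ActuatorMonitor()
healthy = monitor.record(40.2)   # nominal cycle time: no alert
```

Running a check like this on the component itself, rather than round-tripping each 40 ms sample to the cloud, is exactly the latency argument the article is making.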


Have we reached the end of ‘too expensive’ for enterprise software?

LLMs are now changing the way companies approach problems that are difficult or impossible to solve algorithmically, although the term “language” in Large Language Models is misleading. ... GenAI enables a variety of features that were previously too complex, too expensive, or completely out of reach for most organizations because they required investments in customized ML solutions or complex algorithms. ... Companies need to recognize generative AI for what it is: a general-purpose technology that touches everything. It will become part of the standard software development stack, as well as an integral enabler of new or existing features. Ensuring the future viability of your software development requires not only acquiring AI tools for software development but also preparing infrastructure, design patterns and operations for the growing influence of AI. As this happens, the role of software architects, developers, and product designers will also evolve. They will need to develop new skills and strategies for designing AI features, handling non-deterministic outputs, and integrating seamlessly with various enterprise systems. Soft skills and collaboration between technical and non-technical roles will become more important than ever, as pure hard skills become cheaper and more automatable.
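One of the new skills mentioned, handling non-deterministic outputs, often reduces in practice to validate-and-retry around the model call. A hedged sketch, with a stand-in `generate` callable in place of a real LLM client and a hypothetical key list as the "schema":

```python
import json

def call_with_validation(generate, schema_keys, max_retries: int = 3):
    """Wraps a non-deterministic generator (e.g. an LLM call, passed
    in as `generate`) with output validation and retry. `generate`
    and `schema_keys` are illustrative stand-ins, not a real API."""
    for _ in range(max_retries):
        raw = generate()
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry rather than crash
        if all(key in data for key in schema_keys):
            return data  # structurally valid: safe to hand downstream
    raise ValueError("no valid output after retries")

# Simulated model: first response is malformed, second is valid.
outputs = iter(['not json', '{"intent": "refund"}'])
result = call_with_validation(lambda: next(outputs), ["intent"])
```

The enterprise systems downstream only ever see validated, structured data, which is what makes a probabilistic component safe to embed in a deterministic stack.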


Is prompt engineering a 'fad' hindering AI progress?

Motivated by the belief that "a well-crafted prompt is essential for obtaining accurate and relevant outputs from LLMs," aggressive AI users -- such as ride-sharing service Uber -- have created whole disciplines around the topic. And yet, there is a reasoned argument to be made that prompts are the wrong interface for most users of gen AI, including experts. "It is my professional opinion that prompting is a poor user interface for generative AI systems, which should be phased out as quickly as possible," writes Meredith Ringel Morris, principal scientist for Human-AI Interaction for Google's DeepMind research unit, in the December issue of computer science journal Communications of the ACM. Prompts are not really "natural language interfaces," Morris points out. They are "pseudo" natural language, in that much of what makes them work is unnatural. ... In place of prompting, Morris suggests a variety of approaches. These include more constrained user interfaces with familiar buttons to give average users predictable results; "true" natural language interfaces; or a variety of other "high-bandwidth" approaches such as "gesture interfaces, affective interfaces (that is, mediated by emotional states), direct-manipulation interfaces


Building Resilience Into Cyber-Physical Systems Has Never Been This Mission-Critical

In our quest for cyber resilience, we sometimes—mistakenly—fixate on hypothetical doomsday scenarios. While this apocalyptic and fear-based thinking can be an instinctual response to the threats we face, it is not realistic or helpful. Instead, we must champion the progress, even incremental, that is achievable through focused, pragmatic measures—like cyber insurance. By reframing discussions around tangible outcomes such as financial stability and public safety, we can cultivate a clearer sense of priorities. Regulatory frameworks may eventually align incentives towards better cybersecurity practices, but in the interim, transferring risk via a measure like cyber insurance offers a potent mechanism to enhance visibility into risk mitigation strategies and implement better cyber hygiene accordingly. By quantifying potential losses and incentivizing proactive security measures, cyber insurance can catalyze a necessary, and overdue, cultural shift towards resilience-oriented practices—and a safer world. We stand at a pivotal moment in American critical infrastructure cybersecurity. As hackers threaten to sabotage our vital systems for ransom, the financial damage ensuing from incidents like the one at Halliburton obliges us to stay alert and act proactively.


Don't Fall Into the 'Microservices Are Cool' Trap and Know When to Stick to Monolith Instead

Over time, as monolith applications become less and less maintainable, some teams decide that the only way to solve the problem is to start refactoring by breaking their application into microservices. Other teams make this decision just because "microservices are cool." This process takes a lot of time and sometimes brings even more maintenance overhead. Before going into this, it's crucial to carefully consider all the pros and cons and ensure you've reached your current monolith architecture's limits. And remember, it is easier to break than to build. ... As you can see, the modular monolith is the way to get the best of both worlds. It is like running independent microservices inside a single monolith while avoiding the collateral microservices overhead. One limitation is that you cannot scale different modules independently: you will have as many monolith instances as required by the most loaded module, which may lead to excessive resource consumption. The other drawback is limited freedom to use different technologies for different modules. ... When running a monolith application, you can usually maintain a simpler infrastructure. Options like virtual machines or cloud compute services (such as AWS EC2) will suffice. Also, you can handle much of the scaling, configuration, upgrades, and monitoring manually or with simple tools.
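The modular-monolith idea, microservice-style boundaries inside one process, can be sketched in a few lines. The module names and data are illustrative:

```python
# Modular monolith sketch: each "module" owns its own data and exposes
# a narrow interface; modules call each other in-process instead of
# over the network. Names are illustrative.

class BillingModule:
    def __init__(self):
        self._invoices = {}          # data owned only by Billing

    def create_invoice(self, order_id: str, amount: float) -> str:
        invoice_id = f"inv-{order_id}"
        self._invoices[invoice_id] = amount
        return invoice_id

class OrderModule:
    def __init__(self, billing: BillingModule):
        self._billing = billing      # depends on the interface, not tables

    def place_order(self, order_id: str, amount: float) -> str:
        # A plain method call: microservice-style boundary,
        # zero RPC/serialization overhead.
        return self._billing.create_invoice(order_id, amount)

app_billing = BillingModule()
app_orders = OrderModule(app_billing)
invoice = app_orders.place_order("42", 99.0)  # → "inv-42"
```

Because modules never reach into each other's private state, each one can later be extracted into a real service if and when the monolith genuinely hits its limits.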


SEC rule confusion continues to put CISOs in a bind a year after a major revision

“There is so much fear out there right now because there is a lack of clarity,” Sullivan told CSO. “The government is regulating through enforcement actions, and we get incomplete information about each case, which leads to rampant speculation.” As things stand, CISOs and their colleagues must chart a tricky course in meeting reporting requirements in the event of a cyber security incident or breach, Shusko says. That means anticipating the need to deal with reporting requirements by making compliance preparation part of any incident response plan, Shusko says. If they must make a cyber incident disclosure, companies should attempt to be compliant and forthcoming while seeking to avoid releasing information that could inadvertently point towards unresolved security shortcomings that future attackers might be able to exploit. ... Given that clarity around disclosure isn’t always straightforward, there is no real substitute for preparedness, and that makes it essential to practise situations that would require disclosure through tabletops and other exercises, according to Simon Edwards, chief exec of security testing firm SE Labs. “Speaking as someone who is invested heavily in the security of my company, I’d say that the most obvious and valuable thing a CISO can do is roleplay through an incident.”


How adding capacity to a network could reduce IT costs

Have you heard the phrase “bandwidth economy of scale?” It’s a sophisticated way of saying that the cost per bit to move a lot of bits is less than it is to move a few. In the decades that information technology evolved from punched cards to PCs and mobile devices, we’ve taken advantage of this principle by concentrating traffic from the access edge inward to fast trunks. ... Higher capacity throughout the network means less congestion. It’s old-think, they say, to assume that if you have faster LAN connections to users and servers, you’ll admit more traffic and congest trunks. “Applications determine traffic,” one CIO pointed out. “The network doesn’t suck data into it at the interface. Applications push it.” Faster connections mean less congestion, which means fewer complaints, and more alternate paths to take without traffic delay and loss, which also reduces complaints. In fact, anything that creates packet loss, outages, even latency, creates complaints, and addressing complaints is a big source of opex. The complexity comes in because network speed impacts user/application quality of experience in multiple ways, ways beyond the obvious congestion impacts. When a data packet passes through a switch or router, it’s exposed to two things that can delay it.
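The "bandwidth economy of scale" claim can be made concrete with a toy cost model, fixed platform cost amortized over capacity plus a per-unit cost that shrinks with volume. Every number below is invented; only the shape of the curve matters:

```python
def cost_per_gbps(total_gbps: float, fixed_cost: float = 10_000.0,
                  unit_cost: float = 50.0) -> float:
    """Toy model of bandwidth economy of scale: a fixed platform cost
    amortized over capacity, plus a per-Gbps cost that itself falls
    with volume. Numbers are illustrative, not real pricing."""
    volume_discount = max(0.5, 1.0 - total_gbps / 2000.0)
    return fixed_cost / total_gbps + unit_cost * volume_discount

small_trunk = cost_per_gbps(10)    # low-volume edge link
big_trunk = cost_per_gbps(400)     # concentrated fast trunk
assert big_trunk < small_trunk     # more bits, cheaper per bit
```

This is the arithmetic behind concentrating edge traffic onto fast trunks: the fixed costs of a link dominate at low volume and nearly vanish per bit at high volume.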


Ephemeral environments in cloud-native development

An emerging trend in cloud computing is using ephemeral environments for development and testing. Ephemeral environments are temporary, isolated spaces created for specific projects. They allow developers to swiftly spin up an environment, conduct testing, and then dismantle it once the task is complete. ... At first, ephemeral environments sound ideal. The capacity for rapid provisioning aligns seamlessly with modern agile development philosophies. However, deploying these spaces is fraught with complexities that require thorough consideration before wholeheartedly embracing them. ... The initial setup and ongoing management of ephemeral environments can still incur considerable costs, especially in organizations that lack effective automation practices. If one must spend significant time and resources establishing these environments and maintaining their life cycle, the expected savings can quickly diminish. Automation isn’t merely a buzzword; it requires investment in tools, training, and sometimes a cultural shift within the organization. Many enterprises may still be tethered to operational costs that can potentially undermine the presumed benefits. This seems to be a systemic issue with cloud-native anything.
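The teardown discipline the article calls for maps naturally onto a context manager, so environments are destroyed even when tests fail. The provisioning hooks below are hypothetical stand-ins for real IaC tooling (Terraform, a PaaS API, and so on):

```python
import contextlib
import uuid

CREATED, DESTROYED = [], []

# Hypothetical provisioning hooks - stand-ins for real IaC calls.
def provision(env_id: str) -> None:
    CREATED.append(env_id)

def destroy(env_id: str) -> None:
    DESTROYED.append(env_id)

@contextlib.contextmanager
def ephemeral_environment(prefix: str = "pr"):
    """Spin up an isolated environment and guarantee its teardown,
    even if the work inside the block raises."""
    env_id = f"{prefix}-{uuid.uuid4().hex[:8]}"
    provision(env_id)
    try:
        yield env_id          # run tests against the live environment
    finally:
        destroy(env_id)       # always runs, so environments never leak

with ephemeral_environment() as env:
    pass  # integration tests would run against `env` here
```

Leaked, forgotten environments are one of the main ways the promised cost savings evaporate; encoding teardown in the life cycle itself, rather than in someone's memory, is the kind of automation investment the article describes.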



Quote for the day:

"The best leader brings out the best in those he has stewardship over." -- J. Richard Clarke

Daily Tech Digest - January 09, 2025

It’s remarkably easy to inject new medical misinformation into LLMs

By injecting specific information into this training set, it's possible to get the resulting LLM to treat that information as a fact when it's put to use. This can be used for biasing the answers returned. This doesn't even require access to the LLM itself; it simply requires placing the desired information somewhere where it will be picked up and incorporated into the training data. And that can be as simple as placing a document on the web. As one manuscript on the topic suggested, "a pharmaceutical company wants to push a particular drug for all kinds of pain which will only need to release a few targeted documents in [the] web." ... rather than being trained on curated medical knowledge, these models are typically trained on the entire Internet, which contains no shortage of bad medical information. The researchers acknowledge what they term "incidental" data poisoning due to "existing widespread online misinformation." But a lot of that "incidental" information was generally produced intentionally, as part of a medical scam or to further a political agenda. ... Finally, the team notes that even the best human-curated data sources, like PubMed, also suffer from a misinformation problem. The medical research literature is filled with promising-looking ideas that never panned out, and out-of-date treatments and tests that have been replaced by approaches more solidly based on evidence.


CIOs are rethinking how they use public cloud services. Here’s why.

Where are those workloads going? “There’s a renewed focus on on-premises, on-premises private cloud, or hosted private cloud versus public cloud, especially as data-heavy workloads such as generative AI have started to push cloud spend up astronomically,” adds Woo. “By moving applications back on premises, or using on-premises or hosted private cloud services, CIOs can avoid multi-tenancy while ensuring data privacy.” That’s one reason why Forrester predicts four out of five so called cloud leaders will increase their investments in private cloud by 20% this year. That said, 2025 is not just about repatriation. “Private cloud investment is increasing due to gen AI, costs, sovereignty issues, and performance requirements, but public cloud investment is also increasing because of more adoption, generative AI services, lower infrastructure footprint, access to new infrastructure, and so on,” Woo says. ... Woo adds that public cloud is costly for workloads that are data-heavy because organizations are charged both for data stored and data transferred between availability zones (AZ), regions, and clouds. Vendors also charge egress fees for data leaving as well as data entering a given AZ. “So for transfers between AZs, you essentially get charged twice, and those hidden transfer fees can really rack up,” she says. 
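The "charged twice" point is simple arithmetic. A sketch using $0.01/GB in each direction, a commonly published cross-AZ rate, purely as an example:

```python
def cross_az_transfer_cost(gb: float, rate_per_gb_each_way: float = 0.01) -> float:
    """Many clouds bill intra-region, cross-AZ traffic on both sides:
    once leaving AZ-a and once entering AZ-b. The $0.01/GB figure is a
    commonly published rate, used here purely as an example."""
    return gb * rate_per_gb_each_way * 2   # egress leg + ingress leg

# Shuttling 10 TB a month between two availability zones:
monthly_fee = cross_az_transfer_cost(10_000)   # roughly $200/month
```

For a chatty, data-heavy workload replicating across zones and regions, these per-GB fees recur every month, which is why they feature so prominently in repatriation math.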


What CISOs Think About GenAI

“As a [CISO], I view this technology as presenting more risks than benefits without proper safeguards,” says Harold Rivas, CISO at global cybersecurity company Trellix. “Several companies have poorly adopted the technology in the hopes of promoting their products as innovative, but the technology itself has continued to impress me with its staggeringly rapid evolution.” However, hallucinations can get in the way. Rivas recommends conducting experiments in controlled environments and implementing guardrails for GenAI adoption. Without them, companies can fall victim to high-profile cyber incidents like they did when first adopting cloud. Dev Nag, CEO of support automation company QueryPal, says he had initial, well-founded concerns around data privacy and control, but the landscape has matured significantly in the past year. “The emergence of edge AI solutions, on-device inference capabilities, and private LLM deployments has fundamentally changed our risk calculation. Where we once had to choose between functionality and data privacy, we can now deploy models that never send sensitive data outside our control boundary,” says Nag. “We're running quantized open-source models within our own infrastructure, which gives us both predictable performance and complete data sovereignty.”


Scaling RAG with RAGOps and agents

To maximize their effectiveness, LLMs that use RAG also need to be connected to sources from which departments wish to pull data – think customer service platforms, content management systems and HR systems, etc. Such integrations require significant technical expertise, including experience with mapping data and managing APIs. Also, as RAG models are deployed at scale they can consume significant computational resources and generate large amounts of data. This requires the right infrastructure, the expertise to deploy it, and the ability to manage the data it supports across large organizations. One approach to mainstreaming RAG that has AI experts buzzing is RAGOps, a methodology that helps automate RAG workflows, models and interfaces in a way that ensures consistency while reducing complexity. RAGOps enables data scientists and engineers to automate data ingestion and model training, as well as inferencing. It also addresses the scalability stumbling block by providing mechanisms for load balancing and distributed computing across the infrastructure stack. Monitoring and analytics are executed throughout every stage of RAG pipelines to help continuously refine and improve models and operations.
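The pipeline being operationalized here can be sketched in miniature: retrieve relevant documents, then ground them into the prompt the LLM receives. Keyword-overlap scoring stands in for a real vector store, and all documents and names are illustrative:

```python
# Minimal RAG sketch. A real deployment would use embeddings and a
# vector store; keyword overlap is a stand-in to show the data flow.
DOCS = [
    "Refunds are processed within 5 business days.",
    "Annual leave requests go through the HR portal.",
    "Invoices are emailed on the first of each month.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query, keep top k."""
    query_words = set(query.lower().split())
    score = lambda doc: len(query_words & set(doc.lower().split()))
    return sorted(DOCS, key=score, reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground retrieved context into the prompt sent to the LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How fast are refunds processed?")
```

Everything RAGOps automates, ingestion into `DOCS`, the quality of `retrieve`, and monitoring of what ends up in `prompt`, sits around this core loop.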


Navigating Third-Party Risk in Procurement Outsourcing

Shockingly, only 57% of organisations have enterprise-wide agreements that clearly define which services can or cannot be outsourced. This glaring gap highlights the urgent need to create strong frameworks – not just for external agreements, but also for intragroup arrangements. Internal agreements, though frequently overlooked, demand the same level of attention when it comes to governance and control. Without these solid frameworks, companies are leaving themselves exposed to risks that could have been mitigated with just a little more attention to detail. Ongoing monitoring is also crucial to TPRM; organisations must actively leverage audit rights, access provisions and outcome-focused evaluations. This means assessing operational and concentration risks through severe yet plausible scenarios, ensuring they’re prepared for the worst-case while staying vigilant in everyday operations. ... As the complexity of third-party risk grows, so too does the role of AI and automation. The days of relying on spreadsheets and homegrown databases are long gone. Ed’s thoughts on this topic are unequivocal: “AI and automation are critical as third-party risk becomes increasingly complex. Significant work is required for initial risk assessments, pre-contract due diligence, post-contract monitoring, SLA reviews and offboarding.”


Five Ways Your Platform Engineering Journey Can Derail

Chernev’s first pitfall is when a company tries to start platform engineering by only changing the name of its current development practices, without doing the real work. “Simply rebranding an existing infrastructure or DevOps or SRE practice over to platform engineering without really accounting for evolving the culture within and outside the team to be product-oriented or focused” is a huge mistake ... Another major pitfall, he said, is not having and maintaining product backlogs — prioritized lists of work for the development team — that are directly targeting your developers. “For the groups who have backlogs, they are usually technology-oriented,” he said. “That misalignment in thinking across planning and missing feedback loops is unlikely to move progress forward within the organization. That ultimately leads the initiative to fail to deliver business value. Instead, they should be developer-centric,” said Chernev. ... This is another important point, said Chernev — companies that do not clearly articulate the value-add of their platform engineering charter to both technical and non-technical stakeholders inside their operations will not fully be able to reap the benefits of the platform’s use across the business.


Building generative AI applications is too hard, developers say

Given the number of tools they need to do their job, it’s no surprise that developers are loath to spend a lot of time adding another to their arsenal. Two-thirds of them are only willing to invest two hours or less in learning a new AI development tool, with a further 22% allocating three to five hours, and only 11% giving more than five hours to the task. And on the whole, they don’t tend to explore new tools very often — only 21% said they check out new tools monthly, while 78% do so once every one to six months, and the remaining 2% rarely or never. The survey found that they tend to look at around six new tools each time. ... The survey highlights the fact that, while AI and generative AI are becoming increasingly important to businesses, the tools and techniques required to develop them are not keeping up. “Our survey results shed light on what we can do to help address the complexity of AI development, as well as some tools that are already helping,” Gunnar noted. “First, given the pace of change in the generative AI landscape, we know that developers crave tools that are easy to master.” And, she added, “when it comes to developer productivity, the survey found widespread adoption and significant time savings from the use of AI-powered coding tools.”


AI infrastructure – The value creation battleground

Scaling AI infrastructure isn’t just about adding more GPUs or building larger data centers – it’s about solving fundamental bottlenecks in power, latency, and reliability while rethinking how intelligence is deployed. AI mega clusters are engineering marvels – data centers capable of housing hundreds of thousands of GPUs and consuming gigawatts of power. These clusters are optimized for machine learning workloads with advanced cooling systems and networking architectures designed for reliability at scale. Consider Microsoft’s Arizona facility for OpenAI: with plans to scale up to 1.5 gigawatts across multiple sites, it demonstrates how these clusters are not just technical achievements but strategic assets. By decentralizing compute across multiple data centers connected via high-speed networks, companies like Google are pioneering asynchronous training methods to overcome physical limitations such as power delivery and network bandwidth. Scaling AI is an energy challenge. AI workloads already account for a growing share of global data center power demand, which is projected to double by 2026. This creates immense pressure on energy grids and raises urgent questions about sustainability.


4 Leadership Strategies For Managing Teams In The Metaverse

Leaders must develop new skills and adopt innovative strategies to thrive in the metaverse. Here are some key approaches:

Invest in digital literacy: Leaders must become fluent in the tools and technologies that power the metaverse. This includes understanding VR/AR platforms, blockchain applications and collaborative software such as Slack, Trello and Figma.

Emphasize inclusivity: The metaverse has the potential to democratize access to opportunities, but only if it’s designed with inclusivity in mind. Leaders should ensure that virtual spaces are accessible to employees of all abilities and backgrounds. This might include providing hardware like VR headsets or ensuring platforms support diverse communication styles.

Create rituals for connection: In the absence of physical offices, leaders can foster connection through virtual rituals and gatherings. These activities, from weekly team check-ins to informal virtual “watercooler” chats, help build camaraderie and maintain a sense of community.

Focus on well-being: Effective leaders prioritize employee well-being by setting clear boundaries, encouraging breaks and supporting mental health.


How AI will shape work in 2025 — and what companies should do now

“The future workforce will likely collaborate more closely with AI tools. For example, marketers are already using AI to create more personalized content, and coders are leveraging AI-powered code copilots. The workforce will need to adapt to working alongside AI, figuring out how to make the most of human strengths and AI’s capabilities. “AI can also be a brainstorming partner for professionals, enhancing creativity by generating new ideas and providing insights from vast datasets. Human roles will increasingly focus on strategic thinking, decision-making, and emotional intelligence. ... “Companies should focus on long-term strategy, quality data, clear objectives, and careful integration into existing systems. Start small, scale gradually, and build a dedicated team to implement, manage, and optimize AI solutions. It’s also important to invest in employee training to ensure the workforce is prepared to use AI systems effectively. “Business leaders also need to understand how their data is organized and scattered across the business. It may take time to reorganize existing data silos and pinpoint the priority datasets. To create or effectively implement well-trained models, businesses need to ensure their data is organized and prioritized correctly.”



Quote for the day:

"The world is starving for original and decisive leadership." -- Bryant McGill