Showing posts with label outsourcing. Show all posts

Daily Tech Digest - April 30, 2026


Quote for the day:

"You've got to get up every morning with determination if you're going to go to bed with satisfaction." --George Lorimer



The dreaded IT audit: How to get through it and what to avoid

The article "The dreaded IT audit: how to get through it and what to avoid" from IT Pro encourages organizations to reframe the auditing process as a strategic business asset rather than a burdensome cost center. Successfully navigating an audit requires maintaining a comprehensive, up-to-date inventory of all technology assets—including those used by remote workforces—to ensure security, safety, and insurance compliance. Even startups should establish structured auditing processes, as these evaluations proactively identify vulnerabilities and optimize operational efficiency. To streamline the experience, the article recommends prioritizing high-risk areas, such as software licensing, and utilizing customized spot checks instead of repetitive, standardized reviews that may fail to uncover meaningful insights. Crucially, leaders must adopt an open-minded approach to findings; the goal is to engage in transparent discussions about discovered issues rather than becoming defensive. Key pitfalls to avoid include treating the audit as a one-time administrative hurdle, relying on outdated manual tracking methods, and ignoring the gathered data. Instead, organizations should leverage audit results to inform staff training and drive practical improvements. By viewing the audit as a strategic opportunity for growth, companies can significantly strengthen their cybersecurity posture and ensure long-term sustainability in a digital economy.


Privacy in the AI era is possible, says Proton's CEO, but one thing keeps him up at night

In a wide-ranging interview at the Semafor World Economy Summit, Proton CEO Andy Yen addressed the critical tension between the rapid advancement of artificial intelligence and the fundamental right to digital privacy. Yen voiced significant concerns regarding the current AI trajectory, arguing that the industry's reliance on massive data harvesting inherently threatens individual security. He advocated for a paradigm shift toward "privacy-first AI," where processing occurs locally on user devices or through end-to-end encrypted frameworks to ensure that personal information remains inaccessible to service providers. Unlike the advertising-driven models of Silicon Valley giants, Yen highlighted Proton’s commitment to a subscription-based business model, which avoids the ethical pitfalls of monetizing user data. He also explored the "privacy paradox," observing that while users value their data, they often succumb to the convenience of free platforms. To counter this, Proton is expanding its ecosystem with tools like encrypted email and small language models designed specifically for security. Ultimately, Yen emphasized that the future of the digital economy hinges on stricter regulatory enforcement and the adoption of decentralized technologies that empower users with absolute control over their information, rather than treating them as products to be sold.


Outsourcing contracts weren't built for AI. CIOs are renegotiating now

The rapid advancement of generative artificial intelligence is necessitating a major overhaul of IT outsourcing agreements, as traditional contracts centered on headcount and billable hours prove incompatible with AI-driven efficiency. This InformationWeek article explains that while service providers promise productivity gains of up to 70%, legacy full-time equivalent (FTE) models fail to account for this increased output, leading CIOs to aggressively renegotiate for outcome-based pricing. This shift allows organizations to pay for specific results rather than human time, yet it introduces significant legal complexities. Key concerns include data sovereignty—where proprietary data might inadvertently train a provider's large language model—and intellectual property risks regarding the ownership of AI-generated code. Furthermore, the ability of AI to automate routine tasks is prompting some enterprises to bring previously outsourced functions back in-house, as smaller internal teams can now manage workloads that once required massive offshore cohorts. To navigate these challenges, technical leaders are implementing "gain-sharing" frameworks and rigorous governance standards to manage risks like AI hallucinations and liability. Ultimately, CIOs are assuming a more central role in procurement to ensure that vendor incentives align with genuine innovation and that the financial benefits of automation are captured by the enterprise.
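The "gain-sharing" frameworks the article mentions can be made concrete with a toy calculation. This is a hedged sketch only: the function name, the 30% provider share, and the dollar figures are illustrative assumptions, not terms from any actual contract discussed in the article.

```python
# Illustrative gain-sharing settlement: the provider keeps a share of
# verified savings versus the old FTE-era baseline, aligning its incentive
# with automation rather than with billable headcount.
def gain_share_payment(baseline_cost: float, actual_cost: float,
                       provider_share: float = 0.3) -> float:
    """Return what the enterprise pays the provider.

    baseline_cost:  estimated cost of the work under the legacy FTE model
    actual_cost:    what AI-assisted delivery actually cost
    provider_share: fraction of verified savings paid out as an incentive
    """
    savings = max(baseline_cost - actual_cost, 0.0)
    return actual_cost + provider_share * savings

# Example: a $1.0M baseline delivered for $400k with a 30% provider share.
payment = gain_share_payment(1_000_000, 400_000)
print(f"${payment:,.0f}")  # $580,000
```

Under these assumed terms the enterprise pays $400k in cost plus a $180k incentive, capturing the remaining $420k of savings itself, which is the alignment of incentives the CIOs in the article are negotiating for.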


Bad bots make up 40% of internet traffic

The "2026 Thales Bad Bot Report: Bad Bots in the Agentic Age" reveals a transformative shift in internet traffic, where automated activity now accounts for 53% of all web interactions, surpassing human traffic for the second consecutive year. Malicious "bad bots" alone comprise 40% of global traffic, highlighting a growing threat landscape. A critical finding is the 12.5x surge in AI-driven bot attacks, fueled by the rapid adoption of agentic AI which blurs the lines between legitimate and harmful automation. These advanced bots are increasingly targeting APIs, with 27% of attacks now bypassing traditional interfaces to exploit backend logic directly at machine speed. The financial services sector remains the most vulnerable, suffering 24% of all bot attacks and nearly half of all account takeover incidents. Thales experts, including Tim Chang, emphasize that the primary security challenge has evolved from simple bot identification to the complex analysis of behavioral intent. As AI agents emerge as a new traffic category, organizations must transition to proactive, intent-based defenses that can distinguish between helpful AI agents and malicious automation. This machine-driven era necessitates deeper visibility into API traffic and identity systems to maintain trust and security across modern digital infrastructures.


Incentive drift: Why transformation fails even when everything looks green

In the article "Incentive Drift: Why Transformation Fails Even When Everything Looks Green," Mehdi Kadaoui explores the paradoxical failure of IT transformations that appear successful on paper. The central challenge is "incentive drift"—the structural separation of authority from accountability that leads organizations to optimize for project delivery rather than business value. This drift manifests through several destructive patterns: the "ownership vacuum," where strategy and execution are disconnected; the "budgetary firewall," which isolates capital spending from operational costs; and "language capture," where success definitions are subtly redefined to ensure "green" status. Kadaoui argues that "collective amnesia" often follows, as organizations quietly lower their expectations to avoid acknowledging failure. To resolve this, he proposes making drift "structurally expensive" through three key mechanisms. First, a "value prenup" requires operational leaders to explicitly own and sign off on intended outcomes before development begins. Second, a "cost mirror" forces transparency across budget ledgers. Finally, a "semantic anchor" ensures original goals are read aloud in every governance meeting to prevent meaning erosion. By grounding digital transformation in rigid accountability and linguistic clarity, leadership can ensure that technological outputs translate into genuine, durable enterprise value.


How to Be a Great Data Steward: 6 Core Skills to Build

The article "Core Data Stewardship Skills to Build" emphasizes that effective data stewardship requires a unique blend of technical proficiency, business acumen, and interpersonal skills. High-performing stewards act as "purple people," bridging the gap between IT and business by translating complex technical standards into actionable business practices. Key operational activities include identifying and documenting Critical Data Elements (CDEs), aligning them with precise business terms, and performing data profiling to identify quality issues. Beyond basic documentation, stewards must master data classification to ensure regulatory compliance with frameworks like GDPR or HIPAA. Analytical thinking is essential for interpreting patterns and uncovering root causes of data inconsistencies, while strong communication skills enable stewards to foster a collaborative, data-driven culture. Furthermore, literacy in adjacent domains such as metadata management, master data management (MDM), and the use of modern data catalogs is vital. Ultimately, the role is outcome-driven; stewards do not just manage data for its own sake but focus on ensuring data health to drive measurable organizational value. By combining attention to detail with strategic consistency, data stewards serve as the essential operational guardians who transform raw data into a reliable, high-quality strategic asset for their organizations.
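The data-profiling activity described above can be sketched in a few lines. This is a minimal illustration, not the article's method; the record set, field names, and returned metrics are all hypothetical examples of the checks a steward might run against a Critical Data Element.

```python
# Minimal data-profiling sketch: count rows, nulls, and distinct values for
# one field, and surface the most common value, to flag quality issues early.
from collections import Counter

def profile_field(records, field):
    values = [r.get(field) for r in records]
    non_null = [v for v in values if v not in (None, "")]
    return {
        "rows": len(values),
        "nulls": len(values) - len(non_null),
        "distinct": len(set(non_null)),
        "top": Counter(non_null).most_common(1),
    }

# Hypothetical customer records with a suspect "country" CDE.
customers = [
    {"customer_id": "C1", "country": "DE"},
    {"customer_id": "C2", "country": ""},
    {"customer_id": "C3", "country": "DE"},
]
print(profile_field(customers, "country"))
# {'rows': 3, 'nulls': 1, 'distinct': 1, 'top': [('DE', 2)]}
```

Even a profile this simple turns a vague "the data looks off" into a measurable finding (here, a 33% null rate) that a steward can document and escalate.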


Researchers unearth industrial sabotage malware that predated Stuxnet by 5 years

Researchers from SentinelOne recently uncovered a sophisticated malware framework, dubbed "Fast16," that predates the infamous Stuxnet worm by five years. Active as early as 2005, this discovery shifts the timeline of state-sponsored industrial sabotage, proving that nation-states were deploying cyberweapons against physical infrastructure much earlier than previously understood. Unlike typical espionage tools designed for data theft, Fast16 was engineered for strategic sabotage by targeting high-precision floating-point arithmetic operations within engineering modeling software. By corrupting the logic of the Floating Point Unit (FPU), the malware produced subtly altered outputs in complex simulations, potentially leading to catastrophic real-world failures. The researchers identified three specific engineering programs targeted by the malware, including one previously associated with Iran’s AMAD nuclear program and another widely used in Chinese structural design. The modular nature of Fast16, which utilizes encrypted Lua bytecode, underscores its advanced design and national importance. This finding highlights a historical precedent for cyberattacks on critical workloads in fields such as advanced physics and nuclear research. Ultimately, Fast16 serves as a significant harbinger of modern industrial sabotage, demonstrating that the transition from strategic espionage to physical disruption in cyberspace was already in full swing two decades ago, long before Stuxnet gained global notoriety.
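Why is "subtly altered" floating-point output so dangerous? This is not Fast16's actual mechanism (its internals are not described here), but a toy illustration of the general principle: in an iterative simulation, a per-step relative error far too small to notice compounds into a large divergence.

```python
# Illustration only: a tiny per-iteration bias, invisible in any single
# result, compounds across an iterative simulation into a gross error.
def simulate(steps, corrupt=False):
    x = 1.0
    for _ in range(steps):
        x = x * 1.0001          # stand-in for one modeling iteration
        if corrupt:
            x *= 1 + 1e-6       # sabotaged arithmetic: 0.0001% bias per step
    return x

clean = simulate(100_000)
bad = simulate(100_000, corrupt=True)
print(f"relative drift after 100k steps: {(bad - clean) / clean:.1%}")
```

A one-part-in-a-million bias per step grows to roughly a 10% divergence after 100,000 iterations, which is the kind of silent corruption that could invalidate a structural or physics simulation without tripping any sanity check.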


How AI Is Transforming Business Continuity and Crisis Response

Charlie Burgess’s article, "How AI Is Transforming Business Continuity and Crisis Response," explores the pivotal role of artificial intelligence in navigating the complexities of modern digital and physical risks. As businesses face increasingly non-linear threats, from supply chain disruptions to cyber incidents, the abundance of generated data often leads to information overload. AI addresses this by acting as a sophisticated data analysis tool that parses vast information streams to identify hidden patterns and suppress low-priority noise. This allows crisis teams to focus on critical alerts and early warning signs. Furthermore, AI enhances situational awareness and coordination by correlating disparate system inputs and surfacing standardized playbook responses. During active incidents, technologies like AI-powered cameras provide real-time visibility, aiding in personnel safety and evacuation efforts. Beyond immediate response, AI suggests optimized recovery paths and strategic resource allocation, fostering long-term operational resilience. Ultimately, the integration of AI is not intended to replace human judgment but to empower decision-makers with actionable insights and agility. By bridging the gap between data collection and decisive action, AI transforms business continuity from a reactive necessity into a proactive, evidence-based strategic asset that safeguards both personnel and organizational stability in an unpredictable global landscape.


Europe Gliding Toward Mandatory Online Age Verification

The European Commission is accelerating its push toward mandatory online age verification, driven by the Digital Services Act's requirements to protect minors from harmful content. Central to this initiative is a new age assurance framework and a "technically ready" open-source mobile app designed to allow users to prove they are over a certain age using national identity documents without disclosing their full identity. However, this transition faces intense scrutiny. Security researchers recently identified significant vulnerabilities in the commission's prototype app, labeling it "easily hackable." Furthermore, privacy advocates, such as representatives from Tuta, warn that centralized age verification creates a lucrative "gold mine" for hackers, potentially exacerbating risks like phishing and identity theft. Despite these concerns, European officials like Henna Virkkunen emphasize that the DSA demands concrete action over mere terms of service, particularly following allegations that platforms like Meta have failed to adequately exclude children under thirteen. As several European nations consider raising minimum age requirements for social media, the commission continues to advocate for "robust and non-discriminatory" verification tools that can be integrated into national digital wallets, insisting that ongoing security testing will eventually yield a reliable solution for safeguarding the digital environment for children.


CodeGuardian: A Model Context Protocol Server for AI-Assisted Code Quality Analysis and Security Scanning

"CodeGuardian: A Model Context Protocol Server for AI-Assisted Code Quality Analysis and Security Scanning" introduces a breakthrough tool designed to integrate enterprise-grade security and quality checks directly into AI-powered development environments. Authored by Madhvesh Kumar and Deepika Singh, the article details how CodeGuardian leverages the Model Context Protocol (MCP) to extend coding assistants with eleven specialized analysis tools. This integration eliminates the friction of context-switching by allowing developers to execute security scans, identify hardcoded secrets across multiple layers, and generate compliant Software Bill of Materials (SBOM) using simple natural language prompts. Unlike traditional static analysis tools that merely flag issues, CodeGuardian provides context-aware, "drop-in" code remediations tailored to a project's specific framework and style. A core feature is its cross-layer security reporting, which aggregates findings into a single risk score, exposing systemic vulnerabilities that isolated scanners often miss. By shifting security "left" into the immediate coding workflow, the tool empowers developers to build more resilient software while maintaining high delivery velocity. Ultimately, CodeGuardian represents a pivot toward "agentic" security, where AI assistants act as proactive guardians of code integrity throughout the development lifecycle, effectively bridging the gap between rapid feature delivery and robust organizational compliance.
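CodeGuardian's internals are not detailed in this summary, but the simplest form of one capability it advertises (detecting hardcoded secrets) can be sketched as a pattern scan. The pattern names, regexes, and sample code below are illustrative assumptions, not CodeGuardian's actual rules.

```python
# Hedged sketch of a hardcoded-secret scanner: flag lines matching known
# secret shapes. Real tools layer entropy checks and context on top of this.
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)(api[_-]?key)\s*=\s*['\"][^'\"]{16,}['\"]"),
}

def scan_source(text):
    """Return (line_number, pattern_name) pairs for suspected secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

sample = 'api_key = "sk_live_0123456789abcdef01"\nprint("hello")\n'
print(scan_source(sample))  # [(1, 'generic_api_key')]
```

What MCP adds on top of a scanner like this is the plumbing: the assistant can invoke the scan from a natural-language prompt and feed the findings straight back into a context-aware remediation, rather than leaving the developer to switch tools.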

Daily Tech Digest - November 26, 2025


Quote for the day:

“There is only one thing that makes a dream impossible to achieve: the fear of failure.” -- Paulo Coelho



7 signs your cybersecurity framework needs rebuilding

The biggest mistake, Pearlson says, is failing to recognize that the current plan is out of date or simply not working. Breaches happen, but that doesn’t always mean your cyber framework needs rebuilding. It does, however, indicate that the framework needs to be rethought and redesigned. ... “If your framework hasn’t kept pace with evolving threats or business needs, it’s time for a rebuild.” Cyber threats are always evolving, so staying proactive with regular reviews and fostering a culture of cybersecurity awareness will help catch issues before they become crises, Bucher says. ... “The cybersecurity landscape has evolved rapidly, especially with the rise of generative AI — your framework should reflect these shifts.” McLeod recommends completing a biannual framework review combined with a cursory review during the gap years. “This helps to ensure that the framework stays aligned with evolving threats, business changes, and regulatory requirements.” Ideally, security leaders should always have their security framework in mind while maintaining a rough, running list of areas that could be improved, streamlined, or clarified, McLeod suggests. ... If an organization is stuck in a cycle of continually chasing alerts and incidents, as well as reporting events after the fact instead of performing predictive threat assessments, data analysis, and forward planning, it’s time for a change, Baiati advises.


Your Million-Dollar IIoT Strategy is Being Sabotaged by Hundred-Dollar Radios

The ambition is clear: to create hyper-efficient, data-driven operations in a market expected to exceed $1.6 billion by 2030. Yet, a fundamental paradox lies at the heart of this transformation. While we architect complex digital twins and deploy sophisticated AI models, the foundational tools entrusted to our most valuable asset—the frontline workforce—are often decades old, disconnected, and failing at an alarming rate. ... Data shows that one in four organizations loses more than an entire day of productivity every month simply dealing with broken technology. The primary culprits are as predictable as they are preventable: nearly half of workers cite battery problems (48.4%) and physical damage (46.8%) as the most common causes of failure. ... While conversations about this crisis often focus on pay and career paths, Relay’s research reveals a more immediate, tangible cause: the daily frustration of using broken tools. 1 in 4 frontline workers already feel their equipment is second-class compared to what their corporate counterparts use, and a staggering 43% say they’d be less likely to quit if guaranteed access to modern, automatically upgraded devices. ... Beyond reliability, it’s important to address the data black hole created by legacy, disconnected tools. Every day, frontline teams generate thousands of hours of spoken communication—a rich stream of unstructured data filled with maintenance alerts, safety concerns, and process bottlenecks.


Ask the Experts: Validate, don't just migrate

"Refactoring code is certainly a big undertaking. And if you start before you have good hygiene and governance, then you're just setting yourself up for failure. Similarly, if you haven't tagged properly, you have no way to attribute it to the project, and that becomes a cost problem." ... "If you do conclude [that migration is necessary], then you really must make sure the application is architected right. A lot of times, these workloads weren't designed for the cloud world, so you must adapt them and deliberately architect them for a cloud workload. "[To prepare a mission-critical application], it's key to look at the appropriateness, operating system [and] licenses. Sometimes, there are licenses tied to CPUs or other things that might introduce issues for you as well, so regression, latency and performance testing will be mandatory. ... "[IT leaders must also understand] the risks and costs associated with taking things into the cloud, and the pros and cons of that versus leaving it alone. Because old stuff, whether it was [procured] yesterday or five years ago, is inherently going to be vulnerable from a cybersecurity standpoint. Risk No. 2 is interoperability and compatibility, because old stuff doesn't talk to new stuff. And the third one is supportability, because it's hard to find old people to support old systems. ... "Sometimes, people have the false sense that if it's in cloud, then I'm all set. Everything is available, and everything is highly redundant. And it is, if you design [the application] with those things in mind.


Heineken CISO champions a new risk mindset to unlock innovation

Having started as an auditor and later led a cyber defense team, I know it’s easy to fall into the black-and-white trap of being the function that always says “no” or speaks in cryptic tech jargon. It’s a scary world out there with so many attacks happening in every industry. The classical reaction of most security professionals is to tighten defences and impose even more rules. ... CISOs need to shift the mindset from pure compliance to asking: How does our cyber strategy support the business and its values? What calculated risks do we want the business to take? Where do we need their attention and help to embed security into the DNA of our people and our company? ... Be visible and approachable. Share the lessons that shaped you as a leader, what worked, what didn’t, and the principles that guide your decisions. I’m passionate about building diverse teams where everyone gets the same opportunities, no matter age, gender, or background. Diversity makes us stronger, and when there’s trust and openness, it sparks mentoring, coaching, and knowledge sharing. Make coaching and mentoring non-negotiable, and carve out time for it. It’s easy to push aside when you’re busy putting out security fires, but neglecting people’s growth and well-being is a big miss. Be authentic and vulnerable, walk the talk. Share the real stories, including failures and what made you stronger. Too often, people focus only on titles, certifications, and tech skills.


Data-Driven Enterprise: How Companies Turn Data into Strategic Advantage

A data-driven enterprise is not defined by the number of dashboards or analytics tools it owns. It’s defined by its ability to turn raw information into intelligent action. True data-driven organizations embed data thinking into every level of decision-making, from boardroom strategy to day-to-day operations. ... A modern data architecture is not a single platform, but an interconnected ecosystem designed to balance agility, governance, and scalability. ... As organizations mature in their data journey, they are moving away from rigid, centralized models that rely on a single source of truth. While centralization once ensured control, it often created bottlenecks, slowing down innovation and limiting agility. ... We are entering an era of data agents: self-learning systems capable of autonomously detecting anomalies, assessing risks, and forecasting trends in real time. These intelligent agents will soon become the invisible workforce of the enterprise, operating across domains: predicting supply chain disruptions, optimizing IT performance, personalizing customer journeys, and ensuring compliance through continuous monitoring. Their actions will reshape not only operations but also how organizations think about governance, accountability, and human oversight. For architects, this shift represents both a challenge and an extraordinary opportunity. The role is evolving from that of a data custodian focused on structure and governance to an ecosystem designer who engineers environments where data and AI can coexist, learn, and continuously create value.


10 benefits of an optimized third-party IT services portfolio

By entrusting day-to-day IT operations to trusted providers, organizations can reallocate internal resources toward higher-value initiatives such as digital transformation, automation, and product innovation. This accelerates adoption of emerging technologies and allows internal teams to deepen business expertise, strengthen cross-functional collaboration, and focus on driving growth where it matters most. ... A well-structured third-party IT services portfolio can provide flexibility to scale up or down based on business needs. This is particularly valuable for CEOs who need to adapt to changing market conditions and seize growth opportunities. Securing talent in the market today is challenging and time-consuming, so tapping into the talent pools of your strategic IT services partner base allows organizations to leverage that bench strength to fill immediate talent needs. ... IT service providers continuously invest in advanced tech and talent development, enabling clients to benefit from cutting-edge innovations without bearing the full cost of adoption. As AI, automation, and cybersecurity evolve, providers offer the subject matter expertise and tools organizations need to stay ahead of disruption. ... With operational stability ensured through a balance of internal talent and trusted third parties, CIOs can dedicate more focus to long-term strategic initiatives that fuel growth and innovation.


Modernizing SOCs with Agentic AI and Human-in-the-Loop: A Guide for CISOs

Traditional SOCs were not built for today’s speed and scale. Alert fatigue, manual investigations, disconnected tools, and talent shortages all contribute to the operational drag. Many security leaders are stuck in a reactive loop with no clear path to improvement. ... Legacy SOCs rely heavily on outdated technologies and rule-based detection, generating high volumes of alerts, many of which are false positives, leading to analyst burnout. Analysts are compelled to manually inspect and triage a deluge of meaningless signals, making the entire effort unsustainable. ... Before transformation can happen, one needs to understand where one stands. This can be accomplished with key benchmarking metrics for SOC performance, such as MTTD (Mean time to detect), MTTR (Mean time to respond), case closure rates, and tool effectiveness. ... Agentic AI represents the next evolution of AI-powered cybersecurity, which is modular, explainable, and autonomous. Through a coordinated system of AI agents, the Agentic SOC continuously responds and adapts to the evolving security environment in real time. It is designed to accelerate threat detection, investigation, and response by 10x, bringing speed, precision, and clarity to every function of SecOps. Agentic AI is the technology shift that changes the game. Unlike traditional automation, Agentic AI is decision-oriented, self-improving, and always operating with human-in-the-loop for oversight.
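The benchmarking step above is straightforward to operationalize. As a minimal sketch with hypothetical incident data: MTTD is the mean gap from compromise to detection, and MTTR the mean gap from detection to resolution.

```python
# Compute MTTD and MTTR from incident timestamps (illustrative data).
from datetime import datetime

incidents = [
    {"compromised": "2025-11-01 02:00", "detected": "2025-11-01 08:00",
     "resolved": "2025-11-01 11:00"},
    {"compromised": "2025-11-05 14:00", "detected": "2025-11-05 16:00",
     "resolved": "2025-11-06 02:00"},
]

def _hours(start, end):
    """Elapsed hours between two 'YYYY-MM-DD HH:MM' timestamps."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

mttd = sum(_hours(i["compromised"], i["detected"]) for i in incidents) / len(incidents)
mttr = sum(_hours(i["detected"], i["resolved"]) for i in incidents) / len(incidents)
print(f"MTTD: {mttd:.1f}h, MTTR: {mttr:.1f}h")  # MTTD: 4.0h, MTTR: 6.5h
```

Tracked over time, these two numbers give the "before" baseline against which any Agentic SOC investment can honestly be measured.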


3 SOC Challenges You Need to Solve Before 2026

2026 will mark a pivotal shift in cybersecurity. Threat actors are moving from experimenting with AI to making it their primary weapon, using it to scale attacks, automate reconnaissance, and craft hyper-realistic social engineering campaigns. ... Attackers have mastered evasion. ClickFix campaigns trick employees into pasting malicious PowerShell commands themselves. LOLBins are abused to hide malicious behavior. Multi-stage phishing hides behind QR codes, CAPTCHAs, rewritten URLs, and fake installers. Traditional sandboxes stall because they can't click "Next," solve challenges, or follow human-dependent flows. The result? Low detection rates for the exact threats exploding in 2025 and beyond. ... Thousands of daily alerts, mostly false positives. An average SOC handles 11,000 alerts daily, with only 19% worth investigating, according to the 2024 SANS SOC Survey. Tier 1 analysts drown in noise, escalating everything because they lack context. Every alert becomes a research project. Every investigation starts from zero. Burnout hits hard. Turnover doubles, morale tanks, and real threats hide in the backlog. By 2026, AI-orchestrated attacks will flood systems even faster, turning alert fatigue into a full-blown crisis. ... From a financial leadership perspective, security spending often feels like a black hole: money is spent, but risk reduction is hard to quantify. SOCs are challenged to justify investments, especially when security teams seem to be a cost center without clear profit or business-driving impact.


Digital surveillance tools are reshaping workplace privacy, GAO warns

Privacy concerns intensify when surveillance data feeds into automated systems that evaluate performance, set productivity metrics, or flag workers for potential discipline. GAO found that employers often rely on flawed benchmarks and incomplete measurements. Tools rarely capture the full range of work performed, such as research, mentoring, reading, or off-screen tasks, and frequently misinterpret normal behavior as inefficiency. When employers trust these tools “at face value,” the report notes, workers can be unfairly labeled unproductive or noncompliant despite doing their jobs well. ... Meanwhile, past federal efforts to issue guidance on reducing surveillance-related harms, such as transparency practices, human oversight, and safeguards against discriminatory impacts, have been rescinded or paused since January by the Trump administration as agencies reassess their policy priorities. GAO also notes that existing federal privacy protections are narrow. The Electronic Communications Privacy Act restricts covert interception of communications, but it does not cover most forms of digital monitoring, such as keystroke logging, location tracking, biometric data collection, or algorithmic productivity scoring. ... The report concludes that while digital surveillance can improve safety, efficiency, and health monitoring, its benefits depend wholly on how employers use it.


How to avoid becoming an “AI-first” company with zero real AI usage

A competitor declares they’re going AI-first. Another publishes a case study about replacing support with LLMs. And a third shares a graph showing productivity gains. Within days, boardrooms everywhere start echoing the same message: “We should be doing this. Everyone else already is, and we can’t fall behind.” So the work begins. Then come the task forces, the town halls, the strategy docs and the targets. Teams are asked to contribute initiatives. But if you’ve been through this before, you know there’s often a difference between what companies announce and what they actually do. Because press releases don’t mention the pilots that stall, or the teams that quietly revert to the old way, or even the tools that get used once and abandoned. ... By then, your company’s AI-first mandate will have set into motion departmental initiatives, vendor contracts and maybe even some new hires with “AI” in their titles. The dashboards will be green, and the board deck will have a whole slide on AI. But in the quiet spaces where your actual work happens, what will have meaningfully changed? Maybe you’ll be like the teams that never stopped their quiet experiments. ... That’s the invisible architecture of genuine progress: patient, and completely uninterested in performance. It doesn’t make for great LinkedIn posts, and it resists grand narratives. But it transforms companies in ways that truly last. Every organization is standing at the same crossroads right now: look like you’re innovating, or create a culture that fosters real innovation.

Daily Tech Digest - November 23, 2025


Quote for the day:

“Let no feeling of discouragement prey upon you, and in the end you are sure to succeed.” -- Abraham Lincoln



Lean4: How the theorem prover works and why it's the new competitive edge in AI

Lean4 is both a programming language and a proof assistant designed for formal verification. Every theorem or program written in Lean4 must pass a strict type-checking by Lean’s trusted kernel, yielding a binary verdict: A statement either checks out as correct or it doesn’t. This all-or-nothing verification means there’s no room for ambiguity – a property or result is proven true or it fails. ... Lean4’s value isn’t confined to pure reasoning tasks; it’s also poised to revolutionize software security and reliability in the age of AI. Bugs and vulnerabilities in software are essentially small logic errors that slip through human testing. What if AI-assisted programming could eliminate those by using Lean4 to verify code correctness? ... Beyond software bugs, Lean4 can encode and verify domain-specific safety rules. For instance, consider AI systems that design engineering projects. A LessWrong forum discussion on AI safety gives the example of bridge design: An AI could propose a bridge structure, and formal systems like Lean can certify that the design obeys all the mechanical engineering safety criteria. ... For enterprise decision-makers, the message is clear: It’s time to watch this space closely. Incorporating formal verification via Lean4 could become a competitive advantage in delivering AI products that customers and regulators trust. We are witnessing the early steps of AI’s evolution from an intuitive apprentice to a formally validated expert. 
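The binary verdict described above can be seen in a two-line example. The theorem name below is our own; `Nat.add_comm` is part of Lean 4's core library.

```lean
-- The all-or-nothing property in miniature: Lean's kernel either accepts
-- a proof or rejects the file. There is no "mostly correct".
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A trivially decidable claim can be closed by reflexivity:
example : 2 + 2 = 4 := rfl

-- A false claim simply fails to type-check; uncommenting this line
-- makes the whole file rejected:
-- example (a : Nat) : a + 1 = a := rfl
```

This is the property that makes Lean4 attractive as a verifier for AI-generated reasoning: a model's proposed proof either passes the trusted kernel or it does not, with no room for a plausible-sounding but invalid argument.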


How pairing SAST with AI dramatically reduces false positives in code security

In our opinion, the path to next-generation code security is not choosing one over the other, but integrating their strengths. So, along with Kiarash Ahi, founder of Virelya Intelligence Research Labs and co-author of the framework, I decided to do exactly that. Our novel hybrid framework combines the deterministic rigor and speed of traditional SAST with the contextual reasoning of a fine-tuned LLM to deliver a system that doesn’t just find vulnerabilities, but also validates them. ... The framework embeds the relevant code snippet, the data flow path and surrounding contextual information into a structured JSON prompt for a fine-tuned LLM. We fine-tuned Llama 3 8B on a high-quality dataset of vetted false positives and true vulnerabilities, specifically covering major flaw categories like those in the OWASP Top 10, to form the core of the intelligent triage layer. Based on the security issue flagged, the prompt then asks a clear, focused question, such as, “Does this user input lead to an exploitable SQL injection?” ... A SAST and LLM synergy marks a necessary evolution in static code security. By integrating deterministic analysis with intelligent, context-aware reasoning, we can finally move past the false-positive crisis and equip developers with a tool that provides high-signal security feedback at the pace of modern development with LLMs.
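The prompt-construction step might look something like the following sketch. The field names, helper function, and example finding are my assumptions; the article does not publish the framework's exact schema:

```python
import json

def build_triage_prompt(snippet: str, data_flow: list[str],
                        context: str, question: str) -> str:
    """Pack a SAST finding into a structured JSON prompt for the LLM
    triage layer. All field names here are illustrative, not the
    framework's actual schema."""
    payload = {
        "code_snippet": snippet,
        "data_flow_path": data_flow,
        "context": context,
        "question": question,
    }
    return json.dumps(payload, indent=2)

# Hypothetical finding: string-concatenated SQL flagged by the SAST stage.
prompt = build_triage_prompt(
    'query = "SELECT * FROM users WHERE id = " + user_id',
    ["request.args['id']", "user_id", "query"],
    "Flask route handler, no parameterization observed",
    "Does this user input lead to an exploitable SQL injection?",
)
```

The resulting string would be sent to the fine-tuned model, whose yes/no judgment (with rationale) forms the validation step the authors describe.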


Quantum Progress Demands Manufacturing Revolution, Martinis Says

Quantum computing’s next breakthroughs will come from factories, not physics labs, according to John Martinis ... He argued that a general-purpose quantum computer will require at least a million physical qubits, a number that is far beyond today’s devices and out of reach without a fundamental shift in how the hardware is built. ... Current machines rely on dense tangles of wires, components and cooling structures that dwarf the tiny chip at the bottom of the machine. He writes that “The complexity of the plumbing completely overwhelms the quantum device itself.” Martinis said the solution is to abandon today’s hand-built, research-lab approach and move to fully integrated chips similar to the transformation that turned 1960s mainframes into the microchips inside smartphones. The field, he argued, must invest in cryogenic integrated circuits that can operate at the ultra-low temperatures required for superconducting qubits. Using that approach, Martinis suggests that engineers could place about 20,000 qubits on a single wafer and reach the million-qubit scale by linking wafers together. That level of integration would also require abandoning manufacturing methods that date back more than half a century. He singled out the “lift-off” process still used in many quantum labs as too dirty and too limited for industrial-scale production.


Dream of quantum internet inches closer after breakthrough helps beam information over fiber-optic networks

"By demonstrating the versatility of these erbium molecular qubits, we're taking another step toward scalable quantum networks that can plug directly into today's optical infrastructure,” David Awschalom, the study's principal investigator and a professor of molecular engineering and physics at the University of Chicago, said in the statement. ... That's largely where the comparison ends, though. Whereas classical bits compute in binary 1s and 0s, qubits behave according to the weird rules of quantum physics, allowing them to exist in multiple states at once — a property known as superposition. A pair of qubits could, therefore, be 0-0, 0-1, 1-0 and 1-1 simultaneously. Qubits typically come in three forms: superconducting qubits, which are made from tiny electrical circuits; trapped ion qubits, which store information in charged atoms held in place by electromagnetic fields; and photonic qubits, which encode quantum states in particles of light. ... Operating at telecom wavelengths provides two key advantages, the first being that signals can travel long distances with minimal loss — vital for transmitting quantum data across fiber networks. The second is that light at fiber-optic wavelengths passes easily through silicon. If it didn't, any data encoded in the optical signal would be absorbed and lost. Because the optical signal can pass through silicon to detectors or other photonic components embedded beneath, the erbium-based qubit is ideal for chip-based hardware, the researchers said.


AWS Outage Fallout: Lessons In Resilience

The impact of the AWS outage has led to multiple warnings about the issues when relying on one cloud provider. But experts warn it’s important to keep in mind that moving to multi-cloud can also cause problems. Multi-cloud is “not the default answer,” says Ryan Gracey, partner and technology lawyer at law firm Gordons. “For a few crown jewel services, splitting across providers can reduce single-supplier risk and satisfy regulators, but it also raises cost and complexity, and opens new ways to fail. Chasing a lowest common denominator setup often means giving up the very features that make cloud attractive.” ... The takeaway from the latest outage is not just to buy more redundancy, says Gracey. “It’s about designing systems that bend, not break. They should slow down gracefully, drop non-essential features and protect the most important customer tasks when things go wrong. A part of this is running drills so teams know who decides what actions to take, what to say to customers and what to do first.” For the cloud service provider, it’s important to recognise where a potential single point of failure – or “race condition” in the case of AWS – may exist, says Jones. “AWS will be looking at its architecture to ensure single points of failure are eliminated and the potential blast radius of any incident is dramatically reduced.” Maintaining operations during outages requires “architectural and operational preparation,” says Nazir.
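Gracey's "bend, not break" advice can be sketched as a simple feature-shedding gate. This is a hypothetical illustration of graceful degradation, not a pattern taken from AWS or Gordons:

```python
# When a downstream dependency is degraded, serve only the features that
# protect the most important customer tasks and shed the rest.
ESSENTIAL = {"checkout", "login"}  # assumed "crown jewel" features

def features_to_serve(requested: set[str], dependency_healthy: bool) -> set[str]:
    """Return the full feature set when healthy; under degradation,
    intersect with the essential set so non-critical features are
    dropped rather than the whole service failing."""
    if dependency_healthy:
        return requested
    return requested & ESSENTIAL
```

Real implementations hang this logic off health checks and feature flags, and, as the article notes, teams should drill the failover path rather than discover it during an outage.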


AI Is Not Just a Tool

At some point in every panel, someone leans into the microphone and says it: “AI is just a tool, like a camera.” It’s meant to end the argument, a warm blanket for anxious minds. Art survived photography; we’ll survive this. But it is wrong. A camera points at the world and harvests what’s already there. A modern AI system points at us and proposes a world — filling gaps, making claims, deciding what should come next. That difference is not semantics. It’s jurisdiction. ... A photo is protectable because a human author made it. Purely AI-generated material, absent sufficient human control, isn’t. The law refuses to pretend the prompt is the picture. That alone should retire the analogy. That doesn’t mean the output is “authorless”; it means the law refuses to pretend the user’s prompt equals human creative control. Cameras yield photographs authored by people; models yield artifacts whose legal status relies on the extent to which a human actually contributed. Different authorship rules = different things. ... The model is not a person, but it isn’t an empty pipe. It embodies choices that will be made (over and over) at human scale, with the same confidence we misread as competence. That’s why generative AI feels creative without being human. It performs composition: not presence, but pattern. It produces objects that look like testimony. Cameras can lie (through framing), but models conjecture. They create the very thing we then argue about.


Are Small Businesses at Risk by Outsourcing Parts of Their Operations?

When you outsource a function or department, you're doing more than simply delegating tasks. Every third-party vendor, managed service provider, virtual assistant, or consultant who requires access to your critical systems carries an element of risk; each is a potential entry point into your business. ... Some organizations are bound by specific, stringent regulatory frameworks and standards, depending on their sector(s) of operation. Some remote-working IT or marketing contractors may not be subject to the same data privacy laws that govern your organization, for example. Similarly, an HR outsourcing provider may store employee information in cloud servers that are deemed security-compliant in some jurisdictions but not in others. These compliance gaps create additional security vulnerabilities that threat actors would actively exploit without hesitation if the opportunity arose. ... As AI becomes more ingrained in business operations, the process of outsourcing becomes increasingly gray. According to recent statistics, more than half of businesses have experienced AI-related security vulnerabilities. What's more, cybercriminals are harnessing generative AI technology to escalate and amplify their attacks. ... The biggest danger that SMBs face when outsourcing is the assumption that someone else is now responsible for upholding security standards.


Why AI Integration in DevOps is so Important

Traditional DevOps pipelines rely heavily on automated testing and monitoring. The drawback is that they often lack the machine intelligence needed to recognize new or evolving threats. AI addresses this gap by introducing learning-based security systems capable of real-time behavioral analysis. Instead of waiting for known vulnerabilities to appear or be actively exploited, these systems recognize precursor behavior and suspicious code activity, and engineers are alerted before an incident occurs. Within DevOps, AI is able to fortify each stage of the process: reviewing commits for suspicious or vulnerable code, monitoring container environment integrity and evaluating system logs for anomalies that may have escaped real-time recognition. Insights like these help teams locate weak spots and reduce the impact of human error over time. ... AI integration with existing CI/CD workflows gives DevOps teams real-time visibility into security risks. AI-powered automated scanners analyze components automatically. Source code, dependencies and container images are all scanned for hidden vulnerabilities before the build phase is complete. This helps identify issues that could otherwise slip through manual reviews. AI-driven monitoring tools also track activity across the entire delivery pipeline, identifying potential attacks such as credential theft, code injection or dependency poisoning. As these tools learn over time, they adapt to new threat behaviors that traditional scanners might overlook.
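As a toy stand-in for the log-anomaly evaluation described above, a simple statistical check on hourly log volume shows the shape of the idea. Real learning-based systems model far richer features than a z-score; this is purely illustrative:

```python
from statistics import mean, pstdev

def anomalous_hours(counts_per_hour: list[int], threshold: float = 3.0) -> list[int]:
    """Flag hours whose log-event volume deviates sharply from the
    baseline. A z-score against the mean is the crudest possible
    'behavioral model'; it stands in for the adaptive detectors the
    article describes."""
    mu, sigma = mean(counts_per_hour), pstdev(counts_per_hour)
    if sigma == 0:
        return []  # perfectly uniform traffic: nothing to flag
    return [i for i, c in enumerate(counts_per_hour)
            if abs(c - mu) / sigma > threshold]

# A sudden spike in the final hour stands out against a flat baseline.
spikes = anomalous_hours([10] * 23 + [500])
```

A pipeline step like this would run after deployment, feeding flagged windows to an engineer or a richer model for triage.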


NTT: How Japan Leads in Cybersecurity Amid Rising Threats

The Active Cyber Defense Law passed in May 2025 is intended to minimise the damage caused by substantive cyberattacks that can compromise national security, while Japan has also established new requirements for critical infrastructure companies to enhance their cybersecurity practices under the revised Economic Security Promotion Act. ... Gen AI has lowered the bar for adversaries to launch cyberattacks, meaning defenders have no choice but to at least partially automate their tasks, including log and phishing analysis, threat detection, behavioural analysis and incident report drafting. This is crucial for defenders who are overwhelmed by ever-increasing, around-the-clock work, and it helps minimize burnout risk. ... As Japanese companies increasingly expand their businesses globally, multiple firms have reported their overseas subsidiaries being hit by ransomware attacks in the United States, Vietnam, Thailand, Singapore and Taiwan. To manage supply chain risks and ensure business continuity, it is becoming more crucial than ever to ensure global governance in cybersecurity and maintain proper data backups, the principle of least privilege and network segmentation. Surprisingly, Japan has the lowest ransomware infection ratio among 15 major countries, including the United States, the United Kingdom, France and Germany.


From Data Bottlenecks to Data Products: Building for Speed and Scale

As it stands now, the central data team oversees data quality only at the final stage, a process that is not working. It fails because the domain teams who create the data are the only ones with the full context needed to ensure accuracy and integrity. If businesses shift left with their approach, app developers themselves will take responsibility for the data created by their applications. By giving the producer ownership of quality, ongoing issues can be stopped before they trickle down into data dashboards or machine-learning models. Ultimately, this is more than just a technical change. Shifting left is a culture change that moves toward Data Mesh principles. By embedding ownership and quality within the domains that produce and use data, organisations replace central gatekeeping with shared accountability. Each domain becomes a creator and protector of reliable data, ensuring governance is built in from the start rather than enforced later. ... Understandably, giving ownership of data to the teams creating it may seem chaotic. But it isn't about losing control; rather, it is about giving teams the freedom and tools to work faster and smarter. At the end stands the lighthouse vision of a self-service data platform where every consumer can independently generate insights for standard questions and only reach out for support when tackling more advanced analyses.
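In practice, shifting left can be as simple as the producing team validating every record against a data contract before publishing it downstream. The contract and field names below are hypothetical, invented for illustration:

```python
# A minimal producer-side data contract: field name -> expected type.
# The schema is made up; real contracts also cover ranges, nullability,
# freshness and ownership metadata.
CONTRACT = {"order_id": str, "amount": float, "currency": str}

def validate(record: dict) -> list[str]:
    """Return a list of contract violations; an empty list means the
    record may be published. Run by the domain team that creates the
    data, not by a central gatekeeper at the final stage."""
    errors = []
    for field, ftype in CONTRACT.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            errors.append(f"wrong type for {field}")
    return errors
```

Wiring such a check into the producing application's CI or publish path is what stops quality issues before they reach dashboards or models.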

Daily Tech Digest - January 09, 2025

It’s remarkably easy to inject new medical misinformation into LLMs

By injecting specific information into this training set, it's possible to get the resulting LLM to treat that information as a fact when it's put to use. This can be used for biasing the answers returned. This doesn't even require access to the LLM itself; it simply requires placing the desired information somewhere where it will be picked up and incorporated into the training data. And that can be as simple as placing a document on the web. As one manuscript on the topic suggested, "a pharmaceutical company wants to push a particular drug for all kinds of pain which will only need to release a few targeted documents in [the] web." ... rather than being trained on curated medical knowledge, these models are typically trained on the entire Internet, which contains no shortage of bad medical information. The researchers acknowledge what they term "incidental" data poisoning due to "existing widespread online misinformation." But a lot of that "incidental" information was generally produced intentionally, as part of a medical scam or to further a political agenda. ... Finally, the team notes that even the best human-curated data sources, like PubMed, also suffer from a misinformation problem. The medical research literature is filled with promising-looking ideas that never panned out, and out-of-date treatments and tests that have been replaced by approaches more solidly based on evidence.


CIOs are rethinking how they use public cloud services. Here’s why.

Where are those workloads going? “There’s a renewed focus on on-premises, on-premises private cloud, or hosted private cloud versus public cloud, especially as data-heavy workloads such as generative AI have started to push cloud spend up astronomically,” adds Woo. “By moving applications back on premises, or using on-premises or hosted private cloud services, CIOs can avoid multi-tenancy while ensuring data privacy.” That’s one reason why Forrester predicts four out of five so-called cloud leaders will increase their investments in private cloud by 20% this year. That said, 2025 is not just about repatriation. “Private cloud investment is increasing due to gen AI, costs, sovereignty issues, and performance requirements, but public cloud investment is also increasing because of more adoption, generative AI services, lower infrastructure footprint, access to new infrastructure, and so on,” Woo says. ... Woo adds that public cloud is costly for workloads that are data-heavy because organizations are charged both for data stored and data transferred between availability zones (AZ), regions, and clouds. Vendors also charge egress fees for data leaving as well as data entering a given AZ. “So for transfers between AZs, you essentially get charged twice, and those hidden transfer fees can really rack up,” she says.
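The "charged twice" point is easy to see with back-of-the-envelope arithmetic. The rate below is a made-up placeholder, not an actual cloud list price:

```python
def cross_az_cost(gb_transferred: float, rate_per_gb_per_direction: float) -> float:
    """Cross-AZ traffic is billed on both the sending and receiving
    side, so the effective rate is double the per-direction rate.
    Rates here are hypothetical, for illustration only."""
    return gb_transferred * rate_per_gb_per_direction * 2

# 10 TB moved between AZs at an assumed $0.01/GB each direction:
monthly = cross_az_cost(10_000, 0.01)  # 200.0 dollars, vs the 100.0
                                       # a single-direction charge suggests
```

Chatty architectures that shuttle data between zones many times multiply this further, which is how the "hidden" fees rack up.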


What CISOs Think About GenAI

“As a [CISO], I view this technology as presenting more risks than benefits without proper safeguards,” says Harold Rivas, CISO at global cybersecurity company Trellix. “Several companies have poorly adopted the technology in the hopes of promoting their products as innovative, but the technology itself has continued to impress me with its staggeringly rapid evolution.” However, hallucinations can get in the way. Rivas recommends conducting experiments in controlled environments and implementing guardrails for GenAI adoption. Without them, companies can fall victim to high-profile cyber incidents like they did when first adopting cloud. Dev Nag, CEO of support automation company QueryPal, says he had initial, well-founded concerns around data privacy and control, but the landscape has matured significantly in the past year. “The emergence of edge AI solutions, on-device inference capabilities, and private LLM deployments has fundamentally changed our risk calculation. Where we once had to choose between functionality and data privacy, we can now deploy models that never send sensitive data outside our control boundary,” says Nag. “We're running quantized open-source models within our own infrastructure, which gives us both predictable performance and complete data sovereignty.”


Scaling RAG with RAGOps and agents

To maximize their effectiveness, LLMs that use RAG also need to be connected to sources from which departments wish to pull data – think customer service platforms, content management systems and HR systems, etc. Such integrations require significant technical expertise, including experience with mapping data and managing APIs. Also, as RAG models are deployed at scale they can consume significant computational resources and generate large amounts of data. This requires the right infrastructure as well as the experience to deploy it, as well as the ability to manage data it supports across large organizations. One approach to mainstreaming RAG that has AI experts buzzing is RAGOps, a methodology that helps automate RAG workflows, models and interfaces in a way that ensures consistency while reducing complexity. RAGOps enables data scientists and engineers to automate data ingestion and model training, as well as inferencing. It also addresses the scalability stumbling block by providing mechanisms for load balancing and distributed computing across the infrastructure stack. Monitoring and analytics are executed throughout every stage of RAG pipelines to help continuously refine and improve models and operations.
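At its core, the retrieval step of a RAG pipeline just scores candidate documents against a query and passes the best matches to the LLM. The toy keyword-overlap scorer below shows the shape of that step; production RAG systems use embeddings and vector indexes instead, and RAGOps wraps this step with automated ingestion, monitoring and load balancing:

```python
# Minimal illustrative retrieval: rank documents by how many query
# terms they share. Purely a sketch of the pipeline stage, not a
# production retriever.
def retrieve(query: str, docs: list[str]) -> str:
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

# Hypothetical knowledge-base snippets from a customer service platform:
docs = [
    "Refund policy: refunds are issued within 14 days.",
    "Shipping times vary by region.",
]
best = retrieve("how long do refunds take", docs)
```

The retrieved text is then injected into the LLM prompt as grounding context; everything RAGOps automates (ingestion, inference, monitoring) sits around this core loop.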


Navigating Third-Party Risk in Procurement Outsourcing

Shockingly, only 57% of organisations have enterprise-wide agreements that clearly define which services can or cannot be outsourced. This glaring gap highlights the urgent need to create strong frameworks – not just for external agreements, but also for intragroup arrangements. Internal agreements, though frequently overlooked, demand the same level of attention when it comes to governance and control. Without these solid frameworks, companies are leaving themselves exposed to risks that could have been mitigated with just a little more attention to detail. Ongoing monitoring is also crucial to TPRM; organisations must actively leverage audit rights, access provisions and outcome-focused evaluations. This means assessing operational and concentration risks through severe yet plausible scenarios, ensuring they’re prepared for the worst-case while staying vigilant in everyday operations. ... As the complexity of third-party risk grows, so too does the role of AI and automation. The days of relying on spreadsheets and homegrown databases are long gone. Ed’s thoughts on this topic are unequivocal: “AI and automation are critical as third-party risk becomes increasingly complex. Significant work is required for initial risk assessments, pre-contract due diligence, post-contract monitoring, SLA reviews and offboarding.”


Five Ways Your Platform Engineering Journey Can Derail

Chernev’s first pitfall is when a company tries to start platform engineering by only changing the name of its current development practices, without doing the real work. “Simply rebranding an existing infrastructure or DevOps or SRE practice over to platform engineering without really accounting for evolving the culture within and outside the team to be product-oriented or focused” is a huge mistake ... Another major pitfall, he said, is not having and maintaining product backlogs — prioritized lists of work for the development team — that are directly targeting your developers. “For the groups who have backlogs, they are usually technology-oriented,” he said. “That misalignment in thinking across planning and missing feedback loops is unlikely to move progress forward within the organization. That ultimately leads the initiative to fail to deliver business value. Instead, they should be developer-centric,” said Chernev. ... This is another important point, said Chernev — companies that do not clearly articulate the value-add of their platform engineering charter to both technical and non-technical stakeholders inside their operations will not fully be able to reap the benefits of the platform’s use across the business.


Building generative AI applications is too hard, developers say

Given the number of tools they need to do their job, it’s no surprise that developers are loath to spend a lot of time adding another to their arsenal. Two-thirds of them are only willing to invest two hours or less in learning a new AI development tool, with a further 22% allocating three to five hours, and only 11% giving more than five hours to the task. And on the whole, they don’t tend to explore new tools very often — only 21% said they check out new tools monthly, while 78% do so once every one to six months, and the remaining 2% rarely or never. The survey found that they tend to look at around six new tools each time. ... The survey highlights the fact that, while AI and generative AI are becoming increasingly important to businesses, the tools and techniques required to develop them are not keeping up. “Our survey results shed light on what we can do to help address the complexity of AI development, as well as some tools that are already helping,” Gunnar noted. “First, given the pace of change in the generative AI landscape, we know that developers crave tools that are easy to master.” And, she added, “when it comes to developer productivity, the survey found widespread adoption and significant time savings from the use of AI-powered coding tools.”


AI infrastructure – The value creation battleground

Scaling AI infrastructure isn’t just about adding more GPUs or building larger data centers – it’s about solving fundamental bottlenecks in power, latency, and reliability while rethinking how intelligence is deployed. AI mega clusters are engineering marvels – data centers capable of housing hundreds of thousands of GPUs and consuming gigawatts of power. These clusters are optimized for machine learning workloads with advanced cooling systems and networking architectures designed for reliability at scale. Consider Microsoft’s Arizona facility for OpenAI: with plans to scale up to 1.5 gigawatts across multiple sites, it demonstrates how these clusters are not just technical achievements but strategic assets. By decentralizing compute across multiple data centers connected via high-speed networks, companies like Google are pioneering asynchronous training methods to overcome physical limitations such as power delivery and network bandwidth. Scaling AI is an energy challenge. AI workloads already account for a growing share of global data center power demand, which is projected to double by 2026. This creates immense pressure on energy grids and raises urgent questions about sustainability.


4 Leadership Strategies For Managing Teams In The Metaverse

Leaders must develop new skills and adopt innovative strategies to thrive in the metaverse. Here are some key approaches:

Invest in digital literacy — Leaders must become fluent in the tools and technologies that power the metaverse. This includes understanding VR/AR platforms, blockchain applications and collaborative software such as Slack, Trello and Figma.

Emphasize inclusivity — The metaverse has the potential to democratize access to opportunities, but only if it’s designed with inclusivity in mind. Leaders should ensure that virtual spaces are accessible to employees of all abilities and backgrounds. This might include providing hardware like VR headsets or ensuring platforms support diverse communication styles.

Create rituals for connection — Leaders can foster connection through virtual rituals and gatherings in the absence of physical offices. These activities, from weekly team check-ins to informal virtual “watercooler” chats, help build camaraderie and maintain a sense of community.

Focus on well-being — Effective leaders prioritize employee well-being by setting clear boundaries, encouraging breaks and supporting mental health.


How AI will shape work in 2025 — and what companies should do now

“The future workforce will likely collaborate more closely with AI tools. For example, marketers are already using AI to create more personalized content, and coders are leveraging AI-powered code copilots. The workforce will need to adapt to working alongside AI, figuring out how to make the most of human strengths and AI’s capabilities. “AI can also be a brainstorming partner for professionals, enhancing creativity by generating new ideas and providing insights from vast datasets. Human roles will increasingly focus on strategic thinking, decision-making, and emotional intelligence. ... “Companies should focus on long-term strategy, quality data, clear objectives, and careful integration into existing systems. Start small, scale gradually, and build a dedicated team to implement, manage, and optimize AI solutions. It’s also important to invest in employee training to ensure the workforce is prepared to use AI systems effectively. “Business leaders also need to understand how their data is organized and scattered across the business. It may take time to reorganize existing data silos and pinpoint the priority datasets. To create or effectively implement well-trained models, businesses need to ensure their data is organized and prioritized correctly.



Quote for the day:

"The world is starving for original and decisive leadership." -- Bryant McGill

Daily Tech Digest - December 08, 2024

Here’s the one thing you should never outsource to an AI model

One of the biggest dangers in letting AI take the reins of your product ideation process is that AI processes content — be it designs, solutions or technical configurations — in ways that lead to convergence rather than divergence. Given the overlapping bases of training data, AI-driven R&D will result in homogenized products across the market. Yes, different flavors of the same concept, but still the same concept. Imagine this: Four of your competitors implement gen AI systems to design their phones’ user interfaces (UIs). Each system is trained on more or less the same corpus of information — data scraped from the web about consumer preferences, existing designs, bestseller products and so on. What do all those AI systems produce? Variations of a similar result. What you’ll see develop over time is a disturbing visual and conceptual cohesion where rival products start mirroring one another. ... In platforms like ArtStation, many artists have raised concerns regarding the influx of AI-produced content that, instead of showing unique human creativity, feels like recycled aesthetics remixing popular cultural references, broad visual tropes and styles. This is not the cutting-edge innovation you want powering your R&D engine.


How much capacity is in aging data centers?

Individual data centers have considerable differences between them, and one of the most critical is their size. With this weighting factor, the average moves — but not by much. The “average megawatt” is 10.2 years old. Whereas older data centers (10-plus years) represent 48 percent of the survey sample, they contain 38 percent of the total IT capacity — still a large minority. Interestingly, a more dramatic shift occurs within the population of data centers that have been operating for less than 10 years — well within the typical design lifespan. By facility count alone, there is an even split between the data centers that are one to five years old and those that have been in operation for six to ten years. But when measuring in megawatts, the newest data centers hold significantly more capacity (38 percent) than those with six to ten years of service. This is intuitive; in the past five years, some data center projects have reached unprecedented sizes. Very recent builds are overshadowing the capacity of data centers that are only slightly older, even though the designs are not dramatically different. However, the weighted figures above suggest that even this massive build-out has not yet overcome the moderating influence of much older, potentially less efficient facilities.
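The capacity-weighted "average megawatt" works like the sketch below. The site ages and capacities are invented for illustration; only the method mirrors the survey's:

```python
def weighted_avg_age(sites: list[tuple[float, float]]) -> float:
    """sites: (age_years, capacity_mw) pairs. Weighting each site's age
    by its IT capacity means a handful of very large new builds pulls
    the 'average megawatt' age below the simple per-facility average."""
    total_mw = sum(mw for _, mw in sites)
    return sum(age * mw for age, mw in sites) / total_mw

# Hypothetical fleet: one big recent build dominates the capacity.
avg = weighted_avg_age([(3, 50.0), (8, 20.0), (15, 10.0)])
```

In this invented fleet the unweighted mean age is about 8.7 years, while the capacity-weighted figure is 5.75, the same direction of shift the article reports between the 10.2-year weighted average and the facility-count breakdown.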


Generative AI is making traditional ways to measure business success obsolete

Often touted as the “iron triangle” from the perspective of operational efficiency, this equation implies that, in order to attain a degree of quality, firms must balance cost with the time spent to achieve that level of quality. ... AI has upended this thinking, as firms can now achieve both speed and accuracy at the same time by leveraging AI. This can enhance productivity and drive innovation without losing out on quality. Likewise, through generative AI, smaller companies with fewer resources are able to rub shoulders and compete with larger firms using AI-powered tools. They can do this by streamlining operations, creating cost-effective marketing content and delivering personalised customer experiences. This can make existing businesses more efficient, competitive and creative. It can also lower the barriers to entry into markets for prospective small and medium-sized business owners. ... The UK government’s recent autumn budget included a number of tax rises that will hit businesses, especially some small and medium-sized enterprises (SMEs) that don’t have the financial buffers to weather severe economic challenges. Generative AI has reconfigured the Cost x Time = Quality formula and has enabled firms to do things both quickly and accurately without a trade-off.


UK Cyber Risks Are ‘Widely Underestimated,’ Warns Country’s Security Chief

“What has struck me more forcefully than anything else since taking the helm at the NCSC is the clearly widening gap between the exposure and threat we face, and the defences that are in place to protect us,” he said. “And what is equally clear to me is that we all need to increase the pace we are working at to keep ahead of our adversaries.”  ... Horne added that the guidance and frameworks drawn up by the NCSC are not widely used. Ultimately, businesses need to change their perspective on cyber security from a “necessary evil” or “compliance function” to “an integral part of achieving their purpose.” ... “The defence and resilience of critical infrastructure, supply chains, the public sector and our wider economy must improve” to protect against these nation-state threats, Horne said. Ian Birdsey, partner and cyber specialist at law firm Clyde & Co, told TechRepublic in an email: “The UK has increasingly become a target for hostile nations due to the redrawing of geopolitical battle lines and the rise in global conflicts in recent years. In turn, threat actors based in those territories are increasingly launching more severe and sophisticated cyberattacks on UK organisations, particularly within critical national infrastructure and its supply chain.”


5 JavaScript Libraries You Should Say Goodbye to in 2025

jQuery is the grandparent of modern JavaScript libraries, loved for its cross-browser support, simple DOM manipulation, and concise syntax. However, in 2025, it’s time to officially let go. Native JavaScript APIs and modern frameworks like React, Vue, and Angular have rendered jQuery’s core utilities obsolete: vanilla JavaScript now includes native methods such as querySelector, addEventListener, and fetch that conveniently provide the functionality we once relied on jQuery to deliver. Browser behavior has also largely standardized, making a cross-browser compatibility layer like jQuery redundant. On top of that, bundling jQuery into an application today adds unnecessary bloat, slowing down load times in an age when speed is king. ... Moment.js was the default date-handling library for a long time, celebrated for its ability to parse, validate, manipulate, and display dates. However, it is heavy and inflexible compared to newer alternatives, and it has been deprecated. Moment.js clocks in at around 66 KB (minified), a significant payload in an era where smaller bundle sizes mean faster performance and better UX.
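To make the migration concrete, here is a small, hedged sketch (runnable in Node 18+ or any modern browser) of native replacements for everyday Moment.js tasks. The locales and format options are illustrative choices for this example, not taken from the article:

```javascript
// Native replacements for common Moment.js tasks — no library needed.

// Formatting: Intl.DateTimeFormat replaces moment().format(...)
const date = new Date(Date.UTC(2025, 0, 15)); // months are 0-based
const formatted = new Intl.DateTimeFormat("en-GB", {
  day: "2-digit", month: "short", year: "numeric", timeZone: "UTC",
}).format(date);
console.log(formatted); // → "15 Jan 2025"

// Arithmetic: plain millisecond math replaces moment().add(7, 'days')
const oneWeekLater = new Date(date.getTime() + 7 * 24 * 60 * 60 * 1000);
console.log(oneWeekLater.toISOString().slice(0, 10)); // → "2025-01-22"

// Relative time: Intl.RelativeTimeFormat replaces moment().fromNow()
const rtf = new Intl.RelativeTimeFormat("en", { numeric: "auto" });
console.log(rtf.format(-3, "day")); // → "3 days ago"
```

For DOM work, document.querySelector and element.addEventListener likewise cover most jQuery selector and event patterns directly, with no dependency to ship.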


How media, publishing and entertainment organizations can master Data Governance in the age of AI

One of the reasons AI governance has proven to be such a challenging new discipline is that it’s so multifaceted. Tiankai explained that it comprises several key elements. Ownership and stewardship: AI models need ownership, and so does AI governance; the right people must be accountable for ensuring AI models are used in the right ways. Cross-functional decision-making: a cross-domain thinking and decision-making model is essential; one central function can’t make every AI-relevant governance decision, so you need ways to bring the accountable people together. Processes and metadata: teams must make their models explainable, so everyone can understand the quality of their outputs and the root causes of any negative outcomes. Technology enablement: technology must support governance frameworks and make them work at scale. This shows that AI governance requires a combination of people, process and technology change. The panel agreed that the ‘people’ element is the toughest to manage effectively. Nathalie Berdat, Head of Data and AI Governance at the BBC, explained some of the people-specific challenges she has encountered on the broadcaster’s AI governance journey.


5 ways to tell people what to do at work

Nick Woods, CIO of airport group MAG, said dialogue is the priority for any professional who wants to avoid ambiguity. "If you're telling somebody what to do, you're already in the wrong place," he said. "Success is about a coaching, conversational dialogue that you need to have that ultimately comes down to a handshake on, 'Are we clear on what's next?'" Woods told ZDNET that most management decisions involve an ongoing debate. He doesn't believe in being directive about outputs and telling people what they need to go and do. "I think I'm much more in a space of, 'Actually, I've hired good people. I'm going to allow you to go and tell me what we need to do, and then we're going to have a dialogue about it,'" he said. ... Niall Robinson, head of product innovation at the Met Office, said talented staff should be given space to express their creativity. "There's a temptation as a leader to tell people how to do stuff -- and that can be a trap," he said. Robinson told ZDNET that he focuses on avoiding that problem by trusting his staff to generate recommended actions. "A habit I've been trying to practice is to tell people what success looks like and then giving them the agency to describe the options to me because they're closer to many of the solutions. So, success is about giving people the power to advise me."


Navigating NextGen Enterprise Architecture with GenAI

GenAI can modernize technology architecture by facilitating the selection of optimal best-of-breed solutions based on deep analysis of diverse criteria. It offers tailored guidance aligned with business requirements as well as key capabilities such as scalability, resilience, and reversibility. This dynamic capacity adapts to evolving IT landscapes and business requirements, continuously refining recommendations as needs and the technological state of the art change. Moreover, GenAI accelerates in-house solution development by generating code snippets. It can produce functions, classes and other code segments in virtually any programming language, which improves efficiency and reduces manual coding effort. This capacity boosts developers’ productivity and allows teams to focus more on high-level design. It also helps ensure that generated code aligns with coding standards for maintainability, readability, collaboration, and consistency. GenAI has remarkable advantages, but it also faces some major challenges. One of them is sustainability, an increasingly important consideration in technology adoption. In fact, many enterprises include this criterion in their technology architecture principles and assess it when selecting a new solution to enhance their IT landscape.


The 7 R's of cloud migration: How to choose the right method

The R's model isn't new, but it has evolved significantly over the years. Its genesis is usually attributed to Gartner, which came up with the 5 R's model back in 2010. The original five were rehost, refactor, revise, rebuild and replace. As the cloud continued to evolve and more diverse workloads were being migrated to it, AWS added a sixth R -- retire -- and eventually a seventh, retain. This seventh R is effectively an acknowledgment that not all workloads are suited to being hosted in the cloud. ... Rehosting can be done in a few ways, but it often means creating cloud-based virtual machines that mimic the infrastructure an application currently runs on. ... Rehosting an application requires you to create a cloud VM instance and then move the application onto that instance. Relocating, on the other hand, involves moving an existing VM from an on-premises environment to the cloud without making significant changes to it. ... A workload might be suitable for retirement if it is no longer actively supported by the vendor. In such cases, it's important to make sure you have a workaround before retiring an application the organization still uses. That might mean adopting a competing application that offers similar functionality or developing one in-house.
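As an illustration of how these criteria might combine, the sketch below encodes a simplified decision path through the seven R's in plain JavaScript. The workload attribute names and the ordering of the checks are assumptions made for this example, not an official Gartner or AWS decision tree:

```javascript
// Simplified, illustrative 7 R's chooser. Attribute names are hypothetical.
function suggestMigrationStrategy(workload) {
  if (!workload.stillUsed) return "retire";              // nobody needs it anymore
  if (workload.mustStayOnPrem) return "retain";          // not suited to the cloud
  if (workload.saasAlternative) return "repurchase";     // replace with a bought service
  if (workload.isVmPortableToCloud) return "relocate";   // move the existing VM as-is
  if (workload.runsUnchangedOnCloudVm) return "rehost";  // lift and shift onto a cloud VM
  if (workload.needsOnlyMinorChanges) return "replatform";
  return "refactor";                                     // significant rework for cloud-native
}

console.log(suggestMigrationStrategy({ stillUsed: false }));
// → "retire"
console.log(suggestMigrationStrategy({ stillUsed: true, isVmPortableToCloud: true }));
// → "relocate"
```

A real assessment would of course weigh cost, licensing and risk rather than a fixed if/else chain, but the ordering above reflects the article's point that retire and retain questions come before any technical migration choice.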


Evolving Your Architecture: Essential Steps and Tools for Modernization

Tech debt and a lack of modernization can also land you in the news, and not in a good way, as we saw with SWA a couple of years ago when they had a huge meltdown in their booking systems. It damaged their image, but it also took a real bite out of their revenue, and even now they are facing the consequences of that meltdown, which came down to ignoring and putting aside the conversations about tech debt and application modernization as a whole. ... It's basically looking at the inventory of applications you have in your organization and understanding: which are the critical ones? What value does each one add? How well does it align with the business goals? Is it a commodity? If you can just go and buy one off the shelf, then it's fine, go and buy it. If it's something that differentiates you, where you have to innovate, then it might be worth building it and hence modernizing it. ... The other thing is the age of the technology. If you have outdated technology, you very likely have vulnerabilities. If you lack support, either from the community or from the vendors, a security vulnerability may sit there with no security patch ever being released, because there is no support anymore.

