Showing posts with label code quality. Show all posts

Daily Tech Digest - April 30, 2026


Quote for the day:

"You've got to get up every morning with determination if you're going to go to bed with satisfaction." --George Lorimer

🎧 Listen to this digest on YouTube Music


Duration: 15 mins • Perfect for listening on the go.


The dreaded IT audit: How to get through it and what to avoid

The article "The dreaded IT audit: how to get through it and what to avoid" from IT Pro encourages organizations to reframe the auditing process as a strategic business asset rather than a burdensome cost center. Successfully navigating an audit requires maintaining a comprehensive, up-to-date inventory of all technology assets—including those used by remote workforces—to ensure security, safety, and insurance compliance. Even startups should establish structured auditing processes, as these evaluations proactively identify vulnerabilities and optimize operational efficiency. To streamline the experience, the article recommends prioritizing high-risk areas, such as software licensing, and utilizing customized spot checks instead of repetitive, standardized reviews that may fail to uncover meaningful insights. Crucially, leaders must adopt an open-minded approach to findings; the goal is to engage in transparent discussions about discovered issues rather than becoming defensive. Key pitfalls to avoid include treating the audit as a one-time administrative hurdle, relying on outdated manual tracking methods, and ignoring the gathered data. Instead, organizations should leverage audit results to inform staff training and drive practical improvements. By viewing the audit as a strategic opportunity for growth, companies can significantly strengthen their cybersecurity posture and ensure long-term sustainability in a digital economy.


Privacy in the AI era is possible, says Proton's CEO, but one thing keeps him up at night

In a wide-ranging interview at the Semafor World Economy Summit, Proton CEO Andy Yen addressed the critical tension between the rapid advancement of artificial intelligence and the fundamental right to digital privacy. Yen voiced significant concerns regarding the current AI trajectory, arguing that the industry's reliance on massive data harvesting inherently threatens individual security. He advocated for a paradigm shift toward "privacy-first AI," where processing occurs locally on user devices or through end-to-end encrypted frameworks to ensure that personal information remains inaccessible to service providers. Unlike the advertising-driven models of Silicon Valley giants, Yen highlighted Proton’s commitment to a subscription-based business model, which avoids the ethical pitfalls of monetizing user data. He also explored the "privacy paradox," observing that while users value their data, they often succumb to the convenience of free platforms. To counter this, Proton is expanding its ecosystem with tools like encrypted email and small language models designed specifically for security. Ultimately, Yen emphasized that the future of the digital economy hinges on stricter regulatory enforcement and the adoption of decentralized technologies that empower users with absolute control over their information, rather than treating them as products to be sold.


Outsourcing contracts weren't built for AI. CIOs are renegotiating now

The rapid advancement of generative artificial intelligence is necessitating a major overhaul of IT outsourcing agreements, as traditional contracts centered on headcount and billable hours prove incompatible with AI-driven efficiency. This InformationWeek article explains that while service providers promise productivity gains of up to 70%, legacy full-time equivalent (FTE) models fail to account for this increased output, leading CIOs to aggressively renegotiate for outcome-based pricing. This shift allows organizations to pay for specific results rather than human time, yet it introduces significant legal complexities. Key concerns include data sovereignty—where proprietary data might inadvertently train a provider's large language model—and intellectual property risks regarding the ownership of AI-generated code. Furthermore, the ability of AI to automate routine tasks is prompting some enterprises to bring previously outsourced functions back in-house, as smaller internal teams can now manage workloads that once required massive offshore cohorts. To navigate these challenges, technical leaders are implementing "gain-sharing" frameworks and rigorous governance standards to manage risks like AI hallucinations and liability. Ultimately, CIOs are assuming a more central role in procurement to ensure that vendor incentives align with genuine innovation and that the financial benefits of automation are captured by the enterprise.
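The shift from FTE billing to outcome- and gain-sharing pricing is, at bottom, simple arithmetic. A minimal sketch in Python (the dollar figure, the 70% productivity gain, and the 30/70 split are illustrative assumptions, not terms from any cited contract):

```python
def gain_share_invoice(baseline_cost, productivity_gain, vendor_share):
    """Split AI-driven savings between enterprise and vendor.

    baseline_cost: what the work cost under the old FTE model
    productivity_gain: fraction of effort removed by automation (0..1)
    vendor_share: fraction of the savings the vendor keeps (0..1)
    """
    savings = baseline_cost * productivity_gain
    vendor_bonus = savings * vendor_share
    # The enterprise pays for the reduced effort plus the vendor's share of the gain
    invoice = (baseline_cost - savings) + vendor_bonus
    enterprise_net_saving = baseline_cost - invoice
    return invoice, enterprise_net_saving

# Example: $1M of work, the article's 70% gain, a 30% vendor share
invoice, saved = gain_share_invoice(1_000_000, 0.70, 0.30)
```

Under these assumed numbers the vendor invoices $510k instead of $1M, and the enterprise captures $490k of the automation gain — which is exactly the alignment of incentives the renegotiations aim for.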


Bad bots make up 40% of internet traffic

The "2026 Thales Bad Bot Report: Bad Bots in the Agentic Age" reveals a transformative shift in internet traffic, where automated activity now accounts for 53% of all web interactions, surpassing human traffic for the second consecutive year. Malicious "bad bots" alone comprise 40% of global traffic, highlighting a growing threat landscape. A critical finding is the 12.5x surge in AI-driven bot attacks, fueled by the rapid adoption of agentic AI which blurs the lines between legitimate and harmful automation. These advanced bots are increasingly targeting APIs, with 27% of attacks now bypassing traditional interfaces to exploit backend logic directly at machine speed. The financial services sector remains the most vulnerable, suffering 24% of all bot attacks and nearly half of all account takeover incidents. Thales experts, including Tim Chang, emphasize that the primary security challenge has evolved from simple bot identification to the complex analysis of behavioral intent. As AI agents emerge as a new traffic category, organizations must transition to proactive, intent-based defenses that can distinguish between helpful AI agents and malicious automation. This machine-driven era necessitates deeper visibility into API traffic and identity systems to maintain trust and security across modern digital infrastructures.
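The report's move from "identify the bot" to "analyze behavioral intent" can be illustrated with a toy scoring heuristic. Every detail here — the signal names, weights, and thresholds — is an assumption for illustration; the report summary does not describe Thales's actual detection logic:

```python
def classify_automation(session):
    """Toy intent score: positive signals suggest a declared, well-behaved
    AI agent; negative signals suggest abusive automation."""
    score = 0
    if session.get("declares_identity"):                # e.g. a verified agent token
        score += 2
    if session.get("honors_rate_limits"):
        score += 1
    if session.get("hits_login_endpoints_repeatedly"):  # account-takeover pattern
        score -= 3
    if session.get("api_only_no_ui_assets"):            # probing backend logic directly
        score -= 1
    if session.get("rotates_ips"):                      # evading per-IP controls
        score -= 2
    if score >= 2:
        return "likely benign agent"
    if score <= -2:
        return "likely bad bot"
    return "needs deeper inspection"
```

The point of the sketch is the shape of the decision, not the rules: the same API call can be benign or malicious, so the classifier has to weigh behavior over time rather than match a signature.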


Incentive drift: Why transformation fails even when everything looks green

In the article "Incentive Drift: Why Transformation Fails Even When Everything Looks Green," Mehdi Kadaoui explores the paradoxical failure of IT transformations that appear successful on paper. The central challenge is "incentive drift"—the structural separation of authority from accountability that leads organizations to optimize for project delivery rather than business value. This drift manifests through several destructive patterns: the "ownership vacuum," where strategy and execution are disconnected; the "budgetary firewall," which isolates capital spending from operational costs; and "language capture," where success definitions are subtly redefined to ensure "green" status. Kadaoui argues that "collective amnesia" often follows, as organizations quietly lower their expectations to avoid acknowledging failure. To resolve this, he proposes making drift "structurally expensive" through three key mechanisms. First, a "value prenup" requires operational leaders to explicitly own and sign off on intended outcomes before development begins. Second, a "cost mirror" forces transparency across budget ledgers. Finally, a "semantic anchor" ensures original goals are read aloud in every governance meeting to prevent meaning erosion. By grounding digital transformation in rigid accountability and linguistic clarity, leadership can ensure that technological outputs translate into genuine, durable enterprise value.


How to Be a Great Data Steward: 6 Core Skills to Build

The article "Core Data Stewardship Skills to Build" emphasizes that effective data stewardship requires a unique blend of technical proficiency, business acumen, and interpersonal skills. High-performing stewards act as "purple people," bridging the gap between IT and business by translating complex technical standards into actionable business practices. Key operational activities include identifying and documenting Critical Data Elements (CDEs), aligning them with precise business terms, and performing data profiling to identify quality issues. Beyond basic documentation, stewards must master data classification to ensure regulatory compliance with frameworks like GDPR or HIPAA. Analytical thinking is essential for interpreting patterns and uncovering root causes of data inconsistencies, while strong communication skills enable stewards to foster a collaborative, data-driven culture. Furthermore, literacy in adjacent domains such as metadata management, master data management (MDM), and the use of modern data catalogs is vital. Ultimately, the role is outcome-driven; stewards do not just manage data for its own sake but focus on ensuring data health to drive measurable organizational value. By combining attention to detail with strategic consistency, data stewards serve as the essential operational guardians who transform raw data into a reliable, high-quality strategic asset for their organizations.
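Data profiling, one of the operational activities named above, reduces to a few measurable checks per Critical Data Element: completeness, cardinality, and format conformance. A minimal sketch (the `C-1234` customer-ID format is a hypothetical example):

```python
import re

def profile_cde(values, pattern=None):
    """Profile a Critical Data Element: null rate, distinct count,
    and (optionally) conformance to an expected format regex."""
    total = len(values)
    nulls = sum(1 for v in values if v in (None, ""))
    non_null = [v for v in values if v not in (None, "")]
    profile = {
        "null_rate": nulls / total if total else 0.0,
        "distinct": len(set(non_null)),
    }
    if pattern:
        rx = re.compile(pattern)
        profile["format_violations"] = sum(
            1 for v in non_null if not rx.fullmatch(str(v)))
    return profile

# e.g. customer IDs expected to look like 'C-1234'
stats = profile_cde(["C-0001", "C-0002", None, "0003"], pattern=r"C-\d{4}")
```

Numbers like these give a steward something concrete to take to the business owner of a CDE — a 25% null rate or a format violation is a quality conversation, not an opinion.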


Researchers unearth industrial sabotage malware that predated Stuxnet by 5 years

Researchers from SentinelOne recently uncovered a sophisticated malware framework, dubbed "Fast16," that predates the infamous Stuxnet worm by five years. Active as early as 2005, this discovery shifts the timeline of state-sponsored industrial sabotage, proving that nation-states were deploying cyberweapons against physical infrastructure much earlier than previously understood. Unlike typical espionage tools designed for data theft, Fast16 was engineered for strategic sabotage by targeting high-precision floating-point arithmetic operations within engineering modeling software. By corrupting the logic of the Floating Point Unit (FPU), the malware produced subtly altered outputs in complex simulations, potentially leading to catastrophic real-world failures. The researchers identified three specifically targeted engineering programs, including one previously associated with Iran’s AMAD nuclear program and another widely used in Chinese structural design. The modular nature of Fast16, which utilizes encrypted Lua bytecode, underscores its advanced design and national importance. This finding highlights a historical precedent for cyberattacks on critical workloads in fields such as advanced physics and nuclear research. Ultimately, Fast16 serves as a significant harbinger for modern industrial sabotage, demonstrating that the transition from strategic espionage to physical disruption in cyberspace was already in full swing two decades ago, long before Stuxnet gained global notoriety.
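The attack class reported here — subtly corrupting floating-point results inside long-running simulations — is dangerous precisely because tiny errors compound. A sketch of the effect (the logistic map stands in for a chaotic engineering simulation, and the injected relative error merely models, rather than reproduces, an FPU corruption):

```python
def iterate(x0, steps, perturb=0.0):
    """Repeatedly apply a nonlinear update; `perturb` models a subtly
    corrupted floating-point multiply (a tiny relative error each step)."""
    x = x0
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)   # logistic map in its chaotic regime
        x *= (1.0 + perturb)      # injected per-step error
    return x

clean = iterate(0.5, 60)
skewed = iterate(0.5, 60, perturb=1e-12)  # a one-part-per-trillion nudge
# After 60 chaotic steps the two trajectories no longer agree
```

A per-step error far below any plausible validation tolerance produces a completely different final state — which is why a simulation could pass spot checks while its outputs were quietly wrong.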


How AI Is Transforming Business Continuity and Crisis Response

Charlie Burgess’s article, "How AI Is Transforming Business Continuity and Crisis Response," explores the pivotal role of artificial intelligence in navigating the complexities of modern digital and physical risks. As businesses face increasingly non-linear threats, from supply chain disruptions to cyber incidents, the abundance of generated data often leads to information overload. AI addresses this by acting as a sophisticated data analysis tool that parses vast information streams to identify hidden patterns and suppress low-priority noise. This allows crisis teams to focus on critical alerts and early warning signs. Furthermore, AI enhances situational awareness and coordination by correlating disparate system inputs and surfacing standardized playbook responses. During active incidents, technologies like AI-powered cameras provide real-time visibility, aiding in personnel safety and evacuation efforts. Beyond immediate response, AI suggests optimized recovery paths and strategic resource allocation, fostering long-term operational resilience. Ultimately, the integration of AI is not intended to replace human judgment but to empower decision-makers with actionable insights and agility. By bridging the gap between data collection and decisive action, AI transforms business continuity from a reactive necessity into a proactive, evidence-based strategic asset that safeguards both personnel and organizational stability in an unpredictable global landscape.


Europe Gliding Toward Mandatory Online Age Verification

The European Commission is accelerating its push toward mandatory online age verification, driven by the Digital Services Act's requirements to protect minors from harmful content. Central to this initiative is a new age assurance framework and a "technically ready" open-source mobile app designed to allow users to prove they are over a certain age using national identity documents without disclosing their full identity. However, this transition faces intense scrutiny. Security researchers recently identified significant vulnerabilities in the commission's prototype app, labeling it "easily hackable." Furthermore, privacy advocates, such as representatives from Tuta, warn that centralized age verification creates a lucrative "gold mine" for hackers, potentially exacerbating risks like phishing and identity theft. Despite these concerns, European officials like Henna Virkkunen emphasize that the DSA demands concrete action over mere terms of service, particularly following allegations that platforms like Meta have failed to adequately exclude children under thirteen. As several European nations consider raising minimum age requirements for social media, the commission continues to advocate for "robust and non-discriminatory" verification tools that can be integrated into national digital wallets, insisting that ongoing security testing will eventually yield a reliable solution for safeguarding the digital environment for children.


CodeGuardian: A Model Context Protocol Server for AI-Assisted Code Quality Analysis and Security Scanning

"CodeGuardian: A Model Context Protocol Server for AI-Assisted Code Quality Analysis and Security Scanning" introduces a breakthrough tool designed to integrate enterprise-grade security and quality checks directly into AI-powered development environments. Authored by Madhvesh Kumar and Deepika Singh, the article details how CodeGuardian leverages the Model Context Protocol (MCP) to extend coding assistants with eleven specialized analysis tools. This integration eliminates the friction of context-switching by allowing developers to execute security scans, identify hardcoded secrets across multiple layers, and generate compliant Software Bill of Materials (SBOM) using simple natural language prompts. Unlike traditional static analysis tools that merely flag issues, CodeGuardian provides context-aware, "drop-in" code remediations tailored to a project's specific framework and style. A core feature is its cross-layer security reporting, which aggregates findings into a single risk score, exposing systemic vulnerabilities that isolated scanners often miss. By shifting security "left" into the immediate coding workflow, the tool empowers developers to build more resilient software while maintaining high delivery velocity. Ultimately, CodeGuardian represents a pivot toward "agentic" security, where AI assistants act as proactive guardians of code integrity throughout the development lifecycle, effectively bridging the gap between rapid feature delivery and robust organizational compliance.
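CodeGuardian's actual rule set is not detailed in the article, but the mechanics of hardcoded-secret detection — one of the eleven tools described — can be sketched with a pair of illustrative regex rules. The pattern names and thresholds below are assumptions, not the tool's; a production scanner layers entropy checks, provider-specific rules, and allow-lists on top of this:

```python
import re

# Illustrative patterns only; not CodeGuardian's actual rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_assignment": re.compile(
        r"(?i)(password|secret|api[_-]?key)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
}

def scan_source(text):
    """Return (line_number, rule_name) for every suspected hardcoded secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, rx in SECRET_PATTERNS.items():
            if rx.search(line):
                findings.append((lineno, rule))
    return findings

sample = 'db_password = "hunter2hunter2"\nkey = "AKIAABCDEFGHIJKLMNOP"'
scan_source(sample)   # flags both lines
```

What an MCP server adds on top of logic like this is the wiring: the assistant can invoke the scan from a natural-language prompt and feed the findings straight back into a context-aware remediation, instead of the developer switching to a separate tool.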

Daily Tech Digest - January 27, 2026


Quote for the day:

"Supreme leaders determine where generations are going and develop outstanding leaders they pass the baton to." -- Anyaele Sam Chiyson



Why code quality should be a C-suite concern

At first, speed feels like progress. Then the hidden costs begin to surface: escalating maintenance effort, rising incident frequency, delayed roadmaps and growing organizational tension. The expense of poor code slowly eats into return on investment — not always in ways that show up neatly on a spreadsheet, but always in ways that become painfully visible in daily operations. ... During the planning phase, rushed architectural decisions often lead to tightly coupled, monolithic systems that are expensive and risky to change. During development, shortcuts accumulate into what we call technical debt: duplicated logic, brittle integrations and outdated dependencies that appear harmless at first but quietly erode system stability over time. Like financial debt, technical debt compounds. ... Architecture always comes first. I advocate for modular growth — whether through a well-structured modular monolith that can later evolve into microservices, or through service-oriented architectures with clear domain boundaries. Platforms such as Kubernetes enable independent scaling of components, but only when the underlying architecture is cleanly segmented. Language and framework choices matter more than most leaders realize. ... The technologies we select, the boundaries we define and the failure modes we anticipate all place invisible limits on how far an organization can grow. From what I’ve seen, you simply cannot scale a product on a foundation that was never designed to evolve.
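The "modular monolith that can later evolve into microservices" pattern the author advocates comes down to narrow, explicit domain interfaces. A toy sketch (class and method names are invented for illustration):

```python
# A modular monolith in miniature: each domain exposes a narrow interface,
# so a module can later be lifted out as a service without rewiring callers.

class BillingService:
    """Public surface of the billing domain; its internals and data
    stores stay private behind this boundary."""
    def charge(self, customer_id: str, amount_cents: int) -> str:
        # ...real billing logic hidden behind this boundary...
        return f"invoice-{customer_id}-{amount_cents}"

class OrderService:
    """Orders depend on billing only through its interface — never its
    tables — so billing could become a remote service transparently."""
    def __init__(self, billing: BillingService):
        self._billing = billing

    def place_order(self, customer_id: str, amount_cents: int) -> str:
        return self._billing.charge(customer_id, amount_cents)
```

The compounding-debt scenario is the inverse of this: when orders read billing's tables directly, every billing change is also an orders change, and the coupling tax grows with each release.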


How to regulate social media for teens (and make it stick)

Noting that age assurance proposals have broad support from parents and educators, Allen says “the question is not whether children deserve safeguarding (they do) but whether prohibition is an effective tool for achieving it.” “History suggests that bans succeed or fail not on the basis of intention, but on whether they align with demand, supply, moral legitimacy and enforcement capacity. Prohibition does not remove human desire; it reallocates who fulfils it. Whether that reallocation reduces harm or increases it depends on how well policy engages with the underlying economics and psychology of behaviour.” ... “There is little evidence that young people themselves view social media as morally repugnant. On the contrary, it is where friendships are maintained, identities are explored and social status is negotiated. That does not mean it is harmless. It means it is meaningful.” “This creates a problem for prohibition. Where demand remains strong, supply will be found.” Here, Allen’s argument falters somewhat, in that it follows the logic that says bans push kids onto less regulated and more dangerous platforms. I.e., “the risk is not simply that prohibition fails. It is that it succeeds in changing who supplies children’s social connectivity.” The difference is that, while a basket of plums and some ingenuity are all you need to produce alcohol, social media platforms have their value in the collective. Like Star Trek’s Borg, they are more powerful the more people they assimilate. 


The era of agentic AI demands a data constitution, not better prompts

If a data pipeline drifts today, an agent doesn't just report the wrong number. It takes the wrong action. It provisions the wrong server type. It recommends a horror movie to a user watching cartoons. It hallucinates a customer service answer based on corrupted vector embeddings. ... In traditional SQL databases, a null value is just a null value. In a vector database, a null value or a schema mismatch can warp the semantic meaning of the entire embedding. Consider a scenario where metadata drifts. Suppose your pipeline ingests video metadata, but a race condition causes the "genre" tag to slip. Your metadata might tag a video as "live sports," but the embedding was generated from a "news clip." When an agent queries the database for "touchdown highlights," it retrieves the news clip because the vector similarity search is operating on a corrupted signal. The agent then serves that clip to millions of users. At scale, you cannot rely on downstream monitoring to catch this. By the time an anomaly alarm goes off, the agent has already made thousands of bad decisions. Quality controls must shift to the absolute "left" of the pipeline. ... Engineers generally hate guardrails. They view strict schemas and data contracts as bureaucratic hurdles that slow down deployment velocity. When introducing a data constitution, leaders often face pushback. Teams feel they are returning to the "waterfall" era of rigid database administration.


QA engineers must think like adversaries

Test engineers are now expected to understand pipelines, cloud-native architectures, and even prompt engineering for AI tools. The mindset has become more preventive than detective. AI has become part of QA’s toolkit, helping predict weak spots and optimise testing. At the same time, QA must validate the integrity and fairness of AI systems — making it both a user and a guardian of AI. ... With DevOps, QA became embedded into the pipeline — automated test execution, environment provisioning, and feedback loops are all part of CI/CD now. With SecOps, we’re adding security scans and penetration checks earlier, creating a DevTestSecOps model. QA is no longer a separate stage. It’s a mindset that exists throughout the lifecycle — from requirements to observability in production. ... Regression testing has become AI-augmented and data-driven. Instead of re-running all test cases, systems now prioritise based on change impact analysis. The SDET role is also evolving — they now bridge coding, observability, and automation frameworks, often owning quality gates within CI/CD. ... Security checks are now embedded as automated gates within pipelines. Performance testing, too, is moving earlier — with synthetic monitoring and API-level load simulations. In effect, security and speed can coexist, provided teams integrate validation rather than treat it as an afterthought.
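The change-impact-based prioritisation described above can be sketched in a few lines: map each test to the files it exercises, rank by overlap with the change set, and run only the top of the list in the fast loop. File and test names here are invented:

```python
def prioritise_tests(changed_files, test_coverage, budget):
    """Rank regression tests by change impact: tests touching more of
    the changed files run first; only `budget` tests run per commit."""
    def impact(test):
        return len(test_coverage[test] & changed_files)
    impacted = [t for t in test_coverage if impact(t) > 0]
    return sorted(impacted, key=impact, reverse=True)[:budget]

coverage = {
    "test_checkout": {"cart.py", "payment.py"},
    "test_login":    {"auth.py"},
    "test_payment":  {"payment.py"},
}
prioritise_tests({"payment.py", "cart.py"}, coverage, budget=2)
# runs checkout and payment tests; login is untouched by the change
```

Real systems derive the coverage map from instrumentation rather than hand-maintained sets, and layer in failure history and flakiness scores, but the selection logic is the same shape.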


The biggest AI bottleneck isn’t GPUs. It’s data resilience

The risks of poor data resilience will be magnified as agentic AI enters the mainstream. Whereas generative AI applications respond to a prompt with an answer in the same manner as a search engine, agentic systems are woven into production workflows, with models calling each other, exchanging data, triggering actions and propagating decisions across networks. Erroneous data can be amplified or corrupted as it moves between agents, like the party game “telephone.” ... Experts cite numerous reasons data protection gets short shrift in many organizations. A key one is an overly intense focus on compliance at the expense of operational excellence. That’s the difference between meeting a set of formal cybersecurity metrics and being able to survive real-world disruption. Compliance guidelines specify policies, controls and audits, while resilience is about operational survivability, such as maintaining data integrity, recovering full business operations, replaying or rolling back actions and containing the blast radius when systems fail or are attacked. ... “Resilience and compliance-oriented security are handled by different teams within enterprises, leading to a lack of coordination,” said Forrester’s Ellis. “There is a disconnect between how prepared people think they are and how prepared they actually are.” ... Missing or corrupted data can lead models to make decisions or recommendations that appear plausible but are far off the mark. 


When open science meets real-world cybersecurity

If there is no collaboration, usually the product that emerges is a great scientific specimen with very risky implementations. The risk is usually caught by normal cyber processes and reduced accordingly; however, scientists who see the value in IT/cyber collaboration usually also end up with a great scientific specimen. There is also managed risk in the implementation with almost no measurable negative impacts or costs. We’ve seen that if collaboration is planned into the project very early on, cybersecurity can provide value. ... Cybersecurity researchers often are confused and look for issues on the internet where they stumble onto the laboratory IT footprint and make claims that we are leaking non-public information. We clearly label and denote information that is releasable to the public, but it always seems there are folks who are quicker to report than to read the dissemination labels. ... Encryption at rest (EIR) is really a control to prevent data loss when the storage medium is no longer in your control. So, when the data has been reviewed for public release, we don’t spend the extra time, effort, and money to apply a control to data stores that provide no value to either the implementation or to a cyber control. ... You can imagine there are many custom IT and OT parts that run that machine. The replacement of components is not on a typical IT replacement schedule. This can present longer than ideal technology refresh cycles. The risk here is that integrating modern cyber technology into an older IT/OT technology stack has its challenges.


4 issues holding back CISOs’ security agendas

CISOs should aim to have team members know when and how to make prioritization calls for their own areas of work, “so that every single team is focusing on the most important stuff,” Khawaja says. “To do that, you need to create clear mechanisms and instructions for how you do decision-support,” he explains. “There should be criteria or factors that says it’s high, medium, low priority for anything delivered by the security team, because then any team member can look at any request that comes to them and they can confidently and effectively prioritize it.” ... According to Lee, the CISOs who keep pace with their organization’s AI strategy take a holistic approach, rather than work deployment to deployment. They establish a risk profile for specific data, so security doesn’t spend much time evaluating AI deployments that use low-risk data and can prioritize work on AI use cases that need medium- or high-risk data. They also assign security staffers to individual departments to stay on top of AI needs, and they train security teams on the skills needed to evaluate and secure AI initiatives. ... The challenge for CISOs, in his view, is not about hiring for technical skills or even soft skills, but about what he called “middle skills,” such as risk management and change management. He sees these skills becoming more crucial for aligning security to the business, getting users to adopt security protocols, and ultimately improving the organization’s security posture. “If you don’t have [those middle skills], there’s only so far the security team can go,” he says.


Rethinking data center strategy for AI at scale

Traditional data centers were engineered for predictable, transactional workloads. Your typical enterprise rack ran at 8kW, cooled with forced air, powered through 12-volt systems. This worked fine for databases, web applications, and cloud storage. Yet, AI workloads are pushing rack densities past 120kW. That's not an incremental change—it's a complete reimagining of what a data center needs to be. At these densities, air cooling becomes physically impossible. ... Walk into a typical data center today. The HVAC system has its own monitoring dashboard. Power distribution runs through a separate SCADA system. Compute performance lives in yet another tool. Network telemetry? Different stack entirely. Each subsystem operates in isolation, reporting intermittently through proprietary interfaces that don't talk to each other. Operators see dashboards, not decisions. ... Cooling systems can respond instantly to thermal changes, and power orchestration becomes adaptive rather than provisioned for theoretical peaks. AI clusters can scale based not just on demand, but in coordination with available power, cooling capacity, and network bandwidth. ... Real-time visibility, unified data architectures, and adaptive control will define performance, efficiency, and competitiveness in AI-ready data centers. The organizations that thrive in the AI era won't necessarily be those with the most data centers or the biggest chips; they'll be the ones that treat infrastructure as an intelligent, responsive system capable of sensing, adapting, and optimizing in real time.
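The unified-telemetry argument becomes concrete as an admission-control decision that reads power, cooling, and network headroom together rather than from three isolated dashboards. A sketch with invented capacity figures:

```python
def admit_training_job(telemetry, job):
    """Admit an AI training job only if every subsystem — not just
    compute — has the headroom for it. Returns (ok, blocking_subsystems)."""
    headroom = {
        "power_kw":   telemetry["power_capacity_kw"]   - telemetry["power_draw_kw"],
        "cooling_kw": telemetry["cooling_capacity_kw"] - telemetry["heat_load_kw"],
        "net_gbps":   telemetry["net_capacity_gbps"]   - telemetry["net_used_gbps"],
    }
    blockers = [k for k in headroom if job[k] > headroom[k]]
    return (len(blockers) == 0), blockers

# A single snapshot merging what HVAC, SCADA, and network tools report separately
snapshot = {"power_capacity_kw": 1000, "power_draw_kw": 900,
            "cooling_capacity_kw": 950, "heat_load_kw": 800,
            "net_capacity_gbps": 400, "net_used_gbps": 100}
admit_training_job(snapshot, {"power_kw": 120, "cooling_kw": 120, "net_gbps": 50})
# blocked: cooling and network are fine, but power headroom is only 100 kW
```

When the subsystems stay siloed, each dashboard individually looks green and the job gets admitted anyway — which is exactly the "dashboards, not decisions" failure the article describes.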


Microsoft handed over BitLocker keys to law enforcement, raising enterprise data control concerns

The US Federal Bureau of Investigation approached Microsoft with a search warrant in early 2025, seeking keys to unlock encrypted data stored on three laptops in a case of alleged fraud involving the COVID unemployment assistance program in Guam. As the keys were stored on a Microsoft server, Microsoft adhered to the legal order and handed over the encryption keys ... While the encryption of BitLocker is robust, enterprises need to be mindful of who has custody of the keys, as this case illustrates. ... Enterprises using BitLocker should treat the recovery keys as highly sensitive, and avoid default cloud backup unless there is a clear business requirement and the associated risks are well understood and mitigated. ... CISOs should also ensure that when devices are repurposed, decommissioned, or moved across jurisdictions, keys should be regenerated as part of the workflow to ensure old keys cannot be used. ... If recovery keys are stored with a cloud provider, that provider may be compelled, at least in its home jurisdiction, to hand them over under lawful order, even if the data subject or company is located elsewhere — and without the company being notified. This becomes even more critical from the point of view of a pharma company, semiconductor firm, defence contractor, or critical-infrastructure operator, as it exposes them to risks such as exposure of trade secrets in cross‑border investigations.


Moore’s law: the famous rule of computing has reached the end of the road, so what comes next?

For half a century, computing advanced in a reassuring, predictable way. Transistors – devices used to switch electrical signals on a computer chip – became smaller. Consequently, computer chips became faster, and society quietly assimilated the gains almost without noticing. ... Instead of one general-purpose processor trying to do everything, modern systems combine different kinds of processors. Traditional central processing units (CPUs) handle control and decision-making. Graphics processors are powerful processing units that were originally designed to handle the demands of graphics for computer games and other tasks. AI accelerators (specialised hardware that speeds up AI tasks) focus on large numbers of simple calculations carried out in parallel. Performance now depends on how well these components work together, rather than on how fast any one of them is. Alongside these developments, researchers are exploring more experimental technologies, including quantum processors (which harness the power of quantum science) and photonic processors, which use light instead of electricity. ... For users, life after Moore’s Law does not mean that computers stop improving. It means that improvements arrive in more uneven and task-specific ways. Some applications – such as AI-powered tools, diagnostics, navigation and complex modelling – may see noticeable gains, while general-purpose performance increases more slowly.
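Heterogeneous systems make this concrete with a routing decision per workload. A toy dispatcher (the threshold and task fields are illustrative, not any real scheduler's policy):

```python
def dispatch(task):
    """Illustrative routing rule for a heterogeneous system: wide,
    branch-free data-parallel work goes to an AI accelerator, graphics
    to the GPU, and branchy control-flow work stays on the CPU."""
    if task["parallel_ops"] >= 10_000 and not task["branch_heavy"]:
        return "accelerator"
    if task["kind"] == "graphics":
        return "gpu"
    return "cpu"

dispatch({"kind": "inference", "parallel_ops": 1_000_000, "branch_heavy": False})
# a million identical multiply-adds: accelerator territory
```

The article's point falls out of the sketch: overall performance now depends on how well the router and the interconnects work, not on the clock speed of any single processor.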

Daily Tech Digest - November 15, 2025


Quote for the day:

“Be content to act, and leave the talking to others.” -- Baltasar Gracián



Why engineering culture should be your top priority, not your last

Most engineering leaders treat culture like an HR checkbox, something to address after the roadmap is set and the features are prioritized. That’s backwards. Culture directly affects how fast your team ships code, how often bugs make it to production, and whether your best developers are still around when the next major project kicks off. ... Many engineering leaders are Boomers or Gen X. They built their careers in environments where you kept your head down, shipped your code, and assumed no news was good news. That approach worked for them. It doesn’t work for the developers they’re managing now. This creates a perception problem that compounds the engagement gap. Most C-suite leaders say they put employee well-being first. Most employees don’t see it that way. Only 60% agree their employer actually prioritizes their well-being. The gap matters because employees who think their company cares more about output than people feel overwhelmed nearly three-quarters of the time. When employees feel supported, that number drops to just over half. That difference is where attrition starts. ... Most engineering teams try to fix retention with the same approach that worked decades ago, when people stayed at companies for years and stability mattered more than engagement. That’s not how careers work anymore. The typical response is to roll out generic culture programs designed for large enterprises. 


Integrated deployment must become the default

It’s intuitive that off-site and modular construction models reduce on-site build timelines in general construction, but we are observing the benefits within the data center space being amplified due to the increased density of services catering to larger rack loads. One of the main deterrents to modular adoption has been the perception of limited scalability and design repetition, combined with the inefficiency of transporting large volumes of unused space, essentially “shipping air.” As a result, traditional stick-build methods have long remained the default approach. But that’s all changing. The services, be it telecom, electrical, or cooling, are getting bigger, heavier, and more densely packed, and the timeframe needed is being whittled down, so naturally the emphasis has moved towards fully integrated solutions. These systems are assembled and commissioned offsite wherever possible, then delivered ready for installation with minimal site work required. Offsite integration also negates a lot of the complexities of trade-to-trade sequencing and handover of areas, which absorb site resources and hinder programme delivery. When systems arrive pre-aligned, factory-tested, and installation-ready on-site, activity shifts from coordination and correction to simple assembly. The cumulative impact is significant: reduced project timelines, fewer site dependencies, and greater confidence in delivery schedules.


The Myth Of Executive Alignment: Why Top Teams Need Honesty, Not Harmony

The idea that executive teams should think alike is comforting but unrealistic. Direction needs coherence, but total agreement usually means someone stopped speaking up. Lencioni has said that real clarity can’t be manufactured through slogans or slide decks. “Alignment and clarity,” he wrote, “cannot be achieved in one fell swoop with a series of generic buzzwords and aspirational phrases crammed together.” The strongest teams I’ve seen operate through visible, respected tension. Finance pushes for discipline. Strategy pushes for expansion. Risk pushes for protection. Culture pushes for capacity. Together they form an internal ecosystem of checks and balances. Call it necessary misalignment or structured divergence—it’s what keeps a company honest. The work isn’t to erase difference but to make it safe. ... Executive behavior multiplies downward. When the top team loses coherence, the entire system learns to mimic its caution. Lencioni has often written that when trust is strong, conflict transforms. “When there is trust,” he explained, “conflict becomes nothing but the pursuit of truth.” And the reward for that truth, he reminds us, is organizational health. “The single greatest advantage any company can achieve,” Lencioni wrote, “is organizational health.” Those two ideas—truth and health—connect directly with Gallup’s research. They’re not soft metrics; they’re what make trust and accountability visible.


Why Cybersecurity Jobs Are Likely To Resist AI Layoff Pressures: Experts

The bottom line is that there will “always” be a need for a significant number of cybersecurity professionals, Edross said. “I do not believe this technology will ever make the human obsolete.” The notion that SOC analyst jobs and other roles requiring security expertise might be at risk would have been unthinkable just a few years ago — making the sudden shift to discussions around AI-driven redundancy for humans in the SOC all the more startling. “If you go back about two years ago, there’s this constant hum in the industry that we have a few million less cybersecurity professionals than we need,” Palo Alto Networks CEO Nikesh Arora said. ... “AI still has a significant propensity to make mistakes, which in the security world is quite problematic,” said Boaz Gelbord, senior vice president and chief security officer of Akamai. “So you’re always going to need a human check on that.” At the same time, human orchestration of the AI systems will be an ongoing necessity as well, according to experts. “You need that creativity. You need to understand and piece together and review the LLM’s work,” said Dov Yoran, co-founder and CEO of Command Zero, a startup offering an LLM-powered cyber investigation platform. “I don’t see how the human goes away.” And while entry-level security analysts may find parts of their roles becoming redundant due to AI, most organizations will want to continue employing them, if only to prepare them to become higher-tier analysts over time, Yoran said.


MCP doesn’t move data. It moves trust

Many assume MCP will replace APIs, but it can’t and shouldn’t. MCP defines how AI models can safely call tools; APIs remain the mechanisms that connect those tools to the real world. Without APIs, an MCP-enabled AI can think, reason and recommend, but it can’t act. Without MCP, those same APIs remain open highways with no traffic rules. Autonomy requires both. MCP will give rise to a new class of enterprise software: AI control planes that sit between reasoning and execution. These systems will combine access policy, auditing, explainability and version control — the governance scaffolding for safe autonomy. But governance alone isn’t enough. Logging requests does not make them effective. Without APIs, MCP remains a supervisory layer, not an operational one. The future belongs to systems that can both decide responsibly and act reliably. ... MCP will not eliminate complexity. It will simply move it — from data management to decision management. The challenge ahead is to make that complexity visible, traceable and accountable. In enterprise AI, the real challenge is no longer technical feasibility; it’s moral architecture. The question is shifting from what AI can do to what it should be allowed to do. ... MCP represents the architecture of restraint, a new language of control between reasoning and reality. APIs will keep moving data. MCP will govern how intelligence uses it. And when those two layers work in harmony, enterprises will finally move from systems that record what happened to systems that make things happen.
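The "AI control plane" the article describes, a governance layer that sits between a model's decision and the API call that executes it, can be sketched in a deliberately simplified, hypothetical form. Everything here (`ToolGate`, the policy map, the tool names) is invented for illustration; it is not part of the MCP specification itself:

```python
import time


class ToolGate:
    """A minimal, hypothetical control-plane sketch: every tool call an AI
    requests is checked against an access policy and written to an audit
    log before the underlying API is ever invoked."""

    def __init__(self, policy):
        # policy maps a tool name to the set of roles allowed to call it
        self.policy = policy
        self.audit_log = []

    def call(self, role, tool_name, tool_fn, *args, **kwargs):
        allowed = role in self.policy.get(tool_name, set())
        # Log the decision either way, so actions stay traceable and accountable
        self.audit_log.append(
            {"ts": time.time(), "role": role, "tool": tool_name, "allowed": allowed}
        )
        if not allowed:
            raise PermissionError(f"{role} may not call {tool_name}")
        # Only after the policy check does the real API (the tool_fn) run
        return tool_fn(*args, **kwargs)
```

In this sketch the APIs still do the moving (the `tool_fn`), while the gate only decides and records, which mirrors the article's division of labor between MCP and APIs.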


AI Copilots for Good Governance and Efficient Public Service Delivery

While AI copilots hold immense potential for public service delivery, several challenges must be addressed before large-scale adoption can be facilitated in India. While India’s digital and policy landscape provides fertile ground for AI copilots, several challenges need to be addressed to ensure their responsible and effective adoption. One of the foremost concerns is data privacy and security. Copilots in governance will inevitably process large volumes of sensitive personal and financial data from citizens and businesses. Without adequate safeguards, this raises risks of misuse, unauthorised access, or surveillance overreach. The Digital Personal Data Protection Act, 2023, establishes a strong legal framework for data fiduciaries. Yet, its principles must be operationalised through privacy-preserving sandboxes, anonymised training datasets, and clear consent mechanisms tailored for AI-driven interfaces. ... Equally pressing is the challenge of algorithmic bias and fairness. AI copilots, if trained on unbalanced or non-representative datasets, can perpetuate linguistic, gender, or regional biases, disadvantaging marginalised users. To prevent such inequities, India’s AI governance could mandate fairness audits, algorithmic transparency, and explainability in all government-deployed copilots. This may be complemented by inclusive design standards that ensure accessibility across India’s diverse languages and digital contexts. 


Fighting AI with AI: Adversarial bots vs. autonomous threat hunters

Attackers already have systemic advantages that AI amplifies dramatically. While there are some great examples of how AI can be used for defense, these methods, if used against us, could be devastating. ... It’s hard to gain context at that scale. Most companies have multiple defensive layers — and they all have flaws. Using weaknesses in those layers, attackers weave through them and create attack paths. The question is: How are we finding those paths before they do? ... The use of AI bots within a digital twin enables continuous, multi-threaded threat hunting and attack path validation without impacting production environments. This addresses the prioritization challenges that security and IT teams struggle with in a meaningful way. Really, digital twins offer the same benefits to security teams as physical twins provided to NASA scientists more than 55 years ago: accurate simulations of how a given change might impact large, complex and highly dynamic attack surfaces. Plus, it’s exciting to imagine how the UX might evolve to help defenders visualize what’s happening in unprecedented ways. ... AI is a truly transformational technology and it’s exciting to think about how AI defense can evolve over the next few years. I encourage product builders to think big. Why not draw inspiration from science fiction? 


AI is shaking up IT work, careers, and businesses - and here's how to prepare

"AI opened a whole new can of worms for security," said Tsai. "Overall, the demand for IT jobs is going to increase at three times the rate of all jobs." This generally presents a positive outlook for the IT industry, but it's also fueling a shift in how companies conduct hiring and what they are looking for. Spiceworks previewed its 2026 State of IT report, a survey that gathers insights from over 800 IT professionals at small and medium-sized companies on current trends, and found that the skills most in demand are reflecting the growth of AI. ... "If you are in IT, perhaps upleveling your skills, learning about AI is a very smart thing to do now. It can make you very productive, and it can help you do more or less," said Tsai. Taking it upon yourself to do this work is especially important because, as I cited during the panel, companies are investing a lot of money into AI solutions, but training is increasingly left behind or not prioritized. ... "When it comes to AI, whether it is bringing in completely and maybe doing a small language model to AI, or doing inferencing, or you can run many of the LLMs internally," said Rapozza. "Businesses are building up your construction to support those kinds of things." Does this level of investment mean companies are seeing an immediate ROI? Not exactly, but there is progress being made in that direction. As Rodrigo Gazzaneo, senior GTM Specialist, generative AI, Amazon Web Services (AWS), noted, companies are already seeing positive outcomes.


A developer’s Hippocratic Oath: Prioritizing quality and security with the fast pace of AI-generated coding

In the medical field, physicians are taught to ‘do no harm’: their highest duty of care is to put the patient first and never conduct any treatment without first validating that it is what’s best for the patient. ... The responsibility for software engineers is similar: when they’re asked to make a change to the codebase, they need to first understand what they’re being asked to do and make sure that’s the best course of action for the codebase. “We’re inundated with requests,” Johnson said. “Product managers, business partners, customers are demanding that we make changes to applications, and that’s our job, right? It’s our job to build things that provide humanity and our customers and our businesses value, but we have to understand what is the impact of that change. How is it going to impact other systems? Is it going to be secure? Is it going to be maintainable? Is it going to be performant? Is it ultimately going to help the customer?” ... “We all love speed, right? But faster coding is not actually producing a high quality product being shipped. In fact, we’re seeing bottlenecks and lower quality code.” He went on to say that testing is the discipline that could be most transformed by generative AI. It is really good at studying the code and determining what tests you’re missing and how to improve test coverage.


API Key Security: 7 Enterprise-Proven Methods to Prevent Costly Data Breaches

To prevent API keys from leaking, the first and foremost rule is, as you guessed, never store them in the code. Embedding API keys directly in client-side code or committing them to version control systems is, no doubt, a recipe for disaster: anyone who can access the code or the repository can steal the keys. ... Implementing your own API key storage system? Out of the question, because securely storing and managing API keys brings tremendous operational overhead: storage, management, usage, and distribution all become your problem. ... API gateways, like AWS API Gateway, Kong, etc., are designed to solve these problems, simplifying and centralizing the management of all APIs and providing a single entry point for all requests. Features like rate limiting, throttling, and DDoS protection are baked in; API gateways can also provide centralized logging and monitoring; they even offer features like input validation, data masking, and response filtering. ... All the above practices enhance API security in the usage, storage, or production environment, but there is another area where API keys can be compromised: continuous integration/continuous deployment systems and pipelines. By nature, CI/CD involves running automation scripts and executing commands non-interactively, which sometimes requires API keys, and this means the keys need to be stored somewhere and passed to the pipelines at runtime.
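The "never in the code" rule usually means keys are injected at runtime, for example from an environment variable populated by a CI/CD secret store. A minimal sketch (the variable name `SERVICE_API_KEY` is a placeholder, not a standard):

```python
import os


def get_api_key(var_name="SERVICE_API_KEY"):
    """Read an API key from the environment instead of from source code.

    In CI/CD, the variable would be populated from the pipeline's secret
    store at runtime, so the key never appears in the repository.
    """
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; configure it as a runtime secret "
            "rather than committing the key to version control."
        )
    return key
```

Failing loudly when the variable is missing is deliberate: a silently empty key tends to surface later as a confusing authentication error deep in the pipeline.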

Daily Tech Digest - June 10, 2024

AI vs humans: Why soft skills are your secret weapon

AI can certainly assist with some aspects of the creative process, but true creativity is something only humans can achieve, for several reasons. Firstly, it often involves intuition, emotion and empathy, as well as thinking outside the box and making connections between seemingly unrelated concepts. Creativity is often shaped by personal experiences and cultural background, making every individual’s creative work unique. ... Leadership and strategic management will continue to be driven by humans. When making decisions, people are able to consider various factors such as personal relationships or company culture. General awareness, intuition, understanding of broader contexts that lie beyond data and effective communication skills are all human traits. ... Humans possess a crucial trait that AI is unable to replicate (although it’s definitely coming closer): Empathy. AI can’t communicate with your team members at the same level, provide solutions to their problems or offer a listening ear when necessary. Managing a team means talking to people, listening and understanding their needs and motivations. The human touch is essential to make sure that everyone is on the same page. 


How to Avoid Pitfalls and Mistakes When Coding for Quality

Code bloat occurs when so much unnecessary code accumulates that redundancies emerge. An abundance of unnecessary code can adversely affect the site's performance, and the code can become too complex to maintain. There are strategies for addressing redundancy; as code is implemented, it is crucial for it to be modularized, or broken down into smaller modular components with proper encapsulation and extraction. Modularized code promotes reuse, simplifies maintenance, and keeps the size of the code base in check. ... There is a tendency to "reinvent the wheel" when writing code. A more practical solution is to reuse libraries whenever possible, because they can be utilized within different parts of the code. Sometimes code bloat results from a historically bloated code base without an easy option for modularization, extraction, or library reuse. In this case, the most effective strategy is to turn to code refactoring. Regularly take the initiative to refactor code, eliminate unnecessary or duplicate logic, and improve the overall code structure of the repository over time.
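The extract-and-reuse pattern the article recommends can be shown in a tiny, hypothetical example (the function names and the validation rule are invented for illustration). The duplicated check is pulled into one module-level function, so the rule has a single source of truth:

```python
# Before: the same validation rule is duplicated in two handlers,
# a small instance of the redundancy that leads to code bloat.
def register_user_bloated(email, password):
    if "@" not in email or len(password) < 8:
        return None
    return {"email": email}


# After: the shared rule is extracted into one reusable, testable unit.
def credentials_valid(email, password):
    """Single source of truth for the validation rule."""
    return "@" in email and len(password) >= 8


def register_user(email, password):
    # The handler now delegates; changing the rule means editing one place.
    return {"email": email} if credentials_valid(email, password) else None
```

If the rule later changes (say, a longer minimum password), only `credentials_valid` is edited, and every caller picks up the change, which is exactly the maintenance win modularization is after.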


The BEC battleground: Why zero trust and employee education are your best line of defence

Even with extensive employee training, some BEC scams can bypass human vigilance. Comprehensive security processes are essential to minimize their impact. The zero-trust security model is crucial here. It assumes no inherent trust for anyone, inside or outside the network. With zero trust, every user and device must be continuously authenticated before accessing any resources. This makes it much harder for attackers. Even if they steal a login credential, they can’t automatically access the entire system. A key component of zero trust is multi-factor authentication (MFA) which acts as multiple locks on every access point. Just like a physical security system requiring multiple forms of identification, MFA requires not just a username and password, but an additional verification factor like a code from a phone app or fingerprint scan. This makes unauthorised entry, including through BEC scams, much harder. So, any IT infrastructure implemented must have zero trust and MFA at its core. A complement to zero trust is the principle of least privilege access; granting users only the minimum level of access required to perform their jobs. 
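The "code from a phone app" factor mentioned above is typically a time-based one-time password (TOTP, standardized in RFC 6238). As a sketch of how small that second lock really is, here is a stdlib-only implementation of the SHA-1 variant; a real deployment would rely on a vetted authentication library rather than hand-rolled crypto:

```python
import hashlib
import hmac
import struct
import time


def totp(secret, for_time=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (SHA-1 variant).

    The server and the phone app share `secret`; both derive the same
    short-lived code from the current 30-second time window.
    """
    now = time.time() if for_time is None else for_time
    counter = int(now // step)
    # HOTP: HMAC the big-endian counter, then apply dynamic truncation
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the code depends on both the shared secret and the clock, a phished password alone is useless to an attacker after the 30-second window closes, which is the property that makes MFA effective against BEC-style credential theft.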


Why CISOs need to build cyber fault tolerance into their business

For a rapidly evolving technology like GenAI, it is impossible to prevent all attacks at all times. The ability to adapt to, respond, and recover from inevitable issues is critical for organizations to explore GenAI successfully. Therefore, effective CISOs are complementing their prevention-oriented guidance for GenAI with effective response and recovery playbooks. Regarding third-party cybersecurity risk management, no matter the cybersecurity function’s best efforts, organizations will continue to work with risky third parties. Cybersecurity’s real impact lies not in asking more due diligence questions, but in ensuring the business has documented and tested third-party-specific business continuity plans in place. “CISOs should be guiding the sponsors of third-party partners to create a formal third-party contingency plan, including things like an exit strategy, alternative suppliers list, and incident response playbooks,” said Mixter. “CISOs tabletop everything else. It’s time to bring tabletop exercises to third-party cyber risk management.”


AI system poisoning is a growing threat — is your security regime ready?

CISOs shouldn’t breathe a sigh of relief, McGladrey says, as their organizations could be impacted by those attacks if they are using the vendor-supplied corrupted AI systems. ... Security experts and CISOs themselves say many organizations are not prepared to detect and respond to poisoning attacks. “We’re a long way off from having truly robust security around AI because it’s evolving so quickly,” Stevenson says. He points to the Protiviti client that suffered a suspected poisoning attack, noting that workers at that company identified the possible attack because its “data was not synching up, and when they dived into it, they identified the issue. [The company did not find it because] a security tool had its bells and whistles going off.” He adds: “I don’t think many companies are set up to detect and respond to these kinds of attacks.” ... “The average CISO isn’t skilled in AI development and doesn’t have AI skills as a core competency,” says Jon France, CISO with ISC2. Even if they were AI experts, they would likely face challenges in determining whether a hacker had launched a successful poisoning attack.


Accelerate Transformation Through Agile Growth

The problem is that when you start the next calendar year in January, you get a false sense of confidence because December is still 12 months away — all the time in the world, or so it seems, to execute your annual strategic plan. But then by April, after the first quarter has ended, chances are you’ll have started to feel a bit behind. You won’t be overly worried, however; you know you still have plenty of time to catch up. But then you’ll get to September and hit the 100-day-sprint which typically comes right after Labor Day in the United States. Now, panic will set in as you race to the end of the year desperately trying to hit those annual goals that were established all the way back in January. In growth cycles longer than 90 days, we tend to get off track. But it doesn’t have to be this way. You can use the 90-Day Growth Method to bring your team together every quarter to review and celebrate your progress over the past 90 days, refocus on goals and actions, and renew your commitment to achieving them. Soon, you and your team will feel re-energized and ready to move forward with courage and confidence for the next 90 days.


We need a Red Hat for AI

To be successful, we need to move beyond the confusing hype and help enterprises make sense of AI. In other words, we need more trust (open models) and fewer moving parts ... OpenAI, however popular it may be today, is not the solution. It just keeps compounding the problem with proliferating models. OpenAI throws more and more of your data into its LLMs, making them better but not any easier for enterprises to use in production. Nor is it alone. Google, Anthropic, Mistral, etc., etc., all have LLMs they want you to use, and each seems to be bigger/better/faster than the last, but no clearer for the average enterprise. ... You’d expect the cloud vendors to fill this role, but they’ve kept to their preexisting playbooks for the most part. AWS, for example, has built a $100 billion run-rate business by saving customers from the “undifferentiated heavy lifting” of managing databases, operating systems, etc. Head to the AWS generative AI page and you’ll see they’re lining up to offer similar services for customers with AI. But LLMs aren’t operating systems or databases or some other known element in enterprise computing. They’re still pixie dust and magic.


How Data Integration Is Evolving Beyond ETL

From an overall trend perspective, with the explosive growth of global data, the emergence of large models, and the proliferation of data engines for various scenarios, the rise of real-time data has brought data integration back to the forefront of the data field. If data is considered a new energy source, then data integration is like the pipeline of this new energy. The more data engines there are, the higher the efficiency, data source compatibility, and usability requirements of the pipeline will be. Although data integration will eventually face challenges from Zero ETL, data virtualization, and Data Fabric, in the visible future, the performance, accuracy, and ROI of these technologies have always failed to reach the level of popularity of data integration. Otherwise, the most popular data engines in the United States should not be Snowflake or Delta Lake but Trino. Of course, I believe that in the next 10 years, under the circumstances of Data Fabric x large models, virtualization + EtLT + data routing may be the ultimate solution for data integration. In short, as long as data volume grows, the pipelines between data will always exist.


Protecting your digital transformation from value erosion

The first form of value erosion pertains to cost increases within your project without an equivalent increase in the value or activities being delivered. With project delays, for example, there are usually additional costs incurred related to resource carryover because of the timeline increase. In this instance, the absence of additional work being delivered, or future work being pulled forward to offset the additional costs, is a prime illustration of value erosion. ... Decrease in value without decreased costs: A second form occurs when there’s a decrease in value without a cost adjustment. This can happen due to changing business priorities or project delays, especially within the build phase. As an alternative to extending the project timeline, organizations may decide to prioritize and reduce features to meet deadlines. ... A third form stems from inadequate risk planning: failure to identify and plan for potential risks leaves projects vulnerable to unforeseen complications and budgetary concerns. Large variances in initial SI responses can be attributed to differing assumptions about scope and service levels.


Ask a Data Ethicist: What Is Data Sovereignty?

Put simply, data sovereignty relates to who has the power to govern data. It determines who is legally empowered to make decisions about the collection and use of data. We can think about this in the context of two governments negotiating between each other, each having sovereign powers of self-determination. Indigenous governments are claiming their sovereign rights to their people’s data. On the one hand, this is a response to the atrocities that have taken place with respect to data gathered and taken beyond the control of Indigenous communities by researchers, governments, and other non-Indigenous parties. Yet, as data becomes increasingly important, many countries are seeking to set regulatory standards for data. It makes sense the Indigenous governments would assert similar rights with respect to their people’s data. ... Data sovereignty is an important part of Canada’s Truth and Reconciliation calls to action. The FNIGC governs the relevant processes for those seeking to work with First Nations in Canada to appropriately access data.



Quote for the day:

"The secret to success is good leadership, and good leadership is all about making the lives of your team members or workers better." -- Tony Dungy

Daily Tech Digest - March 21, 2023

CFO Priorities This Year: Rethinking the Finance Function

Marko Horvat, Gartner VP of research, adds CFOs must transition away from optimization and start thinking about transformation. “Making things faster, more accurate, and with less effort has benefits, but each round of improvement brings diminishing returns,” he says. “CFOs must start thinking about ways to transform the function to build and enhance capabilities, such as advanced data and analytics, in order to truly unlock more value from the finance function.” Sehgal says CFOs should be asking questions including, how do we create a futuristic vision for finance? Should short-term gains override longer-term benefits? And how do we fund digital transformation with the current pressures? “CFOs are focused on elevating the role of finance in the organization to be a value integrator across the enterprise, as well as enhancing value through new strategies that not only support development but that also promote innovations for capital allocation,” he explains.


Build Software Supply Chain Trust with a DevSecOps Platform

When building an application, developers, platform operators and security professionals want to monitor vulnerabilities throughout the software supply chain. The challenge comes when multiple vulnerability scanners are used at different stages in the pipeline and different teams are notified and required to take action without proper coordination. A security-focused application platform can build in scan orchestration to not only detect vulnerabilities but also to map those findings to a workload. This feature allows developers to identify issues throughout the life cycle of their applications and help them resolve issues, shifting left the responsibility with a higher degree of automation. Moreover, the platform can build trust with security analysts by showing the performance of application developers and helping them understand the risk that teams are facing. Once a platform detects these vulnerabilities, both at build time and at runtime, it needs to help developers triage and remediate them. 
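The "map findings to a workload" step can be sketched as a simple consolidation over multiple scanners' output. The field names (`workload`, `scanner`, `cve`, `severity`) are hypothetical; real platforms would ingest each scanner's native report format:

```python
from collections import defaultdict


def map_findings_to_workloads(findings):
    """Group raw scanner findings by the workload they affect.

    Instead of each team reacting to its own scanner's report in
    isolation, every workload owner sees one consolidated list of
    vulnerabilities from all scanners in the pipeline.
    """
    by_workload = defaultdict(list)
    for f in findings:
        by_workload[f["workload"]].append(
            {"scanner": f["scanner"], "cve": f["cve"], "severity": f["severity"]}
        )
    return dict(by_workload)
```

Even this trivial grouping illustrates the coordination win: the same CVE reported by two scanners lands in one place, attributed to one owning team, rather than generating two uncoordinated tickets.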


Developers, unite! Join the fight for code quality

Writing good code is a craft as much as any other, and should be regarded as such. You have every right to advocate for an environment and an operational model that respect the intricacies of what you do and the significance of the outcome. It’s important to value, and feel valued for, what you do. And not just for your own immediate happiness—it’s also a long-term investment in your career. Making things you don’t think are any good tends to wear on the psyche, which doesn’t exactly feed into a more motivated workday. In fact, a study conducted by Oxford University’s Saïd Business School found that happy workers were 13% more productive. What’s good for your craft is ultimately best for business—a conclusion both engineers and their employers can feel good about. Software plays a big role at just about every level of society—it’s how we create and process information, access goods and services, and entertain ourselves. With the advent of software-defined vehicles, it even determines how we move between physical locations.


Why data literacy matters for business success

Aligning data strategies with overall business strategy and operations is no mean feat. Chief Data Officers (CDOs) are ideal candidates in marrying together data analytics and the wider business, given their appreciation of informed decision making, and the desire to foster a data culture where internal information is properly managed and engaged with throughout the organisation. Moreover, their understanding of the technology landscape will assist when making platform and software selections. This stands to benefit all departments, who’ll gain access to the tools and skills needed to work with data and derive insights. CDOs also embody the “can do” approach to professional development, believing it’s possible to train employees in data-related skills, regardless of their technical proficiency. There’s a well-established correlation between hiring a CDO and business success, with research from Forrester suggesting 89% of organisations harnessing analytics to improve operations that appointed one to oversee the process have seen a positive business impact.


What the 'new automation' means for technology careers

AI is already playing a part in handling technology tasks. A survey released by OpsRamp finds more than 60% of companies adopting AIOps, which applies AI to monitor and improve IT operations themselves. The greatest IT operations challenge for enterprises in 2023 was automating as many operations as possible, cited by 66% of respondents. The main benefits of AIOps seen so far include reduction in open incident tickets (65%); reduction in mean time to detect or restore (56%), and automation of tedious tasks (52%). The latest IT staffing data from Janco Associates finds recent layoffs affected data center and operations staff, with business leaders looking to automate IT processes and reporting. The apparent trend here is that those pursuing careers in technology need to look higher up the stack -- at applications and business consulting. However, there's still a lot of work for people working with the plumbing and code. Unfortunately, getting to automation-driven abstraction -- especially if it involves AI -- requires some manual work up front.


How Cybersecurity Delays Critical Infrastructure Modernization

For critical infrastructure organizations, building a security strategy that works from both an operational technology (OT) and consumer data perspective is not as straightforward as it is in many other industries. Safely storing this data while implementing the latest technology has proved to be a significant challenge across the sector, meaning the service provided by these companies is being hampered. These concerns have prevented a range of technologies from being integrated quickly or at all. These technologies include renewable energy projects, electric vehicle technology, natural disaster contingencies and moving towards smarter grid solutions to replace aging infrastructure. Older operational technology becomes difficult to update and secure sufficiently while the use of third-party software also reduces the level of control organizations have over their data. In addition to this, a lack of automation increases the chances of human error, which could present opportunities to cybercriminals.


What Are Foundation AI Models Exactly?

ChatGPT, a generative AI solution built on a model with 175 billion parameters, demonstrates a deep understanding of written language. The smart tool can answer questions, summarize and translate text, produce articles on a given topic, write code, and much more. All you need is to provide ChatGPT with the right prompts. OpenAI’s groundbreaking product is just one example of the foundation models that are transforming AI application development as we know it. Instead of training multiple models for separate use cases, you can now leverage a pre-trained AI solution to enhance or fully automate tasks across multiple departments and job functions. With foundation AI models like ChatGPT, companies no longer have to train algorithms from scratch for every task they want to enhance or automate. Instead, you only need to select a foundation model that best fits your use case – and fine-tune its performance for the specific objective you’d like to achieve.
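The "one pre-trained model, many tasks" pattern can be sketched in a few lines: only the prompt template changes per task, not the model. Everything here is a toy illustration; `call_model` is a placeholder for a real foundation-model API, and the task names and templates are invented:

```python
# One pre-trained model, many tasks: the prompt template changes,
# the model does not. `call_model` is a placeholder for a real
# foundation-model API (e.g. a hosted LLM endpoint).

PROMPTS = {
    "summarize": "Summarize the following text:\n{text}",
    "translate": "Translate the following text into French:\n{text}",
    "classify":  "Label the sentiment of this text as positive or negative:\n{text}",
}

def build_prompt(task, text):
    if task not in PROMPTS:
        raise ValueError(f"unknown task: {task}")
    return PROMPTS[task].format(text=text)

def call_model(prompt):
    # Placeholder: swap in an actual API call here.
    return f"<model response to a {len(prompt)}-character prompt>"

print(call_model(build_prompt("summarize", "Foundation models are...")))
```

Fine-tuning goes one step further, adjusting the model's weights on task-specific examples, but prompting alone already illustrates why a single foundation model can serve many departments.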


As hiring freezes and layoffs hit, tech teams struggle to do more with less

There are a number of organizational hurdles holding back employees’ learning and development, Pluralsight found. For HR and L&D directors, budget constraints and costs were identified as the biggest barriers to upskilling (30%). This was also true for technology leaders, with 15% blaming financial constraints for getting in the way of employee upskilling. For technology workers themselves, finding time to invest in their own training was identified as the main issue: 42% of workers said they were too busy to upskill, with 18% saying their manager didn’t allow any time during the week to learn new skills. As a result, 21% of tech workers feel pressured to learn outside of work hours. ... However, the report added that giving employees time to invest in their training, address skills gaps, and gain valuable growth opportunities is a key factor in retention. “Upskilling during work hours will hinder short-term productivity, and managers often bear the brunt of this stress. But don’t sacrifice short-term productivity for long-term success,” the report said.


CISA kicks off ransomware vulnerability pilot to help spot ransomware-exploitable flaws

CISA says it will seek out affected systems using existing services, data sources, technologies, and authorities, including CISA's Cyber Hygiene Vulnerability Scanning. CISA initiated the RVWP by notifying 93 organizations identified as running instances of Microsoft Exchange Server with a vulnerability called "ProxyNotShell," widely exploited by ransomware actors. The agency said this round demonstrated "the effectiveness of this model in enabling timely risk reduction as we further scale the RVWP to additional vulnerabilities and organizations." Eric Goldstein, executive assistant director for cybersecurity at CISA, said, "The RVWP will allow CISA to provide timely and actionable information that will directly reduce the prevalence of damaging ransomware incidents affecting American organizations. We encourage every organization to urgently mitigate vulnerabilities identified by this program and adopt strong security measures consistent with the U.S. government's guidance at StopRansomware.gov."


A Simple Framework for Architectural Decisions

Technology Radar captures techniques, platforms, tools, languages and frameworks, and their level of adoption across an organization. However, this may not cover every need. Establishing consistent practices for things that apply across different parts of the system can be helpful. For example, you might want to ensure all logging is done in the same format and with the same information included. Or, if you’re using a REST API, you might want to establish some conventions around how it should be designed and used, like what headers to use or how to name things. Additionally, if you’re using multiple similar technologies, it can be useful to guide when to use each one. Technology Standards define the rules for selecting and using technologies within your company. They ensure consistency across the organization and reduce the risk of adopting new technology in a suboptimal way.
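The logging example above can be made concrete with Python's standard library: a shared formatter guarantees every service emits the same JSON shape. The field set chosen here is an assumption for illustration, not a prescribed standard:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit every record in one agreed JSON shape so that logs from
    different services stay comparable -- one small example of a
    technology standard applied organization-wide."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("payments")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("invoice created")  # {"level": "INFO", "logger": "payments", "message": "invoice created"}
```

Publishing such a formatter as a shared internal package is one way to turn a written convention into an enforced one.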



Quote for the day:

"Leadership is not about titles, positions, or flow charts. It is about one life influencing another." -- John C. Maxwell

Daily Tech Digest - October 17, 2022

Get ready for the metaverse

“The metaverse presents an opportunity to more fully transcend our physical limitations,” says Anand Srivatsa, CEO of Tobii. “Technologies like eye tracking will play a critical role in helping reduce the need for compute and networking power, which are required to deliver lifelike, immersive virtual environments. Eye tracking will also help users express their attention and intent in more realistic ways when they’re in the digital universe.” ... If human-digital devices enable the experience, and infrastructure supports metaverse-scale interactivity, then it’s how real the experience feels to users that will be the primary innovation and differentiator. To start, organizations will need strong dataops capabilities, and machine learning models will likely require synthetic data generation. Zuk continues, “Businesses looking to make waves in the metaverse usually begin by establishing a robust data pipeline—with synthetic data as the primary resource driving the development life cycle.” Bart Schouw, chief evangelist at Software AG, agrees.


Cybercriminals are having it easy with phishing-as-a-service

Phishing-as-a-service is a fairly new phenomenon in which the cybercriminal takes on the role of a service provider, carrying out attacks for others in exchange for a sum of money instead of just for themselves. PaaS only serves to show how hackers are becoming better organized and looking for greater monetisation from ransomware. Instead of threat actors being required to have technical knowledge of building or taking over infrastructure to host a phishing kit (a login page emulating known login interfaces like Facebook/Amazon/Netflix/OWA), the barrier to entry is significantly lowered with the introduction of PaaS. ... Phishing-as-a-service can be very advanced, with capabilities spanning from detecting sandbox environments to fingerprinting user agents in order to determine whether you might be a researcher's bot. That being said, web content filters can often limit the exposure of users.
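One signal such content filters commonly use against kits that emulate well-known login pages is lookalike-domain detection. A crude sketch with the standard library follows; the brand list and similarity threshold are invented for illustration, and a real filter would combine many stronger signals (threat feeds, certificate data, page content):

```python
from difflib import SequenceMatcher

# Illustrative brand list; a real filter would use curated threat feeds.
KNOWN_BRANDS = ["facebook.com", "amazon.com", "netflix.com"]

def lookalike_score(domain):
    """Similarity of `domain` to its closest known brand, in [0, 1]."""
    return max(SequenceMatcher(None, domain, brand).ratio()
               for brand in KNOWN_BRANDS)

def is_suspicious(domain, threshold=0.8):
    # Very similar to a known brand, yet not an exact match.
    return domain not in KNOWN_BRANDS and lookalike_score(domain) >= threshold

print(is_suspicious("faceb00k.com"))  # -> True
print(is_suspicious("facebook.com"))  # -> False
```

Even this naive heuristic catches the character-substitution tricks ("0" for "o") that commodity phishing kits rely on.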


Top 5 Data Science Trends That Will Dominate 2023

Automation plays a significant role in transforming the world. It has stimulated various transformations in business, resulting in sustained gains in proficiency. In the past few years, the best automation capabilities have been provided by the industrialisation of big data analytics. Analytic Process Automation (APA) encourages growth by providing prescriptive and predictive abilities, along with other insights, to businesses. Through this, businesses have been able to achieve excellent results efficiently and at low cost. Analytic Process Automation harnesses computing power to support the right decisions, and data analytics automation can be considered a genuinely disruptive force. Big data analysis helps substantially with stimulating valuable data usage and productivity. ... Data governance controls how data is accessed and handled. Compliance with the General Data Protection Regulation (GDPR) has led various organizations and businesses to prioritize data governance and how they handle consumer data.


Code Red: the Business Impact of Code Quality

The main problem with technical debt is that code lacks visibility. Code is an abstract concept that isn’t accessible to all members of your organization. Hence, it’s easy to ignore technical debt even if we are aware of the general problem. Quantifying and visualizing the situation in your codebase is key, both for the engineering teams as well as for product and management. Visualisations are wonderful as they let us tap into the most powerful pattern detector that we have in the known universe: the human brain. I explored the concept in depth in Your Code as a Crime Scene, and founded CodeScene back in 2015 to make the techniques available to a general audience. ... With code health and hotspots covered, we have everything we need for taking it full circle. Without a quantifiable business impact, it’s hard to make the case for investing in technical debt paydowns. Any measures we use risk being dismissed as vanity metrics while the code continues to deteriorate. We don’t want that to happen.
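The hotspot idea from Your Code as a Crime Scene can be sketched very simply: rank files by change frequency times size, where size serves as a cheap complexity proxy. In practice the commit counts come from version-control history (e.g. `git log --name-only`); the file names and figures below are invented for illustration:

```python
# Hotspot sketch: files that are both changed often AND large are
# the likeliest candidates for technical-debt paydown.
files = {
    "billing/invoice.py": {"commits": 41, "loc": 1200},
    "core/utils.py":      {"commits": 55, "loc": 300},
    "legacy/report.py":   {"commits": 3,  "loc": 2500},
}

def hotspots(stats):
    """Sort files by commits * lines of code, highest first."""
    return sorted(stats,
                  key=lambda f: stats[f]["commits"] * stats[f]["loc"],
                  reverse=True)

print(hotspots(files))  # invoice.py first: heavily changed AND large
```

Note how the rarely-touched legacy file ranks last despite its size: debt that nobody pays interest on can often wait, which is exactly the prioritization hotspots make visible to non-engineers.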


Those at the cutting edge of ML are increasingly turning to synthetic data to circumvent the numerous constraints of original or real-world data. For instance, the company Synthesis AI offers a cloud-based generation platform that delivers millions of perfectly labeled and diverse images of artificial people. Synthesis AI has been able to overcome many of the challenges that come with the messy reality of original data. For a start, the company makes the data cheaper. ... The challenges of real-world data don’t end there. In some fields, huge historical bias pollutes data sets. This is how we end up with global tech behemoths running into hot water because their algorithms don’t recognize black faces properly. Even now, with ML technology experts acutely aware of the bias issue, it can be challenging to collate a real-world dataset entirely free of bias. Even if a real-world dataset can account for all of the above challenges, which in reality is hard to imagine, data models need to be improved and tweaked constantly to stay unbiased and avoid degradation over time. That means a constant need for fresh data.


Improve Developer Experience to Prevent Burnout

It’s obvious that a poor developer experience creates a negative impact throughout an entire company. If developers aren’t producing good work due to unhappiness, illness or burnout, it’s likely that organizations aren’t staying at the cutting edge or offering competitive products in the market. A demoralized team can have a really negative business impact, and it can even change the way that people outside the company feel about it. An unhappy team isn’t going to lead to much creativity or productivity. As a way to combat this growing trend, companies are looking left and right for solutions. Some companies are reaching for things like extra PTO days, a full month off, better benefits, pay raises, and more fun work culture or relaxed dress codes. Those things are nice to have, and we’re certainly not speaking ill of any organization trying something new to help their employees. But at the end of the day, if the overwork and unrealistic expectations remain, the developer burnout will remain too.


Top skill-building resources and advice for CISOs

Ultimately, each hiring organisation will define what it needs in terms of cybersecurity to find the right person. In finance and insurance, for example, there will be specific rules that must be followed in different countries and cybersecurity leaders in such organisations may even be liable. In telecommunications, the skills required are likely to be more technical, whereas in government knowledge around governance and risk are top of the list. “For instance, a smaller organisation which is a greenfield site, or a large multinational where there is already an established security function require different sets of skills and approaches,” Joseph Head, director technical security at Intaso tells CSO. “There are a few commonalities between all CISO roles, however: an understanding of risk and risk appetite — in other words, an understanding of the business, and how much risk it can carry. This dictates how much work a CISO must do, and therefore available budget. Unlocking that budget can only be done by communicating effectively.”


Startup promises SD-WAN service with MPLS reliability, less complexity

Graphiant says that what makes its service different from SD-WAN offerings is how its Stateless Graphiant Core handles WAN data and control planes. The company says many large enterprises have been unwilling to give up the SLAs that come with MPLS for mission-critical traffic. Thus, SD-WAN augments the MPLS network for lower-priority traffic, and the network team must manage two different networks. The operational and administrative overhead of the combined solution, along with the complexity of overlays, tunnels, and policy management, means that many enterprises are turning back to MPLS providers that offer their own SD-WAN or that resell others’. That way, the enterprises are relieved of the burden of managing a complicated service themselves. “Enterprise networks have transitioned from predictable topologies to unpredictable ones,” Raza says. He argues that cloud services, IoT, work from home, and a range of other pressures have pushed the MPLS-plus-SD-WAN formula to its breaking point.


High-trust workplace meets no-trust network security

Clearly, the traditional model for IT security is no longer fit for this newly-dispersed world of work and a fresh model is needed — one where the unit of control is identity and where identity is the basis of a system of authorisation and authentication for every device, service and user on your network. Welcome to zero trust, a system which works on the assumption that identity needs to be authenticated and authorised. Given the shift to high-trust digital working environments and the surge in attacks, interest in zero trust is growing. According to Gartner, 40 percent of remote access will be conducted using a zero trust model by 2024 — up from five percent in 2020. Remote work is driving uptake, with zero trust seen as a fast way to achieve security and compliance, according to a Microsoft report on its adoption. Zero trust is implemented through consistent tools, workflows and processes delivered as a set of shared, centrally-managed and automated services. What does this look like? It means codifying policies and procedures for authorisation and access across the technology stacks, domains and service providers that comprise the IT infrastructure.
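The "codifying policies" step above is often implemented as policy-as-code: every request is evaluated on identity and device posture, with default deny and no implicit trust from network location. The resources, roles, and rules in this sketch are invented for illustration; real deployments use dedicated policy engines rather than hand-rolled code:

```python
# Policy-as-code sketch of a zero-trust access decision. Every request
# is checked against identity (roles) and device posture; anything not
# explicitly allowed is denied. Rules are illustrative assumptions.

POLICY = {
    "finance-db": {"roles": {"finance"}, "managed_device": True},
    "wiki":       {"roles": {"finance", "engineering"}, "managed_device": False},
}

def authorize(user_roles, device_managed, resource):
    rule = POLICY.get(resource)
    if rule is None:
        return False                         # default deny: unknown resource
    if rule["managed_device"] and not device_managed:
        return False                         # device posture check failed
    return bool(user_roles & rule["roles"])  # identity check

print(authorize({"finance"}, device_managed=False, resource="finance-db"))  # -> False
print(authorize({"finance"}, device_managed=True, resource="finance-db"))   # -> True
```

The point of expressing policy this way is the one the article makes: the same centrally managed rules apply consistently across technology stacks, domains, and service providers.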


IT leadership: How to defeat burnout

What sets Liberty Mutual apart from other organizations is our purpose. We exist to help people embrace today and confidently pursue tomorrow. This is our North Star and helps define and guide everything we do. We also understand that combating burnout requires connecting work to outcome. To ensure that this happens, we spend time defining targeted outcomes – the realization of the expected benefit – versus output – for example, simply turning on a new feature in a system. Success is measured by producing results and realizing benefits. Outcome might be the ability to deploy capabilities faster than before, for example. The key word is ‘capabilities,’ which help us deliver better products and services to customers. An outcome is much bigger than an output such as simply turning on a technology. These nuances matter in the context of burnout. If you’re working on a project and you don’t know why you’re doing it or what the intended results are, you’re not connected to why it matters.



Quote for the day:

"Brilliant strategy is the best route to desirable ends with available means." -- Max McKeown