
Daily Tech Digest - April 13, 2026


Quote for the day:

“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki




In her Forbes article, Jodie Cook examines the "vibe coding trap," a modern hazard for ambitious founders who leverage AI to build software at speeds that outpace their engineering teams. This newfound superpower allows non-technical leaders to generate products through natural language, yet it frequently results in a dangerous illusion of progress. The trap occurs when founders become so enamored with rapid execution that they neglect vital strategic priorities, such as sales and market positioning, while inadvertently creating technical debt and organizational friction. By diving into production themselves, founders risk undermining their specialists’ expertise and eroding trust within technical departments. To navigate this challenge, Cook advises founders to treat vibe coding as a tool for high-level communication and rapid prototyping rather than a replacement for professional development. Instead of getting bogged down in the minutiae of output, leaders must transition into "decision architects," focusing on judgment, vision, and accountability. By establishing disciplined boundaries between initial exploration and final execution, founders can harness AI's efficiency without compromising product scalability or team morale. Ultimately, the solution lies in slowing down to think clearly, ensuring that technical acceleration aligns with the company's long-term strategic objectives and cultural health.


Your developers are already running AI locally: Why on-device inference is the CISO’s new blind spot

In "Your developers are already running AI locally," VentureBeat explores the emergence of "Shadow AI 2.0," a trend where developers bypass cloud-based AI in favor of local, on-device inference. Driven by powerful consumer hardware and sophisticated quantization techniques, this "Bring Your Own Model" (BYOM) movement allows engineers to run complex Large Language Models directly on laptops. While this offers privacy and speed, it creates a significant "blind spot" for Chief Information Security Officers (CISOs). Traditional Data Loss Prevention (DLP) tools, which typically monitor cloud-bound traffic, are unable to detect these offline interactions. This shift relocates the primary enterprise risk from data exfiltration to issues of integrity, provenance, and compliance. Specifically, unvetted models can introduce security vulnerabilities through "contaminated" code or malicious payloads hidden within older model file formats like Pickle-based PyTorch files. To mitigate these risks, the article suggests that organizations must treat model weights as critical software artifacts rather than mere data. This involves establishing governed internal model hubs, implementing robust endpoint monitoring, and ensuring that corporate security frameworks adapt to a landscape where the perimeter has effectively shifted back to the device, requiring a comprehensive Software Bill of Materials (SBOM) to manage all local AI models effectively.
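The risk around Pickle-based model files is concrete enough to sketch. Below is a minimal, illustrative endpoint check, assuming a simple suffix allow-list policy (the suffix sets are my assumption, not any vendor's rule set), that classifies local model artifacts by whether their format can embed executable pickle payloads:

```python
# Illustrative check for risky local model artifacts. Pickle-prone formats
# can execute arbitrary code on load; weights-only formats cannot.
RISKY_SUFFIXES = {".pt", ".pth", ".bin", ".pkl", ".ckpt"}   # pickle-prone
SAFER_SUFFIXES = {".safetensors", ".gguf"}                   # weights-only

def classify_model_file(path: str) -> str:
    """Flag model files whose format can embed executable pickle payloads."""
    suffix = "." + path.rsplit(".", 1)[-1].lower() if "." in path else ""
    if suffix in RISKY_SUFFIXES:
        return "review"   # may contain arbitrary code on load
    if suffix in SAFER_SUFFIXES:
        return "allow"
    return "unknown"

def inventory(paths):
    """Build a tiny SBOM-style listing of local model files by risk class."""
    return {p: classify_model_file(p) for p in paths}
```

A governed internal model hub could run a check like this at upload time and feed the results into the SBOM the article calls for.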

The article explores the critical integration of financial management into engineering workflows, treating cloud costs not as a back-office accounting task but as a real-time telemetry signal comparable to latency or uptime. Traditionally, a broken feedback loop exists where engineers prioritize performance while finance monitors quarterly bills, often leading to expensive surprises like scaling anomalies caused by inefficient code. By adopting FinOps, developers embrace "cost as a runtime signal," enabling them to observe the immediate financial impact of their architectural decisions. This approach centers on unit economics—such as the marginal cost per API call or database query—transforming abstract billing data into visceral, actionable insights. The author emphasizes that cloud infrastructure often obscures its own economics, making it easy to overspend without immediate awareness. Ultimately, shifting cost-consciousness "left" into the development lifecycle allows teams to build more efficient systems, ensuring that auto-scaling and resource allocation are driven by value rather than waste. This cultural transformation empowers engineers to treat financial efficiency as a core engineering discipline, bridging the gap between technical execution and business value to optimize the overall health and sustainability of cloud-native environments.
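The unit-economics idea reduces to a small calculation. A hedged sketch of turning a billing window into a per-call cost signal; the service names and billing-window shape are assumptions for illustration, not the article's own tooling:

```python
def unit_cost(total_cost: float, total_calls: int) -> float:
    """Marginal-cost proxy: spend divided by request volume."""
    if total_calls == 0:
        return 0.0
    return total_cost / total_calls

def cost_signal(window):
    """Turn a billing window into a per-service 'runtime signal'.
    `window` maps service name -> (dollars, request_count)."""
    return {svc: round(unit_cost(c, n), 6) for svc, (c, n) in window.items()}
```

Exposed next to latency and uptime dashboards, a number like cost-per-API-call makes a scaling anomaly financially visible as soon as it happens rather than at the quarterly bill.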


The Tool That Predates Every Privacy Law — and May Just Outlive Them All

Devika Subbaiah’s article explores the enduring legacy of the HTTP cookie, a foundational technology created by Lou Montulli in 1994 to solve the web’s "state" problem. Initially designed to help websites remember users, cookies have evolved from a simple functional tool into a controversial mechanism for mass surveillance and targeted advertising. This shift triggered a global wave of regulation, resulting in the pervasive cookie banners mandated by the GDPR and CCPA. However, as the digital landscape shifts toward a privacy-first era, major players like Google are phasing out third-party cookies in favor of new tracking frameworks like the Privacy Sandbox. Despite these systemic changes and the legal scrutiny surrounding data harvesting, the article argues that the cookie’s fundamental utility ensures its survival. While third-party tracking faces an uncertain future, first-party cookies remain the essential backbone of the modern internet, enabling everything from persistent logins to shopping carts. Ultimately, the cookie predates our current legal frameworks and will likely outlive them because the internet as we know it cannot function without the basic ability to remember user interactions across sessions. It remains a resilient piece of digital infrastructure that continues to define our online experience even as privacy norms undergo radical transformation.
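The "basic ability to remember user interactions" boils down to a Set-Cookie header and the browser echoing it back. A minimal sketch using Python's standard `http.cookies` module; the `session` naming is illustrative:

```python
from http.cookies import SimpleCookie

# Server side: issue a first-party session cookie, i.e. the "state" that
# Montulli's 1994 design added to stateless HTTP.
def issue_session_cookie(session_id: str) -> str:
    c = SimpleCookie()
    c["session"] = session_id
    c["session"]["httponly"] = True   # not readable from page JavaScript
    c["session"]["path"] = "/"
    return c["session"].OutputString()

# Client side: the browser sends the cookie back on later requests, which
# is what keeps logins and shopping carts alive across pages.
def parse_cookie_header(header: str) -> dict:
    c = SimpleCookie()
    c.load(header)
    return {k: v.value for k, v in c.items()}
```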


The AI information gap and the CIO’s mandate for transparency

In the 2026 B2B landscape, the initial excitement surrounding artificial intelligence has shifted toward a healthy skepticism, creating a significant "information gap" that vendors must bridge to maintain client trust. According to Bryan Wise, modern CIOs are now tasked with a critical mandate for transparency, as buyers increasingly prioritize data integrity and governance over mere performance hype. Recent industry reports indicate that over half of B2B buyers engage sales teams earlier than in previous years due to implementation uncertainties, frequently raising sharp questions about training datasets, privacy protocols, and security guardrails. To overcome these trust-based obstacles, CIOs must serve as the central hub for cross-functional transparency initiatives. This proactive strategy involves creating comprehensive "AI dossiers" that document model functionality and training sources, while simultaneously arming sales and support teams with detailed technical documentation. By aligning marketing messaging with legal compliance and providing tangible evidence of ethical AI usage, organizations can transform transparency into a distinct competitive advantage. Ultimately, the modern CIO's role has expanded beyond technical oversight to include being the custodian of organizational truth, ensuring that AI narratives across all customer-facing channels remain consistent, verifiable, and grounded in accountability to prevent complex deals from stalling during the due diligence phase.


Why Codefinger represents a new stage in the evolution of ransomware

The Codefinger ransomware attack marks a significant evolution in cyber threats by shifting the focus from malicious code to credential exploitation. Discovered in early 2025, this breach specifically targeted Amazon S3 storage keys that were poorly managed by developers and stored in insecure locations. Unlike traditional ransomware that relies on planting malware to encrypt files, Codefinger hijackers simply utilized stolen access credentials to encrypt cloud-based data. This transition highlights critical vulnerabilities in the cloud’s shared responsibility model, where users are responsible for securing their own access keys rather than the provider. Furthermore, the attack exposes the limitations of conventional backup strategies; if encrypted data is automatically backed up, the recovery points become useless. To combat such sophisticated threats, organizations must move beyond basic defenses and implement robust secrets management, including systematic identification, periodic cycling, and granular access controls. Codefinger serves as a stark reminder that as ransomware tactics evolve, businesses must proactively map their attack vectors and prioritize secure configuration of cloud resources. Relying solely on off-site backups is no longer sufficient in an era where attackers directly manipulate administrative permissions to hold vital corporate data hostage.
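Credential hygiene starts with finding keys that should never have been committed. A hedged, grep-style sketch: the `AKIA` pattern below covers long-term IAM user access key IDs only, and a real secrets scanner checks far more than this one heuristic:

```python
import re

# Heuristic for AWS long-term access key IDs hardcoded in source or config:
# the literal prefix "AKIA" followed by 16 uppercase alphanumerics.
ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_exposed_keys(text: str):
    """Return any strings that look like hardcoded IAM/S3 access key IDs."""
    return ACCESS_KEY_RE.findall(text)
```

Running a check like this in CI is one piece of the systematic identification step; periodic key cycling and granular access controls address the keys a scan cannot see.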


Software Engineering 3.0: The Age of the Intent-Driven Developer

Software Engineering 3.0 marks a paradigm shift where the fundamental unit of programming transitions from technical syntax to human intent. While the first era focused on craftsmanship and manual machine translation, and the second on abstraction through frameworks, the third era utilizes artificial intelligence to absorb the heavy lifting of code generation. In this new landscape, developers act less like manual laborers and more like architects or curators who orchestrate complex systems. The article emphasizes that intent-driven development requires a unique set of skills: the ability to write precise specifications, critically evaluate AI-generated outputs for subtle errors, and use testing as a primary method for documenting intent. Rather than replacing the engineer, these tools elevate the profession, allowing practitioners to solve higher-level problems while automating boilerplate tasks. Success in SE 3.0 depends on clear thinking and rigorous judgment rather than just typing speed or syntax memorization. Ultimately, this "antigravity" moment in software development narrows the gap between imagination and implementation, transforming the developer into a high-level conductor who manages probabilistic components and complex orchestration to create resilient systems. This evolution reflects a broader historical trend where each layer of abstraction empowers engineers to build more ambitious technology.
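"Testing as a primary method for documenting intent" can be made concrete: write the assertions first, then let the AI draft an implementation against them. The `slugify` function below is a hypothetical stand-in, not an example from the article:

```python
import re

def slugify(title: str) -> str:
    """Candidate implementation; in SE 3.0 this part might be AI-generated."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_intent():
    # Each assertion records one piece of intent the generated code must meet;
    # the spec survives even if the implementation is regenerated from scratch.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"
    assert slugify("Already-Good") == "already-good"
```

The developer's judgment lives in the test, which is exactly the "decision architect" posture the article describes.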


Artificial intelligence, specifically Large Language Models, currently operates on a foundation of mathematical probability rather than objective truth, making it fundamentally untrustworthy in its present state. As explored in Kevin Townsend’s analysis, AI is plagued by persistent issues including hallucinations, inherent biases, and a tendency toward sycophancy, where models mirror user expectations rather than providing factual accuracy. Furthermore, the phenomenon of model collapse suggests an inevitable systemic decay—akin to the second law of thermodynamics—whereby AI-generated data pollutes future training sets, compounding errors over generations. Despite these significant risks and the lack of a verifiable ground truth, the rapid pace of modern business and the demand for immediate return on investment are driving enterprises to deploy these technologies prematurely. We find ourselves in a paradoxical situation where, although we cannot safely trust AI today, the competitive necessity and overwhelming promise of the technology mean that society must eventually find a way to do so. Achieving this transition requires a deep understanding of AI’s limitations, a focus on securing systems against adversarial abuse, and a shift from viewing AI as a fact-based database to recognizing its probabilistic, token-based nature. Ultimately, while current systems are built on sand, the trajectory of innovation makes reliance inevitable.


The business mobility trends driving workforce performance in 2026

The article outlines the pivotal business mobility trends set to redefine workforce performance and productivity by 2026, emphasizing the shift toward integrated, secure, and efficient digital ecosystems. A primary driver is zero-touch device enrollment, which streamlines the large-scale deployment of pre-configured hardware, effectively eliminating traditional IT bottlenecks. Complementing this is the transition to Zero Trust security architectures, which replace implicit trust with continuous verification to protect distributed workforces from escalating cyber threats. Furthermore, the integration of unified cloud and connectivity services through single-vendor partnerships is highlighted as a critical method for reducing operational complexity and enhancing business resilience. This holistic approach extends to comprehensive end-to-end device lifecycle management, which leverages standardisation and refurbishment to achieve long-term cost-efficiency and support environmental sustainability goals. Ultimately, the article argues that navigating the complexities of hybrid work and rapid innovation requires a coherent mobility strategy managed by a single experienced partner. By consolidating these technological pillars, ranging from initial provisioning to secure retirement, organizations can ensure consistent security postures and allow internal teams to focus on high-value initiatives rather than day-to-day operational tasks. This strategic alignment is essential for maintaining a competitive edge in an increasingly mobile-first global landscape.


Fixing vulnerability data quality requires fixing the architecture first

Art Manion, Deputy Director at Tharros, argues that resolving the persistent issues within vulnerability data quality necessitates a fundamental overhaul of underlying architectures rather than just refining the data itself. In this interview, Manion explains that current repositories often suffer from inconsistency and a lack of trust because they were not designed with effective collection and management in mind. A central concept discussed is Minimum Viable Vulnerability Enumeration (MVVE), which represents the necessary assertions to deduplicate vulnerabilities across different systems. Interestingly, research suggests that no static "minimum" exists; instead, assertions must remain variable and evolve alongside our understanding of threats. Manion proposes that vulnerability records should be viewed as collections of independently verifiable, machine-usable assertions that prioritize provenance and transparency. He further critiques the security community's over-reliance on metrics like CVSS scores, which often distort perceptions and distract from the critical task of assessing actual risk within a specific context. Ultimately, the proposal suggests that before the industry develops new tools or specifications, it must establish a solid foundation of shared terms and principles. By addressing architectural flaws and accepting that information will naturally be incomplete, organizations can build more resilient, trustworthy systems for managing global vulnerability information.
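Manion's framing of a record as a bag of independently verifiable assertions with provenance can be sketched as a data structure. The field names below are illustrative assumptions, not a published MVVE schema:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Assertion:
    claim: str        # e.g. "affects", "fixed-in", "cwe"
    value: str
    source: str       # provenance: who asserted this
    verifiable: bool  # can a third party check it independently?

@dataclass
class VulnRecord:
    vuln_id: str
    assertions: list = field(default_factory=list)

    def add(self, a: Assertion):
        self.assertions.append(a)

    def from_source(self, source: str):
        """Filter assertions by provenance, keeping conflicting claims visible."""
        return [a for a in self.assertions if a.source == source]
```

Because each assertion carries its own source, two repositories can disagree without either overwriting the other, which is what makes deduplication and trust tractable.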

Daily Tech Digest - August 11, 2025


Quote for the day:

"Leadership is absolutely about inspiring action, but it is also about guarding against mis-action." -- Simon Sinek


Attackers Target the Foundations of Crypto: Smart Contracts

Central to the attack is a malicious smart contract, written in the Solidity programming language, with obfuscated functionality that transfers stolen funds to a hidden externally owned account (EOA), says Alex Delamotte, the senior threat researcher with SentinelOne who wrote the analysis. ... The decentralized finance (DeFi) ecosystem relies on smart contracts — as well as other technologies such as blockchains, oracles, and key management — to execute transactions, manage data on a blockchain, and allow for agreements between different parties and intermediaries. Yet their linchpin status also makes smart contracts a focus of attacks and a key component of fraud. "A single vulnerability in a smart contract can result in the irreversible loss of funds or assets," Shashank says. "In the DeFi space, even minor mistakes can have catastrophic financial consequences. However, the danger doesn’t stop at monetary losses — reputational damage can be equally, if not more, damaging." ... Companies should take stock of all smart contracts by maintaining a detailed and up-to-date record of all deployed smart contracts, verifying every contract, and conducting periodic audits. Real-time monitoring of smart contracts and transactions can detect anomalies and provide fast response to any potential attack, says CredShields' Shashank.
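The inventory advice at the end can be sketched as a minimal registry. The field names and dates below are illustrative assumptions, not SentinelOne's or CredShields' tooling:

```python
from dataclasses import dataclass

@dataclass
class ContractRecord:
    address: str      # on-chain address of the deployed contract
    name: str
    verified: bool    # source verified against deployed bytecode?
    last_audit: str   # ISO date of the most recent audit

def overdue_for_audit(records, cutoff: str):
    """Flag contracts that are unverified or whose last audit predates cutoff."""
    return [r.address for r in records if not r.verified or r.last_audit < cutoff]
```

Feeding a list like this into real-time transaction monitoring is what turns a static inventory into the fast-response capability the article recommends.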


Is AI the end of IT as we know it?

CIOs have always been challenged by the time, skills, and complexities involved in running IT operations. Cloud computing, low-code development platforms, and many DevOps practices helped IT teams move “up stack,” away from the ones and zeros, to higher-level tasks. Now the question is whether AI will free CIOs and IT to focus more on where AI can deliver business value, instead of developing and supporting the underlying technologies. ... Joe Puglisi, growth strategist and fractional CIO at 10xnewco, offered this pragmatic advice: “I think back to the days when you wrote in assembly and it took a lot of time. We introduced compilers, higher-level languages, and now we have AI that can write code. This is a natural progression of capabilities and not the end of programming.” The paradigm shift suggests CIOs will have to revisit their software development lifecycles for significant shifts in skills, practices, and tools. “AI won’t replace agile or DevOps — it’ll supercharge them with standups becoming data-driven, CI/CD pipelines self-optimizing, and QA leaning on AI for test creation and coverage,” says Dominik Angerer, CEO of Storyblok. “Developers shift from coding to curating, business users will describe ideas in natural language, and AI will build functional prototypes instantly. This democratization of development brings more voices into the software process while pushing IT to focus on oversight, scalability, and compliance.”


From Indicators to Insights: Automating Risk Amplification to Strengthen Security Posture

Security analysts don’t want more alerts. They want more relevant ones. Traditional SIEMs generate events using their own internal language involving things like MITRE tags, rule names, and severity scores. But what frontline responders really want to know is which users, systems, or cloud resources are most at risk right now. That’s why contextual risk modeling matters. Instead of alerting on abstract events, modern detection should aggregate risk around assets including users, endpoints, APIs, or services. This shifts the SOC conversation from “What alert fired?” to “Which assets should I care about today?” ... The burden of alert fatigue isn’t just operational but also emotional. Analysts spend hours chasing shadows, pivoting across tools, chasing one-off indicators that lead nowhere. When everything is an anomaly, nothing is actionable. Risk amplification offers a way to reduce this unseen yet heavy burden on security analysts, and the emotional toll it takes, by aligning high-risk signals to high-value assets and surfacing insights only when multiple forms of evidence converge. Rather than relying on a single failed login or endpoint alert, analysts can correlate chains of activity, whether they be login anomalies, suspicious API queries, lateral movement, or outbound data flows, all of which together paint a much stronger picture of risk.
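A minimal sketch of the risk-amplification idea, assuming illustrative thresholds: aggregate scores per asset and surface an asset only when distinct evidence types converge.

```python
from collections import defaultdict

def amplify(signals, min_types=2, min_score=5):
    """signals: iterable of (asset, evidence_type, score) tuples.
    Surfaces only assets where multiple evidence types converge."""
    scores = defaultdict(int)
    types = defaultdict(set)
    for asset, etype, score in signals:
        scores[asset] += score
        types[asset].add(etype)
    # Highest aggregate risk first.
    return sorted(
        (a for a in scores if len(types[a]) >= min_types and scores[a] >= min_score),
        key=lambda a: -scores[a],
    )
```

Note how a single noisy signal type, however loud, never surfaces an asset on its own; that is the alert-fatigue reduction in miniature.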


The Immune System of Software: Can Biology Illuminate Testing?

In software engineering, quality assurance is often framed as identifying bugs, validating outputs, and confirming expected behaviour. But similar to immunology, software testing is much more than verification. It is the process of defining the boundaries of the system, training it to resist failure, and learning from its past weaknesses. Like the immune system, software testing should be multi-layered, adaptive, and capable of evolving over time. ... Just as innate immunity is present from biological birth, unit tests should be present from the birth of our code. Just as innate immunity doesn't need a full diagnostic history to act, unit tests don’t require a full system context. They work in isolation, making them highly efficient. But they also have limits: they can't catch integration issues or logic bugs that emerge from component interactions. That role belongs to more evolved layers. ... Negative testing isn’t about proving what a system can do — it’s about ensuring the system doesn’t do what it must never do. It verifies how the software behaves when exposed to invalid input, unauthorized access, or unexpected data structures. It asks: Does the system fail gracefully? Does it reject the bad while still functioning with the good? Just as an autoimmune disease results from a misrecognition of the self, software bugs often arise when we misrecognise what our code should do and what it should not do.
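Negative testing, as described, asserts what the system must never do. The `parse_age` function below is a hypothetical stand-in for any input-handling component:

```python
def parse_age(raw: str) -> int:
    """Accept only plausible integer ages; fail gracefully otherwise."""
    if not raw.strip().isdigit():
        raise ValueError("age must be a non-negative integer")
    age = int(raw)
    if age > 150:
        raise ValueError("age out of range")
    return age

def rejects(value) -> bool:
    """True when the parser refuses the input instead of mis-accepting it."""
    try:
        parse_age(value)
        return False
    except ValueError:
        return True
```

The "immune" posture is in the second function: the test suite checks that bad input is rejected, not merely that good input succeeds.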


CSO hiring on the rise: How to land a top security exec role

“Boards want leaders who can manage risk and reputation, which has made soft skills — such as media handling, crisis communication, and board or financial fluency — nearly as critical as technical depth,” Breckenridge explains. ... “Organizations are seeking cybersecurity leaders who combine technical depth, AI fluency, and strong interpersonal skills,” Fuller says. “AI literacy is now a baseline expectation, as CISOs must understand how to defend against AI-driven threats and manage governance frameworks.” ... Offers of top pay and authority to CSO candidates obviously come with high expectations. Organizations are looking for CSOs with a strong blend of technical expertise, business acumen, and interpersonal strength, Fuller says. Key skills include cloud security, identity and access management (IAM), AI governance, and incident response planning. Beyond technical skills, “power skills” such as communication, creativity, and problem-solving are increasingly valued, Fuller explains. “The ability to translate complex risks into business language and influence board-level decisions is a major differentiator. Traits such as resilience, adaptability, and ethical leadership are essential — not only for managing crises but also for building trust and fostering a culture of security across the enterprise,” he says.


From legacy to SaaS: Why complexity is the enemy of enterprise security

By modernizing, i.e., moving applications to a more SaaS-like consumption model, the network perimeter and associated on-prem complexity tends to dissipate, which is actually a good thing, as it makes ZTNA easier to implement. As the main entry point into an organization’s IT system becomes the web application URL (and browser), this reduces attackers’ opportunities and forces them to focus on the identity layer, subverting authentication, phishing, etc. Of course, a higher degree of trust has to be placed (and tolerated) in SaaS providers, but at least we now have clear guidance on what to look for when transitioning to SaaS and cloud: identity protection, MFA, and phishing-resistant authentication mechanisms become critical—and these are often enforced by default or at least much easier to implement compared to traditional systems. ... The unwillingness to simplify the technology stack by moving to SaaS is then combined with a reluctant and forced move to the cloud for some applications, usually dictated by business priorities or even ransomware attacks (as in the BL case above). This is a toxic mix which increases complexity and reduces the ability for a resource-constrained organization to keep security risks at bay.


Why Metadata Is the New Interface Between IT and AI

A looming risk in enterprise AI today is using the wrong data or proprietary data in AI data pipelines. This may include feeding internal drafts to a public chatbot, training models on outdated or duplicate data, or using sensitive files containing employee, customer, financial or IP data. The implications range from wasted resources to data breaches and reputational damage. A comprehensive metadata management strategy for unstructured data can mitigate these risks by acting as a gatekeeper for AI workflows. For example, if a company wants to train a model to answer customer questions in a chatbot, metadata can be used to exclude internal files, non-final versions, or documents marked as confidential. Only the vetted, tagged, and appropriate content is passed through for embedding and inference. This is a more intelligent, nuanced approach than simply dumping all available files into an AI pipeline. With rich metadata in place, organizations can filter, sort, and segment data based on business requirements, project scope, or risk level. Metadata augments vector labeling for AI inferencing. A metadata management system helps users discover which files to feed the AI tool, such as health benefits documents in an HR chatbot, while vector labeling gives deeper information as to what’s in each document.
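The gatekeeper pattern can be sketched as a simple pre-embedding filter. The tag names (`status`, `confidential`, `vetted`) are illustrative assumptions, not any specific product's schema:

```python
def eligible_for_pipeline(meta: dict) -> bool:
    """Only final, non-confidential, vetted content may be embedded."""
    return (
        meta.get("status") == "final"
        and meta.get("confidential") is False
        and meta.get("vetted") is True
    )

def filter_corpus(files):
    """files: mapping of path -> metadata dict; returns paths safe to embed."""
    return [path for path, meta in files.items() if eligible_for_pipeline(meta)]
```

A file with no metadata at all is excluded by default, which is the safer failure mode for an AI pipeline.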


Ask a Data Ethicist: What Should You Know About De-Identifying Data?

Simply put, data de-identification is removing or obscuring details from a dataset in order to preserve privacy. We can think about de-identification as existing on a continuum... Pseudonymization is the application of different techniques to obscure the information, but allows it to be accessed when another piece of information (key) is applied. In the above example, the identity number might unlock the full details – Joe Blogs of 123 Meadow Drive, Moab UT. Pseudonymization retains the utility of the data while affording a certain level of privacy. It should be noted that while the terms anonymize or anonymization are widely used – including in regulations – some feel it is not really possible to fully anonymize data, as there is always a non-zero chance of reidentification. Yet, taking reasonable steps on the de-identification continuum is an important part of compliance with requirements that call for the protection of personal data. There are many different articles and resources that discuss a wide variety of types of de-identification techniques and the merits of various approaches ranging from simple masking techniques to more sophisticated types of encryption. The objective is to strike a balance between the complexity of the technique to ensure sufficient protection, while not being burdensome to implement and maintain.
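Keyed hashing is one common point on that continuum. A minimal sketch, assuming HMAC-SHA256 as the pseudonymization technique; the truncation length and key handling here are illustrative choices, and real deployments need proper key management:

```python
import hashlib
import hmac

def pseudonymize(value: str, key: bytes) -> str:
    """Same input + same key -> same token, so joins and counts still work,
    but reversing the mapping requires access to the key."""
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()[:16]
```

Because the mapping is deterministic under one key, the data keeps its analytic utility; rotating or destroying the key moves the dataset further along the continuum toward anonymization.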


5 ways business leaders can transform workplace culture - and it starts by listening

Antony Hausdoerfer, group CIO at auto breakdown specialist The AA, said effective leaders recognize that other people will challenge established ways of working. Hearing these opinions comes with an open management approach. "You need to ensure that you're humble in listening, but then able to make decisions, commit, and act," he said. "Effective listening is about managing with humility with commitment, and that's something we've been very focused on recently." Hausdoerfer told ZDNET how that process works in his IT organization. "I don't know the answer to everything," he said. "In fact, I don't know the answer to many things, but my team does, and by listening to them, we'll probably get the best outcome. Then we commit to act." ... Bev White, CEO at technology and talent solutions provider Nash Squared, said open ears are a key attribute for successful executives. "There are times to speak and times to listen -- good leaders recognize which is which," she said. "The more you listen, the more you will understand how people are really thinking and feeling -- and with so many great people in any business, you're also sure to pick up new information, deepen your understanding of certain issues, and gain key insights you need."


Beyond Efficiency: AI's role in reshaping work and reimagining impact

The workplace of the future is not about humans versus machines; it's about humans working alongside machines. AI's real value lies in augmentation: enabling people to do more, do better, and do what truly matters. Take recruitment, for example. Traditionally time-intensive and often vulnerable to unconscious bias, hiring is being reimagined through AI. Today, organisations can deploy AI to analyse vast talent pools, match skills to roles with precision, and screen candidates based on objective data. This not only reduces time-to-hire but also supports inclusive hiring practices by mitigating biases in decision-making. In fact, across the employee lifecycle, it personalises experiences at scale. From career development tools that recommend roles and learning paths aligned with individual aspirations, to chatbots that provide real-time HR support, AI makes the employee journey more intuitive, proactive, and empowering. ... AI is not without its challenges. As with any transformative technology, its success hinges on responsible deployment. This includes robust governance, transparency, and a commitment to fairness and inclusion. Diversity must be built into the AI lifecycle, from the data it's trained on to the algorithms that guide its decisions. 

Daily Tech Digest - February 16, 2025


Quote for the day:

"Leaders should influence others in such a way that it builds people up, encourages and edifies them so they can duplicate this attitude in others." -- Bob Goshen


A look under the hood of transformers, the engine driving AI model evolution

Depending on the application, a transformer model follows an encoder-decoder architecture. The encoder component learns a vector representation of data that can then be used for downstream tasks like classification and sentiment analysis. The decoder component takes a vector or latent representation of the text or image and uses it to generate new text, making it useful for tasks like sentence completion and summarization. For this reason, many familiar state-of-the-art models, such as the GPT family, are decoder only. Encoder-decoder models combine both components, making them useful for translation and other sequence-to-sequence tasks. For both encoder and decoder architectures, the core component is the attention layer, as this is what allows a model to retain context from words that appear much earlier in the text. ... Currently, transformers are the dominant architecture for many use cases that require LLMs and benefit from the most research and development. Although this does not seem likely to change anytime soon, one different class of model that has gained interest recently is state-space models (SSMs) such as Mamba. This highly efficient algorithm can handle very long sequences of data, whereas transformers are limited by a context window.
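The attention layer described above can be shown in a few lines. A toy, pure-Python sketch of scaled dot-product attention for a single query; real models batch this over matrices with learned projections:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """One query attends over all keys; output is a weighted mix of values.
    This weighting is how a token retains context from much earlier tokens."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
```

Every position scores every other position, which is both the source of the transformer's long-range context and the quadratic cost that SSMs like Mamba try to avoid.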


McKinsey On Return To Office: Leaders Are Focused On The Wrong Thing

Unsurprisingly, older employees report higher satisfaction with on-site work than their younger colleagues. Nevertheless, employees across all work models report similar satisfaction levels, which debunks the belief that bringing people back in person automatically enhances engagement or retention. Worse still, leaders consistently overestimate their organizations’ maturity regarding the very factors used to justify returning to the office. ... The balance of power may have shifted back to bosses, but, as Voltaire said first and Spider-Man famously learns from Uncle Ben, “with great power comes great responsibility.” No matter what workplace model a given employee finds themselves in today, the past few years likely opened their eyes to the power of choice and flexibility and the chasm between modern hospitality and retail-oriented experiences and the vibrancy and community in a traditional office. ... So employees believe they are doing the work, and they may accept that flexibility is a reward for objectively high performance. If executives believe the purpose of the office is to accelerate innovation, connectivity, and mentoring, they are on the hook to ensure it does. Leaders must model new behaviors, invest in workplace experience, and learn to measure outcomes without a bias for presence. Employees may quit as soon as the power pendulum swings back.


8 tips for being a more decisive leader

“Clarity is what is expected from a leader,” says Malhotra. “Clarity of vision, clarity in strategy, clarity of plan, clarity in the process, and clarity in how to measure success.” Showing up with an answer is not as important to the decision as bringing clarity to the process. “As a leader, you’re the force multiplier for your organization,” he says. “Force multiplying is a vector quantity, not a scalar quantity. It’s a vector quantity because the direction is very important. It’s not just the magnitude. It’s the direction, too. So being a force multiplier requires that you are clear when it comes to the end state you are trying to achieve.” ... “There are two things you have to consider: the urgency and the importance of the decision,” says Efrain Ruh, field CTO for Continental Europe at Digitate. If something is complex and important, take your time and gather as much information as possible. But if it is a decision that is easy to come back from, he says, “I try not to go too deep.” “There are ‘single-door decisions’ and ‘double-door decisions,’” agrees Malhotra. When it’s a single-door decision, you can never come back through that door after you have walked through it. ... When you step into a leadership role, you begin to see everything from a high-level strategy point of view. But your decisions will often affect people with their boots on the ground.


Can English Dethrone Python as Top Programming Language?

IDC predicts that by 2028, natural language will become the most widely used programming language, with developers using it to create 70% of net-new digital solutions. (Source: IDC FutureScape: Worldwide Developer and DevOps 2025 Predictions) “I actually think that the best phrasing of this prediction would be to replace ‘natural language’ with ‘English’ because of the dominance of English as a spoken and written language worldwide,” Dayaratna said. Moreover, he said he believes that in four to five years, developers will increasingly go to a chatbot-like interface and use natural language to produce digital solutions. Meanwhile, code will be used to innovate on the technology substrate that enables this kind of technology. “In other words, we’re not far from a world that witnesses the demise of commercial off-the-shelf software simply because it will be so easy to create such software, in a custom way, for an organization’s business processes,” Dayaratna said. Hence, he explained that we are seeing the emergence of what Amjad Masad, CEO of Replit, called the era of “personal software.” “Just as the Mac inaugurated personal computing in 1984, generative AI has initiated the era of ‘personal software’ that recognizes the specificity of individual and organizational preferences,” Dayaratna said.


What is anomaly detection? Behavior-based analysis for cyber threats

“Anomaly detection is the holy grail of cyber detection where, if you do it right, you don’t need to know a priori the bad thing that you’re looking for,” Bruce Potter, CEO and founder of Turngate, tells CSO. “It’ll just show up because it doesn’t look like anything else or doesn’t look like it’s supposed to. People have been tilting at that windmill for a long time, since the 1980s, trying to figure out what normal is so they can look for deviations from it to find all the bad things happening in their enterprises.” ... Although predicated on advanced math concepts, anomaly detection, or as the NIST Cybersecurity Framework 2.0 calls it, “adverse event analysis,” has over the past two decades been incorporated into a wide range of cybersecurity tools, including endpoint detection and response (EDR), firewall, and security information and event management (SIEM) tools. “In general, you can split the detection universe into two halves,” Potter says. “One is finding known bads, and then one is finding things that might be bad. Known bads are typically like a signature base where I know very specifically if I see this file or this exact thing happened on the system, it’s bad.” Known bads are typically flagged by fundamental cybersecurity tools.
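The "deviation from normal" idea Potter describes can be reduced to a deliberately simplified statistical sketch. Real products use far richer behavioral models; the baseline data and threshold below are invented for illustration:

```python
import statistics

def flag_anomalies(baseline, observations, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the baseline mean -- 'figure out what normal is, then look
    for deviations from it', stripped to its simplest form."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observations if abs(x - mean) / stdev > threshold]

# Baseline: a user's typical daily login counts (hypothetical data)
baseline = [40, 42, 38, 41, 39, 43, 40, 41]
print(flag_anomalies(baseline, [41, 44, 120]))  # [120]
```

Unlike a signature match, nothing here encodes what "bad" looks like in advance; the 120-login day is flagged purely because it does not look like anything in the baseline.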


Open Source AI Models: Perfect Storm for Malicious Code, Vulnerabilities

Executable data files are not the only threats, however. Licensing is another issue: While pretrained AI models are frequently called "open source AI," they generally do not provide all the information needed to reproduce the AI model, such as code and training data. Instead, they provide the weights generated by the training and are covered by licenses that are not always open source compatible. Creating commercial products or services from such models can potentially result in violating the licenses, says Andrew Stiefel, a senior product manager at Endor Labs. "There's a lot of complexity in the licenses for models," he says. "You have the actual model binary itself, the weights, the training data, all of those could have different licenses, and you need to understand what that means for your business." Model alignment — how well its output aligns with the developers' and users' values — is the final wildcard. DeepSeek, for example, allows users to create malware and viruses, researchers found. Other models — such as OpenAI's o3-mini model, which boasts more stringent alignment — have already been jailbroken by researchers. These problems are unique to AI systems, and the boundaries of how to test for such weaknesses remain a fertile field for researchers, says ReversingLabs' Pericin.


Risk Matters: Cyber Risk and AI – The Changing Landscape

Although AI helps organizations defend against cyber-attacks, it is a double-edged sword. More to the point, AI is also providing cyber attackers with an array of cost-efficient techniques that facilitate their cyber-attacks. Sophisticated AI-generated phishing attacks, social engineering attacks, and ransomware attacks are just a few of the ways AI has made the cyber-attack landscape more lethal. AI-generated models used by cyber attackers and cyber defenders have been evolving at a rapid pace. As a result, the strategic interactions between cyber attackers and cyber defenders have become more automated, more dynamic, more adaptive, and more complex. These developments have increased, and substantially changed, the game-theoretic aspects associated with cyber risk. ... Besides considering the total amount to spend on cybersecurity-related activities, a subsidiary question for organizations to answer is: How much of our organization’s cybersecurity-related budget should be devoted to developing and implementing AI models designed to reduce the likelihood of a cyber incident? In answering this subsidiary question, organizations need to consider the costs associated with the AI models.


Juniper CEO: ‘I am disappointed and somewhat puzzled’ by DOJ merger rejection

“They’re taking such a narrow view of the total transaction, which is the wireless line segment, a relatively small part of Juniper’s business, a small part of HPE’s business. And even if you do take a look at the wireless segment, you know we’re talking about a very competitive area with eight or nine different competitors. It’s unfortunate that we’re in the situation that we’re in, but that said, that’s okay. We’re prepared to take it to court and to prove our case and ultimately, hopefully, prevail,” Rahim said. HPE and Juniper met with the DOJ several times to go over the purchase, but the companies had no indication the DOJ would go the direction it did—certainly with regard to its focus on the wireless market, Rahim said. The DOJ issued a Complaint “that ignores the reality that HPE and Juniper are two of at least ten competitors with comparable offerings and capabilities fighting to win customers every day,” the companies wrote. “A Complaint whose description of competitive dynamics in the wireless local area networking (WLAN) space is divorced from reality; and a Complaint that contradicts the conclusions reached by antitrust regulators around the world that have unconditionally cleared the transaction.”


The Benefits of the M&A Frenzy in Fraud Solutions

With businesses looking to reduce the number of vendors they work with to lower integration costs, David Mattei, strategic advisor at Datos Insights, expects "a higher momentum of M&A activities in 2025 as vendors race to grow." "Single-solution vendors have a harder time competing in today's world," and small to medium-sized single-solution vendors "are likely to be acquired," Mattei said. LexisNexis' acquisition of IDVerse in December 2024 is an example of this trend. ... Fraud executives agree that the most pragmatic approach today is proactive communication and awareness campaigns, and the data supports their effectiveness. However, the most anticipated and potentially effective solution is consortia-based fraud detection, combining risk signals from both sending and receiving financial institutions, Fooshee told Information Security Media Group. The challenge lies in overcoming resistance to information sharing - from fraud teams, compliance, legal and regulators - because of concerns over data integrity, integration complexities and privacy restrictions. Interestingly, markets most affected by scams and with simpler regulatory landscapes are finding ways to navigate these barriers more effectively.


Apple’s emotional lamp and the future of robots

It’s clear that Apple’s lamp is programmed to move in a way that deludes users into believing that it has internal states that it doesn’t actually have. ... Apple’s lamp research definitely sheds light on where our interaction with robots may be heading—a new category of appliance that might well be called the “emotional robot.” A key component of the research was a user study comparing how people perceived a robot using functional and expressive movements versus one that uses only functional movements. ... The biggest takeaway from Apple’s ELEGNT research is likely that neither a human-like voice nor a human-like body, head, or face is required for a robot to successfully trick a human into relating to it as a sentient being with internal thoughts, feelings, and emotions. ELEGNT is not a prototype product; it is instead a lab and social experiment. But that doesn’t mean a product based on this research will not soon be available on a desktop near you. ... Apple is developing a desktop robot project, codenamed J595, and is targeting a launch within two years. According to reports based on leaks, the robot might look a little like Apple’s iMac G4, which was a lamp-like form factor featuring a screen at the end of a moveable “arm.”


Daily Tech Digest - February 10, 2025


Quote for the day:

"If it wasn't hard, everyone would do it, the hard is what makes it great." -- Tom Hanks


Privacy Puzzle: Are Businesses Ready for the DPDP Act?

The State of Data Privacy in India 2024 report shows mixed responses. While 56% of businesses think the DPDP Act addresses key privacy issues, 30% are unsure and 14% remain skeptical. Even more troubling, more than 82% of companies lack transparency in handling data, raising serious trust concerns. ... smaller businesses, such as micro, small and medium enterprises, or MSMEs, and startups, often struggle due to limited resources. Many rely on IT or legal teams to oversee privacy initiatives, with some lacking any formal governance structures. This fragmented approach poses significant risks, especially as these organizations are equally subject to regulatory scrutiny under the DPDP Act. ... Third-party risk is another critical concern. Many enterprises depend on vendors for essential services, yet only 38% use a combination of risk assessments and contractual obligations to manage third-party privacy risks. Eight percent of organizations lack any significant measures, leaving them exposed to potential data leaks and regulatory penalties. ... Despite progress made in privacy staffing and strategy alignment, privacy professionals are experiencing increased stress within a complex compliance and risk landscape, according to new research from ISACA.


CISOs: Stop trying to do the lawyer’s job

“It’s good to be mindful in advance of the security and privacy requirements in the jurisdictions the organization is operating within, and to prepare possible responses should there be incidents that violate those laws and how to respond to those,” says Christine Bejerasco, CISO at WithSecure. Of course, the conversation between the two parties can go smoothly if there’s an existing relationship. If not, that relationship should be built. “Reaching out to legal experts should be as straightforward as reaching out to another colleague,” Bejerasco adds. “Just talk to them directly.” ... Some CISOs have a legal background or extensive experience working with general counsel. However, this does not mean they should act as legal advisors or take on responsibilities outside their role. “It is important to respect boundaries and not overstep job functions,” says Stacey Cameron, CISO at Halcyon. “There’s nothing wrong with differing opinions, interpretations, or healthy discussions, but for legal matters, it will be the lawyers’ responsibility to make a case on behalf of the company, so we need to respect each other’s roles and stay in our respective lanes.” According to Cameron, overstepping boundaries is one of the biggest mistakes CISOs can make when they are trying to build a relationship with their organization’s lawyers.


Inside Monday’s AI pivot: Building digital workforces through modular AI

The initial deployment of gen AI at Monday didn’t quite generate the return on investment users wanted, however. That realization led to a bit of a rethink and pivot as the company looked to give its users AI-powered tools that actually help to improve enterprise workflows. That pivot has now manifested itself with the company’s “AI blocks” technology and the preview of its agentic AI technology that it calls “digital workforce.” Monday’s AI journey, for the most part, is all about realizing the company’s founding vision. “We wanted to do two things, one is give people the power we had as developers,” Mann told VentureBeat in an exclusive interview. “So they can build whatever they want, and they feel the power that we feel, and the other end is to build something they really love.” ... Simply put, AI functionality needs to be in the right context for users — directly in a column, component or service automation. AI blocks are pre-built AI functions that Monday has made accessible and integrated directly into its workflow and automation tools. For example, in project management, the AI can provide risk mapping and predictability analysis, helping users better manage their projects. 


Courting Global Talent: How can Web3 Startups Attract the Best Developers in the World?

Any company without concrete values guiding its recruitment will often hire quickly and in the end obtain regrettable results. Web3 projects are no exception. Fortunately, there are a number of pre-established values in Web3 that can help offset this tendency: community, inclusivity, sustainability, and collaboration. These beliefs should be the guiding frameworks behind any Web3 startup's hiring policy, enabling them to assess candidates with a clear understanding of whether the applicant's character aligns with the company's DNA. High-performing people are needed in Web3 who can not only bring their own unique experiences to an organisation, but whose broader values very much align with the company's guiding principles. The focus of any hiring strategy should never be quantity over quality, as this will almost always result in disappointment and wasted time. Hiring people who are the right fit - measured by how well the candidate exemplifies the company's overarching values - should be non-negotiable. Likewise, transparency, another of Web3's core tenets, should be baked into every step of the hiring funnel, and it comes in two modes. Firstly, Web3 companies should be aware of their unique value proposition and amplify this in their external marketing efforts.


Is DOGE a cybersecurity threat? A security expert explains the dangers of violating protocols and regulations

Traditionally, the purpose of cybersecurity is to ensure the confidentiality and integrity of information and information systems while helping keep those systems available to those who need them. But in DOGE's first few weeks of existence, reports indicate that its staff appears to be ignoring those principles and potentially making the federal government more vulnerable to cyber incidents. ... Currently, the general public, federal agencies and Congress have little idea who is tinkering with the government's critical systems. DOGE's hiring process, including how it screens applicants for technical, operational or cybersecurity competency, as well as experience in government, is opaque. And journalists investigating the backgrounds of DOGE employees have been intimidated by the acting U.S. attorney in Washington. DOGE has hired young people fresh out of—or still in—college or with little or no experience in government, but who reportedly have strong technical prowess. But some have questionable backgrounds for such sensitive work. And one leading DOGE staffer working at the Treasury Department has since resigned over a series of racist social media posts. ... DOGE operatives are quickly developing and deploying major software changes to very complex old systems and databases, according to reports. 


Australian businesses urged to help shape new data security framework

With the consultation process entering its final stages, businesses are encouraged to take part in upcoming workshops or submit feedback online. Workshops will take place in Sydney on Tuesday 18 February, Brisbane on Wednesday 19 February, and Melbourne on Wednesday 26 February. For those unable to attend, an online survey is available for businesses to provide their insights. Key emphasised the significance of business participation in shaping the framework. "This is the last chance to get involved in the industry consultation," he said. "Workshops are taking place this month, but if people can't attend, we'd love them to complete the survey online." The workshops will be interactive, allowing participants to share their experiences with data security, discuss their existing frameworks, and provide recommendations. ... Without meaningful industry engagement, the framework risks being ineffective or underutilised. Key warned that failing to gather input from businesses could lead to a framework that does not meet their needs. "We essentially would be creating an industry framework that industry may or may not actually utilise," he said. "This is really designed for industry, and we need that kind of input from industry for it to work for them."


Can AI Early Warning Systems Reboot the Threat Intel Industry?

AI platforms learn how multiple campaigns connect, which malicious tools get repeated, and how often threat actors pivot to new malicious infrastructure and domains. That kind of cross-campaign insight is gold for defenders, especially when the data is available in real time. Of course, adversaries won’t line up to feed their best secrets to OpenAI, Microsoft or Google AI platforms. Some hacker groups prefer open-source models, hosting them on private servers where there’s zero chance of being monitored. As these open-source models gain sophistication, criminals can test or refine their attacks without Big Tech breathing down their necks, but the lure of advanced online models with powerful capabilities will be hard to avoid. Even as security experts remain bullish on the power of AI to save threat intel, there are adversarial concerns at play. Some warn that attackers can poison AI systems, manipulate data to produce false negatives, or exploit generative models for their own malicious scripts. But as it stands, the big AI platforms already see more malicious signals in a day than any single cybersecurity vendor sees in a year. That scale is exactly what’s been missing from threat intelligence. For all the talk about “community sharing” and open exchanges, it’s always been a tangled mess.


Security validation: The new standard for cyber resilience

Stolen credentials are a goldmine for attackers. According to Verizon’s 2024 Data Breach Investigations Report (DBIR), compromised credentials account for 31% of breaches over the past decade and 77% of web application attacks. The Colonial Pipeline attack in 2021 is a stark reminder of the damage that can result from leaked credentials—attackers gained access to the company’s VPN using credentials found on the dark web. Security validation makes it easy to test for credential-related risks. ... One of the most significant benefits of security validation is its ability to provide evidence-based guidance for remediation. Rather than adopting a “patch everything” approach, teams can focus on the most critical fixes based on real exploitability risk and system impact. ... Traditional security metrics, such as the number of vulnerabilities patched or the percentage of endpoints with antivirus software, only tell part of the story. Security validation offers a fresh perspective by measuring your posture based on emulated attacks. This shift from reactive to proactive security management is essential in today’s ever-changing threat landscape. By safely emulating real-world attacks in live environments, security validation ensures that your controls can detect, block, and respond to threats before damage occurs.


Cyber insurance is no silver bullet for cybersecurity

Cyber insurance is designed to minimise organisations’ financial losses from cyber incidents by covering costs like breach notification, data restoration, legal fees, and even ransomware payments. Insurers evaluate an organisation’s security posture by assessing the implementation of specific security controls. ... Despite its potential, research reveals that cyber insurance falls short in improving security practices. A report by the Royal United Services Institute (RUSI) think tank points out that cyber insurance policies often lack standardisation and fail to incentivise organisations to adopt security practices aligned with frameworks like ISO 27001 or NIST CSF. Another study emphasises that insurance requirements may be motivated by various other factors (e.g., controls that reduce very specific risks, length of policy period, liable risks) rather than improving overall organisational security in a meaningful way. Not only does this gap weaken the argument for cyber insurance improving security, it also poses a risk for businesses. Organisations meeting insurance requirements (which may be minimal in terms of security) may mistakenly believe they are well-protected, only to find themselves vulnerable to attacks that exploit overlooked weaknesses.


The Metamorphosis of Open Source: An Industry in Transition

The rise of artificial intelligence has introduced a new topic to the open source conversation. Unlike traditional software, AI systems include both code and models, data, and training methods, creating complexities that existing open source licenses were not designed to address. Recognizing this gap, the OSI launched the Open Source AI Definition (OSAID) in 2024, marking a pivotal moment in the evolution of open source principles. OSAID v1.0 defines the essential freedoms for AI systems: the rights to use, study, modify, and share AI technologies without restriction. This framework aims to ensure that AI systems labeled as “open source” align with the core values of transparency and collaboration underpinning the movement. However, the journey has not been without challenges. The OSI’s definition has sparked debates, particularly around the legal ambiguities of model weights and data licensing. For instance, while OSAID emphasizes transparency in data sources and methodologies, it does not resolve whether model weights derived from unlicensed data can be freely shared or used commercially. This has left businesses and developers navigating a gray area, where the practical adoption of open source AI models requires careful legal scrutiny.

Daily Tech Digest - February 01, 2025


Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry


5 reasons the enterprise data center will never die

Cloud repatriation — enterprises pulling applications back from the cloud to the data center — remains a popular option for a variety of reasons. According to a June 2024 IDC survey, about 80% of 2,250 IT decision-maker respondents “expected to see some level of repatriation of compute and storage resources in the next 12 months.” IDC adds that the six-month period between September 2023 and March 2024 saw increased levels of repatriation plans “across both compute and storage resources for AI lifecycle, business apps, infrastructure, and database workloads.” ... According to Forrester’s 2023 Infrastructure Cloud Survey, 79% of roughly 1,300 enterprise cloud decision-makers said their firms are implementing internal private clouds, which will use virtualization and private cloud management. Nearly a third (31%) of respondents said they are building internal private clouds using hybrid cloud management solutions such as software-defined storage and API-consistent hardware to make the private cloud more like the public cloud, Forrester adds. ... “Edge is a crucial technology infrastructure that extends and innovates on the capabilities found in core datacenters, whether enterprise- or service-provider-oriented,” says IDC. The rise of edge computing shatters the binary “cloud-or-not-cloud” way of thinking about data centers and ushers in an “everything everywhere all at once” distributed model.


How to Understand and Manage Cloud Costs with a Data-Driven Strategy

Understanding your cloud spend starts with getting serious about data. If your cloud usage grew organically across teams over time, you're probably staring at a bill that feels more like a puzzle than a clear financial picture. You know you're paying too much, and you have an idea of where the spending is happening across compute, storage, and networking, but you are not sure which teams are overspending, which applications are being overprovisioned, and so on. Multicloud environments add even another layer of complexity to data visibility. ... With a holistic view of your data established, the next step is augmenting tools to gain a deeper understanding of your spending and application performance. To achieve this, consider employing a surgical approach by implementing specialized cost management and performance monitoring tools that target specific areas of your IT infrastructure. For example, granular financial analytics can help you identify and eliminate unnecessary expenses with precision. Real-time visibility tools provide immediate insights into cost anomalies and performance issues, allowing for prompt corrective actions. Governance features ensure that spending aligns with budgetary constraints and compliance requirements, while integration capabilities with existing systems facilitate seamless data consolidation and analysis across different platforms. 


Top cybersecurity priorities for CFOs

CFOs need to be aware of the rising threats of cyber extortion, says Charles Soranno, a managing director at global consulting firm Protiviti. “Cyber extortion is a form of cybercrime where attackers compromise an organization’s systems, data or networks and demand a ransom to return to normal and prevent further damage,” he says. Beyond a ransomware attack, where data is encrypted and held hostage until the ransom is paid, cyber extortion can involve other evolving threats and tactics, Soranno says. “CFOs are increasingly concerned about how these cyber extortion schemes impact lost revenue, regulatory fines [and] potential payments to bad actors,” he says. ... “In collaboration with other organizational leaders, CFOs must assess the risks posed by these external partners to identify vulnerabilities and implement a proactive mitigation and response plan to safeguard from potential threats and issues.” While a deep knowledge of the entire supply chain’s cybersecurity posture might seem like a luxury for some organizations, the increasing interconnectedness of partner relationships is making third-party cybersecurity risk profiles more of a necessity, Krull says. “The reliance on third-party vendors and cloud services has grown exponentially, increasing the potential for supply chain attacks,” says Dan Lohrmann, field CISO at digital services provider Presidio. 


GDPR authorities accused of ‘inactivity’

The idea that the GDPR has brought about a shift towards a serious approach to data protection has largely proven to be wishful thinking, according to a statement from noyb. “European data protection authorities have all the necessary means to adequately sanction GDPR violations and issue fines that would prevent similar violations in the future,” Schrems says. “Instead, they frequently drag out the negotiations for years — only to decide against the complainant’s interests all too often.” ... “Somehow it’s only data protection authorities that can’t be motivated to actually enforce the law they’re entrusted with,” criticizes Schrems. “In every other area, breaches of the law regularly result in monetary fines and sanctions.” Data protection authorities often act in the interests of companies rather than the data subjects, the activist suspects. It is precisely fines that motivate companies to comply with the law, reports the association, citing its own survey. Two-thirds of respondents stated that decisions by the data protection authority that affect their own company and involve a fine lead to greater compliance. Six out of ten respondents also admitted that even fines imposed on other organizations have an impact on their own company. 


The three tech tools that will take the heat off HR teams in 2025

As for the employee review process, a content services platform enables HR employees to customise processes, routing approvals to the right managers, department heads, and people ops. This means that employee review processes can be expedited thanks to customisable forms, with easier goal setting, identification of upskilling opportunities, and career progression. When paperwork and contracts are uniform, customisable, and easily located, employers are equipped to support their talent to progress as quickly as possible – nurturing more fulfilled employees who want to stick around. ... Naturally, a lot of HR work is form-heavy, with anything from employee onboarding and promotions to progress reviews and remote working requests requiring HR input. However, with a content services platform, HR professionals can route and approve forms quickly, speeding up the process with digital forms that allow employees to enter information quickly and accurately. Going one step further, HR leaders can leverage automated workflows to route forms to approvers as soon as an employee completes them – cutting out the HR intermediary. ... Armed with a single source of truth, HR professionals can take advantage of automated workflows, enabling efficient notifications and streamlining HR compliance processes.


AI Could Turn Against You — Unless You Fix Your Data Trust Issues

Without unified standards for data formats, definitions, and validations, organizations struggle to establish centralized control. Legacy systems, often ill-equipped to handle modern data volumes, further exacerbate the problem. These systems were designed for periodic updates rather than the continuous, real-time streams demanded by AI, leading to inefficiencies and scalability limitations. To address these challenges, organizations must implement centralized governance, quality, and observability within a single framework. This enables them to leverage data lineage and track their data as it moves through systems to ensure transparency and identify issues in real-time. It also ensures they can regularly validate data integrity to support consistent, reliable AI models by conducting real-time quality checks. ... For organizations to maximize the potential of AI, they must embed data trust into their daily operations. This involves using automated systems like data observability to validate data integrity throughout its lifecycle, integrated governance to maintain reliability, and assuring continuous validation within evolving data ecosystems. By addressing data quality challenges and investing in unified platforms, organizations can transform data trust into a strategic advantage. 
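As a toy illustration of the kind of check such a framework automates, here is a hedged sketch of a batch data-quality validation. The field names and rules are invented for the example and stand in for the real-time integrity checks described above:

```python
def validate_records(records, rules):
    """Run per-field validation rules over a batch of records and
    return (index, field) pairs for every failure -- a deliberately
    simple stand-in for continuous data-quality checks."""
    failures = []
    for i, rec in enumerate(records):
        for field, check in rules.items():
            if field not in rec or not check(rec[field]):
                failures.append((i, field))
    return failures

# Hypothetical rules for a transactions feed
rules = {
    "user_id": lambda v: isinstance(v, int) and v > 0,
    "amount": lambda v: isinstance(v, (int, float)) and v >= 0,
}
records = [
    {"user_id": 1, "amount": 9.99},
    {"user_id": -5, "amount": 3.50},  # invalid user_id
    {"user_id": 2},                   # missing amount
]
print(validate_records(records, rules))  # [(1, 'user_id'), (2, 'amount')]
```

In a production pipeline these checks would run continuously on streaming data and feed an observability layer, rather than being invoked by hand on a list.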


Backdoor in Chinese-made healthcare monitoring device leaks patient data

“By reviewing the firmware code, the team determined that the functionality is very unlikely to be an alternative update mechanism, exhibiting highly unusual characteristics that do not support the implementation of a traditional update feature,” CISA said in its analysis report. “For example, the function provides neither an integrity checking mechanism nor version tracking of updates. When the function is executed, files on the device are forcibly overwritten, preventing the end customer — such as a hospital — from maintaining awareness of what software is running on the device.” In addition to this hidden remote code execution behavior, CISA also found that once the CMS8000 completes its startup routine, it also connects to that same IP address over port 515, which is normally associated with the Line Printer Daemon (LPD), and starts transmitting patient information without the device owner’s knowledge. “The research team created a simulated network, created a fake patient profile, and connected a blood pressure cuff, SpO2 monitor, and ECG monitor peripherals to the patient monitor,” the agency said. “Upon startup, the patient monitor successfully connected to the simulated IP address and immediately began streaming patient data to the address.”
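The detection angle here is straightforward: traffic to TCP port 515 should only ever go to known print servers, so anything else on that port is worth flagging. A hedged sketch of that check over already-captured flow data (the IP addresses below are documentation placeholders, not the actual indicator of compromise from the CISA advisory):

```python
# Sketch: flag outbound connections matching the pattern CISA describes --
# patient-monitor traffic to an unexpected host on TCP 515, the port
# normally reserved for the Line Printer Daemon (LPD).
SUSPECT_PORT = 515
KNOWN_PRINT_SERVERS = {"10.0.0.9"}  # hosts where LPD traffic is legitimate


def flag_lpd_exfil(connections):
    """connections: iterable of (src_ip, dst_ip, dst_port) tuples."""
    return [
        (src, dst)
        for src, dst, port in connections
        if port == SUSPECT_PORT and dst not in KNOWN_PRINT_SERVERS
    ]


flows = [
    ("192.168.1.20", "10.0.0.9", 515),     # ordinary print job, ignored
    ("192.168.1.44", "203.0.113.5", 515),  # monitor streaming to a hard-coded IP
]
```

In practice the flow tuples would come from a network tap, NetFlow export, or firewall logs rather than a hand-built list; the point is that the anomaly is cheap to detect once someone is looking.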


3 Considerations for Mutual TLS (mTLS) in Cloud Security

Traditional security approaches often rely on IP whitelisting as a primary method of access control. While this technique can provide a basic level of security, IP whitelists operate on a fundamentally flawed assumption: that IP addresses alone can accurately represent trusted entities. In reality, this approach fails to effectively model real-world attack scenarios. IP whitelisting provides no mechanism for verifying the integrity or authenticity of the connecting service. It merely grants access based on network location, ignoring crucial aspects of identity and behavior. In contrast, mTLS addresses these shortcomings by focusing on cryptographic identity rather than network location. ... In the realm of mTLS, identity is paramount. It's not just about encrypting data in transit; it's about ensuring that both parties in a communication are exactly who they claim to be. This concept of identity in mTLS warrants careful consideration. In a traditional network, identity might be tied to an IP address or a shared secret. But, in the modern world of cloud-native applications, these concepts fall short. mTLS shifts the mindset by basing identity on cryptographic certificates. Each service possesses its own unique certificate, which serves as its identity card.
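The "both parties verify each other" idea maps directly onto TLS context configuration. A sketch using Python's standard-library `ssl` module, where the certificate and CA file paths are placeholders you would replace with your own PKI material:

```python
# Sketch of mTLS setup with Python's stdlib ssl module. Both sides load
# their own certificate AND verify the peer's, so identity comes from
# cryptography, not from the client's IP address. File paths are placeholders.
import ssl


def make_server_context(cert, key, client_ca):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=cert, keyfile=key)  # server's own identity card
    ctx.load_verify_locations(cafile=client_ca)      # CA that signs client certs
    ctx.verify_mode = ssl.CERT_REQUIRED              # reject clients with no cert
    return ctx


def make_client_context(cert, key, server_ca):
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=server_ca)
    ctx.load_cert_chain(certfile=cert, keyfile=key)  # client presents an identity too
    return ctx
```

The single line that turns ordinary TLS into mutual TLS is `verify_mode = ssl.CERT_REQUIRED` on the server side; without it, the server encrypts traffic but never learns who the client is.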


Artificial Intelligence Versus the Data Engineer

It’s worth noting that there is a misconception that AI can prepare data for AI, when the reality is that, while AI can accelerate the process, data engineers are still needed to get that data in shape before it reaches the AI processes and models and we see the cool end results. At the same time, there are AI tools that can certainly accelerate and scale the data engineering work. So AI is both causing and solving the challenge in some respects! So, how does AI change the role of the data engineer? Firstly, the role of the data engineer has always been tricky to define. We sit atop a large pile of technology, most of which we didn’t choose or build, and an even larger pile of data we didn’t create, and we have to make sense of the world. Ostensibly, we are trying to get to something scientific. ... That art comes in the form of the intuition required to sift through the data, understand the technology, and rediscover all the little real-world nuances and history that over time have turned some lovely clean data into a messy representation of the real world. The real skill great data engineers have is therefore not the SQL ability but how they apply it to the data in front of them to sniff out the anomalies, the quality issues, the missing bits and those historical mishaps that must be navigated to get to some semblance of accuracy.


How engineering teams can thrive in 2025

Adopting a "fail forward" mentality is crucial as teams experiment with AI and other emerging technologies. Engineering teams are embracing controlled experimentation and rapid iteration, learning from failures and building knowledge. ... Top engineering teams will combine emerging technologies with new ways of working. They’re not just adopting AI—they’re rethinking how software is developed and maintained as a result of it. Teams will need to stay agile to lead the way. Collaboration within the business and access to a multidisciplinary talent base is the recipe for success. Engineering teams should proactively scenario plan to manage uncertainty by adopting agile frameworks like the "5Ws" (Who, What, When, Where, and Why). This approach allows organizations to tailor tech adoption strategies and marry regulatory compliance with innovation. Engineering teams should also actively address AI bias and ensure fair and responsible AI deployment. Many enterprises are hiring responsible AI specialists and ethicists as regulatory standards are now in force, including the EU AI Act, which impacts organizations with users in the European Union. As AI improves, the expertise and technical skills that proved valuable before need to be continually reevaluated. Organizations that successfully adopt AI and emerging tech will thrive.