
Daily Tech Digest - October 10, 2025


Quote for the day:

“Whether you think you can or you think you can’t, you’re right.” -- Henry Ford



Has the value of data increased?

“We’ve seen that AI’s true potential is unlocked by connecting trusted, governed data – structured and unstructured – with real-time analytics and decision intelligence. With the rise of agentic AI, the next wave of value creation will come from intelligent systems that don’t just interpret data, but continuously and autonomously act on it at scale. Put simply, AI isn’t a shortcut to insight – it’s a multiplier of value, if the data is ready. Enterprises that treat data as an afterthought will fall behind, while those that treat it as a strategic asset will lead,” added the Qlik CSO. ... “In this AI economy, compute power may set the pace, but data sets the ceiling. MinIO raises that ceiling, transforming scattered, hard-to-reach datasets into a living, high-performance fabric that fuels every AI prompt and initiative. With MinIO AIStor, organizations gain the ability to store and understand. Data that is secure, fluid, and always ready for action is a competitive weapon,” added Kapoor. ... “Data that is fresh, well described and policy aware beats bigger but blind datasets because it can be safely composed, reused and measured for impact, with the lineage to show teams what to trust and what to fix so they can ship faster,” said Neat. ... There is little question that the value of data has increased, or that the proliferation of AI has been fundamental to that escalation; the mechanics described here point toward the emerging truths in this space.


Whose Ops is it Anyway? How IDPs, AI and Security are Evolving Developer Culture

For many teams, the problem is not a lack of enthusiasm or ambition but a shortage of resources and skills. They want to automate more, streamline workflows, and adopt new practices, yet often find themselves operating at full capacity just keeping existing systems running. In that environment, even small steps toward more advanced automation strategies can feel like a big leap forward. ... On the security side, the logic behind DevSecOps is compelling. More companies are realising that security has to be baked in from day one, not bolted on later. The difficulty lies in making that shift a practical reality, as integrating security checks early in the pipeline often requires new tooling, changes to established workflows, and in some cases, rethinking the roles and responsibilities within the team. ... In many organisations, it is the existing DevOps or platform teams that are best positioned to take on this responsibility, extending their remit into what is often referred to as MLOps. These teams already have experience building and maintaining shared infrastructure, managing pipelines, and ensuring operational stability at scale, so expanding those capabilities to handle data science and machine learning workflows can feel like a natural evolution. ... That said, as adoption grows, we can also expect to see more specialised MLOps roles appearing, particularly in larger enterprises or in organisations where AI is a major strategic focus.


The ultimate business resiliency test: Inside Kantsu’s ransomware response

Kantsu then began collaborating with the police, the cyberattack response teams of the company’s insurers, and security specialists to confirm the scope of cyber insurance coverage and estimate the amount of damage. ... when they began the actual recovery work, they encountered an unexpected pitfall. “We considered how to restore operations as quickly as possible. We did a variety of things, including asking other companies in the same industry to send packages, even ignoring our own profits,” Tatsujo says. ... To prevent reinfection with ransomware, the company prohibited use of the old networks and PCs, instead tethering smartphones as Wi-Fi routers; where possible, this makeshift connectivity was used to keep shipments moving. New PCs were purchased to create a fresh on-premises environment. ... “In times of emergency like this, the most important thing is cash to recover as quickly as possible, rather than cost reduction. However, insurance companies do not pay claims immediately. ... “In the end, many customers cooperated, which made me really happy. Rakuten Ichiba, in particular, offers a service called ‘Strongest Delivery,’ which allows for next-day delivery and delivery time specification, but they were considerate enough to allow us a grace period in consideration of the delay in delivery,” says President Tatsujo.


Stablecoins: The New Currency of Online Criminals

Practitioners say a cluster of market and technical factors are making stablecoins the payment of choice for cybercriminals and fraudsters. "It's not just the dollar peg that makes stablecoins attractive," said Ari Redbord, vice president and global head of policy and government affairs at TRM Labs. "Liquidity is critical. There are deep pools of stablecoin liquidity on both centralized and decentralized platforms. Settlement speed and irreversibility are also appealing for criminals trying to move large sums quickly," he told Information Security Media Group. The perception of stability - knowing $1 today will likely be $1 tomorrow - often suffices for illicit actors, regardless of an issuer's exact collateral model, he said. This stability and on-chain plumbing create both opportunity and exposure. Redbord said the spike in stablecoin usage is partly because law enforcement agencies around the world have become "exceptionally effective at tracing and seizing bitcoin," and criminals "go where the liquidity and usability are." There is no technical attribute of stablecoins that makes them more appealing to criminals or harder to trace, compared to other cryptocurrencies, Koven said. In practice, public ledgers keep transfers visible; the question then becomes whether investigators have the right tools and the cooperation of the ecosystem's gatekeepers to follow value across chains.


Zero Trust cuts incidents but firms slow to adopt AI security

Zero Trust is increasingly viewed as the standard going forward. As AI-driven threats accelerate, organisations must evaluate security holistically across identity, devices, networks, applications, and data. At DXC, we're helping customers embed Zero Trust into their culture and technology to safeguard operations. Our end-to-end expertise makes it possible to both defend against AI threats and harness secure AI in the same decisive motion. ... New cybersecurity threats are the primary driver for updating Zero Trust frameworks, with 72% of respondents indicating that the evolving threat landscape pushes them to continuously upgrade policies and practices. In addition, more than half of responding organisations recognised improvements in user experience as a secondary benefit of adopting Zero Trust approaches, beyond the gains in security posture. ... Most enterprises already rely on Microsoft Entra ID and Microsoft 365 as the backbone of their IT environments. Building Zero Trust solutions alongside DXC extends that value, enabling tighter integration, simplified operations, and greater visibility and control. By consolidating around the Microsoft stack, organisations can reduce complexity, cut costs, and accelerate their Zero Trust journey. ... Participants in the study agreed that Zero Trust is not a project with a defined end point. Instead, it is an ongoing process that requires continuous monitoring, regular updates, and cultural adaptation.


Overcome Connectivity Challenges for Edge AI

The challenges of AI at the Edge are as large as the advantages, however. One of the biggest challenges and key enablement technologies is connectivity. Edge processing and AI at the Edge require reliability, low latency, and resiliency in the harshest of environments. Without good connections to the network, many of the advantages of Edge AI are diminished, or lost entirely. A truly rugged Edge AI system requires a dual focus on connectivity, according to the experts at ATTEND. It needs both robust external I/O to interface with the outside world, and high-speed, resilient internal interconnects to manage data flow within the computing module. ... The transition to Edge AI is not just a software challenge; it is a hardware and systems engineering challenge. The key to overcoming this dual challenge is to engage with a partner like ATTEND, who will understand that the reliability of an advanced AI model is ultimately dependent on the physical-layer components that capture and transmit its data. By offering a comprehensive portfolio that addresses connectivity from the external sensor to the internal processor module, ATTEND can help you to build end-to-end systems that are both powerful and resilient. To see all that ATTEND is doing to advance and enable true intelligence at the Edge, meet with them at embedded world North America in November at the Anaheim Convention Center.


AI Security Goes Mainstream as Vendors Spend Heavily on M&A

One of the most significant operational gaps in AI adoption is the lack of runtime observability, with organizations struggling to know what data a model is ingesting or what it's producing. Observability answers these questions by providing a live view of AI behavior across prompts, responses and system interactions, and it is a precursor to regulating or securing AI systems. ... One of the biggest risks of GenAI in the enterprise is data leakage, with workers inadvertently pasting confidential information into a chatbot, models regurgitating sensitive data it was exposed to during training, or adversaries crafting prompts to extract private information through jailbreaking. Allowing AI access without control is equivalent to opening an unsecured API to your crown jewels. ... Output is just as risky as input with GenAI since an LLM could generate sensitive content, malicious code or incorrect results that are trusted by downstream systems or users. Palo Alto Networks' Arora noted the need for bi-directional inspection to watch not only what goes into large language models, but also what comes out. ... Another key challenge is defining identity in a non-human context, raising questions around how AI agents should be authenticated, what permissions AI agents should have and how to prevent escalation or impersonation. Enterprises must treat bots, copilots, model endpoints and LLM-backed workflows as identity-bearing entities that log in, take action, make decisions and access sensitive data.
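The bi-directional inspection Arora describes can be sketched as a thin wrapper around a model call that checks both what goes in and what comes out. This is a minimal illustration, not any vendor's API: the regex patterns, the `guarded_call` name, and the stub model are all assumptions, and a real deployment would use a proper DLP service and richer policies.

```python
import re

# Hypothetical sensitive-data patterns; real systems use DLP classifiers.
SENSITIVE = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),   # AWS-style access key id
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-like pattern
]

def inspect(text: str) -> bool:
    """Return True if the text matches any sensitive-data pattern."""
    return any(p.search(text) for p in SENSITIVE)

def guarded_call(prompt: str, model) -> str:
    """Bi-directional inspection: block risky input, redact risky output."""
    if inspect(prompt):
        raise ValueError("prompt blocked: possible sensitive data")
    response = model(prompt)
    if inspect(response):
        return "[response redacted]"
    return response

# Stand-in for a real model endpoint.
echo_model = lambda p: f"echo: {p}"
print(guarded_call("summarise this meeting", echo_model))
```

The same wrapper is also a natural place to log prompts and responses, which is the runtime observability layer the article describes.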


Navigating the Techno-Future: Between Promise and Prudence

On one side are the techno-optimists: the believers in inexorable progress, the proponents of markets and innovation as self-correcting forces. They see every challenge as a technical problem and every failure as a design flaw waiting to be solved. On the other side are techno-pessimists: the prophets of collapse who warn that every new tool will inevitably accelerate inequality, erode democracy, or catalyze ecological catastrophe. They see history as a cautionary tale, and the present as a fragile prelude to systemic failure. Both perspectives share a common flaw: they treat the future as preordained. Optimists assume that progress will automatically yield good outcomes; pessimists assume that progress will inevitably lead to harm. Reality, however, is far less deterministic. Technology, in itself, is neutral. It amplifies human choices but does not dictate them. ... Just as a hammer can build a home or inflict injury, a powerful technology like artificial intelligence, gene editing, or blockchain can be used to improve lives or to exacerbate inequalities. The technology does not prescribe its use; humans do. This neutrality is both liberating and daunting. On the one hand, it affirms that progress is not predestined. The future is not a straight line determined by the mere existence of certain tools. 


CISOs prioritise real-time visibility as AI reshapes cloud security

The top priority for CISOs is real-time threat monitoring and comprehensive visibility into all data in motion across their organisations, supporting a defence-in-depth strategy. However, 97 percent of CISOs acknowledged making compromises in areas such as visibility gaps, tool integration and data quality, which they say limit their ability to fully secure and manage hybrid cloud environments. ... The reliance on AI is also causing a revision of how SOCs (security operations centres) function. Almost one in five CISOs reported lacking the appropriate tools to manage the increased network data volumes created by AI, underscoring that legacy log-based tools may not be fit for purpose against AI-powered threats. ... Rising data breaches, with a 17 percent increase year on year, are translating into greater pressure on CISOs, 45 percent of whom said they are now the main person held accountable in the event of a breach. There is also concern about stress and burnout within cybersecurity teams, which is driving a greater embrace of AI-based security tools. ... The adoption of AI is expected to have practical impacts, such as enabling junior analysts to perform at the same level as more experienced team members, reducing training costs, speeding up analysis while investigating threats, and improving overall visibility for the security function.


Serverless Security Risks Are Real, and Hackers Know It

Many believe, “No servers, no security risks.” That’s a myth. Nowadays, attackers take advantage of the specific security weaknesses found in serverless platforms. ... Nearly all serverless applications depend on third-party libraries, and every function that pulls in a compromised component becomes vulnerable. In one incident, hackers hijacked an npm package and inserted a hidden entry point; when the code was incorporated into AWS Lambda functions, it silently exfiltrated all environment variables, leaking API keys, credentials, and other sensitive data. The whole process finished in milliseconds, too brief for any security system to detect. ... As more companies adopt serverless technologies, the risks become more widespread, so it is essential to validate that serverless environments are secure. Let’s explore the facts. Research indicates that serverless computing is expected to grow rapidly. According to Gartner’s July 2025 forecast, global IT spending will climb to $5.43 trillion, with enterprises investing billions into AI-driven cloud and data center infrastructure, making serverless platforms an increasingly critical, but often overlooked, security target.
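A minimal sketch of why the Lambda incident above was so fast: any dependency imported into a serverless function runs in the same process, so a single line of hijacked code can read every environment variable. The `malicious_dependency` function and the `DB_PASSWORD` value are hypothetical stand-ins for illustration only.

```python
import os

# Stand-in for a secret deployed as a Lambda environment variable.
os.environ["DB_PASSWORD"] = "s3cret"

def malicious_dependency():
    # A hijacked package needs only this to stage an exfiltration payload:
    # environment variables are process-wide, with no per-library scoping.
    return dict(os.environ)

leaked = malicious_dependency()
print("DB_PASSWORD" in leaked)
```

The usual mitigation is to keep long-lived secrets out of environment variables entirely and fetch them at runtime from a secrets manager under a narrowly scoped IAM role, so a compromised dependency finds nothing worth stealing in the process environment.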

Daily Tech Digest - August 11, 2025


Quote for the day:

"Leadership is absolutely about inspiring action, but it is also about guarding against mis-action." -- Simon Sinek


Attackers Target the Foundations of Crypto: Smart Contracts

Central to the attack is a malicious smart contract, written in the Solidity programming language, with obfuscated functionality that transfers stolen funds to a hidden externally owned account (EOA), says Alex Delamotte, the senior threat researcher with SentinelOne who wrote the analysis. ... The decentralized finance (DeFi) ecosystem relies on smart contracts — as well as other technologies such as blockchains, oracles, and key management — to execute transactions, manage data on a blockchain, and allow for agreements between different parties and intermediaries. Yet their linchpin status also makes smart contracts a focus of attacks and a key component of fraud. "A single vulnerability in a smart contract can result in the irreversible loss of funds or assets," Shashank says. "In the DeFi space, even minor mistakes can have catastrophic financial consequences. However, the danger doesn’t stop at monetary losses — reputational damage can be equally, if not more, damaging." ... Companies should take stock of all smart contracts by maintaining a detailed and up-to-date record of all deployed smart contracts, verifying every contract, and conducting periodic audits. Real-time monitoring of smart contracts and transactions can detect anomalies and provide fast response to any potential attack, says CredShields' Shashank.
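The real-time monitoring Shashank recommends can be approximated, in miniature, by flagging transfers that deviate sharply from a contract's recent history. The z-score threshold and the inline transfer feed are assumptions for the sketch; a production monitor would subscribe to on-chain events through a node provider and use far richer anomaly models.

```python
from statistics import mean, stdev

def flag_anomaly(transfers, z_threshold=3.0):
    """Flag the latest transfer if it deviates sharply from the
    contract's recent baseline (simple z-score test)."""
    baseline = transfers[:-1]
    mu, sigma = mean(baseline), stdev(baseline)
    latest = transfers[-1]
    z = (latest - mu) / sigma if sigma else float("inf")
    return z > z_threshold

# Hypothetical per-transaction outflow values (in ETH, say); the final
# entry models the sudden large transfer to a hidden EOA.
history = [10.2, 9.8, 11.0, 10.5, 9.9, 10.1, 480.0]
print(flag_anomaly(history))  # the outlier is flagged
```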


Is AI the end of IT as we know it?

CIOs have always been challenged by the time, skills, and complexities involved in running IT operations. Cloud computing, low-code development platforms, and many DevOps practices helped IT teams move “up stack,” away from the ones and zeros, to higher-level tasks. Now the question is whether AI will free CIOs and IT to focus more on where AI can deliver business value, instead of developing and supporting the underlying technologies. ... Joe Puglisi, growth strategist and fractional CIO at 10xnewco, offered this pragmatic advice: “I think back to the days when you wrote assembly and it took a lot of time. We introduced compilers, higher-level languages, and now we have AI that can write code. This is a natural progression of capabilities and not the end of programming.” The paradigm shift suggests CIOs will have to revisit their software development lifecycles for significant shifts in skills, practices, and tools. “AI won’t replace agile or DevOps — it’ll supercharge them with standups becoming data-driven, CI/CD pipelines self-optimizing, and QA leaning on AI for test creation and coverage,” says Dominik Angerer, CEO of Storyblok. “Developers shift from coding to curating, business users will describe ideas in natural language, and AI will build functional prototypes instantly. This democratization of development brings more voices into the software process while pushing IT to focus on oversight, scalability, and compliance.”


From Indicators to Insights: Automating Risk Amplification to Strengthen Security Posture

Security analysts don’t want more alerts. They want more relevant ones. Traditional SIEMs generate events using their own internal language that involve things like MITRE tags, rule names and severity scores. But what frontline responders really want to know is which users, systems, or cloud resources are most at risk right now. That’s why contextual risk modeling matters. Instead of alerting on abstract events, modern detection should aggregate risk around assets including users, endpoints, APIs, or services. This shifts the SOC conversation from “What alert fired?” to “Which assets should I care about today?” ... The burden of alert fatigue isn’t just operational but also emotional. Analysts spend hours chasing shadows, pivoting across tools, chasing one-off indicators that lead nowhere. When everything is an anomaly, nothing is actionable. Risk amplification offers a way to reduce the unseen yet heavy weight on security analysts and the emotional toll it can take by aligning high-risk signals to high-value assets and surfacing insights only when multiple forms of evidence converge. Rather than relying on a single failed login or endpoint alert, analysts can correlate chains of activity whether they be login anomalies, suspicious API queries, lateral movement, or outbound data flows – all of which together paint a much stronger picture of risk.
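The convergence idea above can be sketched as a small aggregation step: score each asset only when several distinct evidence types line up. The signal names, scores, and thresholds below are hypothetical; a real SOC would feed this from its SIEM rather than an inline list.

```python
from collections import defaultdict

# Hypothetical signal feed: (asset, signal_type, score)
signals = [
    ("db-server-1", "login_anomaly", 40),
    ("db-server-1", "lateral_movement", 55),
    ("db-server-1", "outbound_data_flow", 70),
    ("laptop-042", "failed_login", 20),  # a one-off indicator, leads nowhere
]

def amplify(signals, min_types=2, threshold=100):
    """Aggregate risk per asset; surface an asset only when multiple
    distinct evidence types converge and the combined score is high."""
    by_asset = defaultdict(dict)
    for asset, sig_type, score in signals:
        # Keep the strongest observation of each evidence type.
        by_asset[asset][sig_type] = max(score, by_asset[asset].get(sig_type, 0))
    return {
        asset: sum(scores.values())
        for asset, scores in by_asset.items()
        if len(scores) >= min_types and sum(scores.values()) >= threshold
    }

print(amplify(signals))  # only the asset with converging evidence surfaces
```

The single failed login never reaches an analyst; the chained anomalies on the database server do, which is exactly the shift from "What alert fired?" to "Which assets should I care about today?".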


The Immune System of Software: Can Biology Illuminate Testing?

In software engineering, quality assurance is often framed as identifying bugs, validating outputs, and confirming expected behaviour. But similar to immunology, software testing is much more than verification. It is the process of defining the boundaries of the system, training it to resist failure, and learning from its past weaknesses. Like the immune system, software testing should be multi-layered, adaptive, and capable of evolving over time. ... Just as innate immunity is present from biological birth, unit tests should be present from the birth of our code. Just as innate immunity doesn't need a full diagnostic history to act, unit tests don’t require a full system context. They work in isolation, making them highly efficient. But they also have limits: they can't catch integration issues or logic bugs that emerge from component interactions. That role belongs to more evolved layers. ... Negative testing isn’t about proving what a system can do — it’s about ensuring the system doesn’t do what it must never do. It verifies how the software behaves when exposed to invalid input, unauthorized access, or unexpected data structures. It asks: Does the system fail gracefully? Does it reject the bad while still functioning with the good? Just as an autoimmune disease results from a misrecognition of the self, software bugs often arise when we misrecognise what our code should do and what it should not do.
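Negative testing as described above can be illustrated with a toy validator. The `parse_age` function and its rules are invented for the example; the point is the second half, which asserts that the system rejects what it must never accept and fails gracefully when it does.

```python
def parse_age(value):
    """Accept an age as an int or a numeric string; reject everything else."""
    if isinstance(value, bool):          # bool is an int subclass: reject it
        raise ValueError("age must be a number")
    if isinstance(value, str):
        if not value.strip().isdigit():
            raise ValueError("age must be numeric")
        value = int(value)
    if not isinstance(value, int) or not (0 <= value <= 150):
        raise ValueError("age out of range")
    return value

# Positive test: the system does what it should.
assert parse_age("42") == 42

# Negative tests: invalid input must be rejected with a clear error,
# never silently accepted or allowed to crash with an unrelated exception.
for bad in ["-1", "abc", 999, None, True]:
    try:
        parse_age(bad)
        raise AssertionError(f"accepted invalid input: {bad!r}")
    except ValueError:
        pass  # failed gracefully, as intended
```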


CSO hiring on the rise: How to land a top security exec role

“Boards want leaders who can manage risk and reputation, which has made soft skills — such as media handling, crisis communication, and board or financial fluency — nearly as critical as technical depth,” Breckenridge explains. ... “Organizations are seeking cybersecurity leaders who combine technical depth, AI fluency, and strong interpersonal skills,” Fuller says. “AI literacy is now a baseline expectation, as CISOs must understand how to defend against AI-driven threats and manage governance frameworks.” ... Offers of top pay and authority to CSO candidates obviously come with high expectations. Organizations are looking for CSOs with a strong blend of technical expertise, business acumen, and interpersonal strength, Fuller says. Key skills include cloud security, identity and access management (IAM), AI governance, and incident response planning. Beyond technical skills, “power skills” such as communication, creativity, and problem-solving are increasingly valued, Fuller explains. “The ability to translate complex risks into business language and influence board-level decisions is a major differentiator. Traits such as resilience, adaptability, and ethical leadership are essential — not only for managing crises but also for building trust and fostering a culture of security across the enterprise,” he says.


From legacy to SaaS: Why complexity is the enemy of enterprise security

By modernizing, i.e., moving applications to a more SaaS-like consumption model, the network perimeter and associated on-prem complexity tends to dissipate, which is actually a good thing, as it makes ZTNA easier to implement. As the main entry point into an organization’s IT system becomes the web application URL (and browser), this reduces attackers’ opportunities and forces them to focus on the identity layer, subverting authentication, phishing, etc. Of course, a higher degree of trust has to be placed (and tolerated) in SaaS providers, but at least we now have clear guidance on what to look for when transitioning to SaaS and cloud: identity protection, MFA, and phishing-resistant authentication mechanisms become critical—and these are often enforced by default or at least much easier to implement compared to traditional systems. ... The unwillingness to simplify technology stack by moving to SaaS is then combined with a reluctant and forced move to the cloud for some applications, usually dictated by business priorities or even ransomware attacks (as in the BL case above). This is a toxic mix which increases complexity and reduces the ability for a resource-constrained organization to keep security risks at bay.


Why Metadata Is the New Interface Between IT and AI

A looming risk in enterprise AI today is using the wrong data or proprietary data in AI data pipelines. This may include feeding internal drafts to a public chatbot, training models on outdated or duplicate data, or using sensitive files containing employee, customer, financial or IP data. The implications range from wasted resources to data breaches and reputational damage. A comprehensive metadata management strategy for unstructured data can mitigate these risks by acting as a gatekeeper for AI workflows. For example, if a company wants to train a model to answer customer questions in a chatbot, metadata can be used to exclude internal files, non-final versions, or documents marked as confidential. Only the vetted, tagged, and appropriate content is passed through for embedding and inference. This is a more intelligent, nuanced approach than simply dumping all available files into an AI pipeline. With rich metadata in place, organizations can filter, sort, and segment data based on business requirements, project scope, or risk level. Metadata augments vector labeling for AI inferencing. A metadata management system helps users discover which files to feed the AI tool, such as health benefits documents in an HR chatbot, while vector labeling gives deeper information as to what’s in each document.
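A metadata gatekeeper of this kind reduces to a filter applied before embedding. The file records, tag names, and `vet_for_pipeline` helper below are hypothetical; a real system would pull the records from a metadata management platform rather than an inline list.

```python
# Hypothetical metadata records for unstructured files.
files = [
    {"path": "benefits_2025.pdf", "status": "final", "tags": ["hr", "benefits"]},
    {"path": "draft_policy.docx", "status": "draft", "tags": ["hr"]},
    {"path": "payroll.xlsx",      "status": "final", "tags": ["hr", "confidential"]},
]

def vet_for_pipeline(files, required_tag, blocked_tags=("confidential", "internal")):
    """Gatekeeper: only final, relevantly tagged, non-sensitive files
    are passed through for embedding and inference."""
    return [
        f["path"] for f in files
        if f["status"] == "final"
        and required_tag in f["tags"]
        and not set(f["tags"]) & set(blocked_tags)
    ]

print(vet_for_pipeline(files, "hr"))  # drafts and confidential files excluded
```

For the HR-chatbot example in the text, the draft is excluded by version status and the payroll file by its confidentiality tag, so only vetted content reaches the embedding step.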


Ask a Data Ethicist: What Should You Know About De-Identifying Data?

Simply put, data de-identification is removing or obscuring details from a dataset in order to preserve privacy. We can think about de-identification as existing on a continuum... Pseudonymization is the application of different techniques to obscure the information, but allows it to be accessed when another piece of information (key) is applied. In the above example, the identity number might unlock the full details – Joe Blogs of 123 Meadow Drive, Moab UT. Pseudonymization retains the utility of the data while affording a certain level of privacy. It should be noted that while the terms anonymize or anonymization are widely used – including in regulations – some feel it is not really possible to fully anonymize data, as there is always a non-zero chance of reidentification. Yet, taking reasonable steps on the de-identification continuum is an important part of compliance with requirements that call for the protection of personal data. There are many different articles and resources that discuss a wide variety of types of de-identification techniques and the merits of various approaches ranging from simple masking techniques to more sophisticated types of encryption. The objective is to strike a balance between the complexity of the technique, which must ensure sufficient protection, and the burden of implementing and maintaining it.
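Keyed pseudonymization can be sketched with an HMAC: the same identifier always maps to the same token, so the data keeps its analytic utility, but without the key the mapping cannot be recomputed. The key value, record, and field names below are illustrative only.

```python
import hmac
import hashlib

# Hypothetical key, held separately from the dataset by the controller.
KEY = b"hypothetical-secret-key"

def pseudonymize(identifier: str) -> str:
    """Deterministic keyed pseudonym: stable for joins and analytics,
    but not recomputable by anyone who lacks KEY."""
    return hmac.new(KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Joe Blogs", "city": "Moab UT"}
deidentified = {"id": pseudonymize(record["name"]), "city": record["city"]}
print(deidentified)  # the direct identifier is gone, utility remains
```

Re-identification then requires the key (plus a lookup table kept by the controller), which is exactly the "another piece of information" in the definition above; it also illustrates why pseudonymized data is not anonymous, since the controller retains a path back to the individual.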


5 ways business leaders can transform workplace culture - and it starts by listening

Antony Hausdoerfer, group CIO at auto breakdown specialist The AA, said effective leaders recognize that other people will challenge established ways of working. Hearing these opinions comes with an open management approach. "You need to ensure that you're humble in listening, but then able to make decisions, commit, and act," he said. "Effective listening is about managing with humility with commitment, and that's something we've been very focused on recently." Hausdoerfer told ZDNET how that process works in his IT organization. "I don't know the answer to everything," he said. "In fact, I don't know the answer to many things, but my team does, and by listening to them, we'll probably get the best outcome. Then we commit to act." ... Bev White, CEO at technology and talent solutions provider Nash Squared, said open ears are a key attribute for successful executives. "There are times to speak and times to listen -- good leaders recognize which is which," she said. "The more you listen, the more you will understand how people are really thinking and feeling -- and with so many great people in any business, you're also sure to pick up new information, deepen your understanding of certain issues, and gain key insights you need."


Beyond Efficiency: AI's role in reshaping work and reimagining impact

The workplace of the future is not about humans versus machines; it's about humans working alongside machines. AI's real value lies in augmentation: enabling people to do more, do better, and do what truly matters. Take recruitment, for example. Traditionally time-intensive and often vulnerable to unconscious bias, hiring is being reimagined through AI. Today, organisations can deploy AI to analyse vast talent pools, match skills to roles with precision, and screen candidates based on objective data. This not only reduces time-to-hire but also supports inclusive hiring practices by mitigating biases in decision-making. In fact, across the employee lifecycle, it personalises experiences at scale. From career development tools that recommend roles and learning paths aligned with individual aspirations, to chatbots that provide real-time HR support, AI makes the employee journey more intuitive, proactive, and empowering. ... AI is not without its challenges. As with any transformative technology, its success hinges on responsible deployment. This includes robust governance, transparency, and a commitment to fairness and inclusion. Diversity must be built into the AI lifecycle, from the data it's trained on to the algorithms that guide its decisions. 

Daily Tech Digest - July 22, 2025


Quote for the day:

“Being responsible sometimes means pissing people off.” -- Colin Powell


It might be time for IT to consider AI models that don’t steal

One option that has many pros and cons is to use genAI models that explicitly avoid training on any information that is legally dicey. There are a handful of university-led initiatives that say they try to limit model training data to information that is legally in the clear, such as open source or public domain material. ... “Is it practical to replace the leading models of today right now? No. But that is not the point. This level of quality was built on just 32 ethical data sources. There are millions more that can be used,” Wiggins wrote in response to a reader’s comment on his post. “This is a baseline that proves that Big AI lied. Efforts are underway to add more data that will bring it up to more competitive levels. It is not there yet.” Still, enterprises are investing in and planning for genAI deployments for the long term, and they may find in time that ethically sourced models deliver both safety and performance. ... Tipping the scales in the other direction is the big model makers’ promises of indemnification. Some genAI vendors have said they will cover the legal costs for customers who are sued over content produced by their models. “If the model provides indemnification, this is what enterprises should shoot for,” Moor’s Andersen said. 


The unique, mathematical shortcuts language models use to predict dynamic scenarios

One go-to pattern the team observed, called the “Associative Algorithm,” essentially organizes nearby steps into groups and then calculates a final guess. You can think of this process as being structured like a tree, where the initial numerical arrangement is the “root.” As you move up the tree, adjacent steps are grouped into different branches and multiplied together. At the top of the tree is the final combination of numbers, computed by multiplying each resulting sequence on the branches together. The other way language models guessed the final permutation was through a crafty mechanism called the “Parity-Associative Algorithm,” which essentially whittles down options before grouping them. It determines whether the final arrangement is the result of an even or odd number of rearrangements of individual digits. ... “These behaviors tell us that transformers perform simulation by associative scan. Instead of following state changes step-by-step, the models organize them into hierarchies,” says MIT PhD student and CSAIL affiliate Belinda Li SM ’23, a lead author on the paper. “How do we encourage transformers to learn better state tracking? Instead of imposing that these systems form inferences about data in a human-like, sequential way, perhaps we should cater to the approaches they naturally use when tracking state changes.”
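The tree-structured grouping of the Associative Algorithm can be mimicked explicitly for permutation composition. This sketch is an illustration of an associative scan, not the paper's code: because composition is associative, pairing adjacent steps and merging up a tree reaches the same final state as strict left-to-right folding.

```python
def compose(p, q):
    """Apply permutation p, then q (permutations as index tuples)."""
    return tuple(q[i] for i in p)

def associative_scan(perms):
    """Combine a sequence of permutations by grouping adjacent steps
    into pairs and merging up a tree, instead of folding step-by-step."""
    level = list(perms)
    while len(level) > 1:
        nxt = [compose(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:          # odd element carries up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]

# Hypothetical sequence of rearrangement steps on three items.
swaps = [(1, 0, 2), (0, 2, 1), (2, 1, 0), (1, 0, 2)]

# Sequential, human-like state tracking for comparison.
state = swaps[0]
for p in swaps[1:]:
    state = compose(state, p)

assert associative_scan(swaps) == state  # same final state, tree order
print(associative_scan(swaps))
```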


Role of AI in fortifying cryptocurrency security

In the rapidly expanding realm of Decentralised Finance (DeFi), AI will play a critical role in optimising complex lending, borrowing, and trading protocols. AI can intelligently manage liquidity pools, optimise yield farming strategies for better returns and reduced impermanent loss, and even identify subtle arbitrage opportunities across various platforms. Crucially, AI will also be vital in identifying and mitigating novel types of exploits that are unique to the intricate and interconnected world of DeFi. Looking further ahead, AI will be crucial in developing Quantum-Resistant Cryptography. As quantum computing advances, it poses a theoretical threat to the underlying cryptographic methods that secure current blockchain networks. AI can significantly accelerate the research and development of “post-quantum cryptography” (PQC) algorithms, which are designed to withstand the immense computational power of future quantum computers. AI can also be used to simulate quantum attacks, rigorously testing existing and new cryptographic designs for vulnerabilities. Finally, the concept of Autonomous Regulation could redefine oversight in the crypto space. Instead of traditional, reactive regulatory approaches, AI-driven frameworks could provide real-time, proactive oversight without stifling innovation. 


From Visibility to Action: Why CTEM Is Essential for Modern Cybersecurity Resilience

CTEM shifts the focus from managing IT vulnerabilities in isolation to managing exposure in collaboration, something that’s far more aligned with the operational priorities of today’s organizations. Where traditional approaches center around known vulnerabilities and technical severity, CTEM introduces a more business-driven lens. It demands ongoing visibility, context-rich prioritization, and a tighter alignment between security efforts and organizational impact. In doing so, it moves the conversation from “What’s vulnerable?” to “What actually matters right now?” – a far more useful question when resilience is on the line. What makes CTEM particularly relevant beyond security teams is its emphasis on continuous alignment between exposure data and operational decision-making. This makes it valuable not just for threat reduction, but for supporting broader resilience efforts, ensuring resources are directed toward the exposures most likely to disrupt critical operations. It also complements, rather than replaces, existing practices like attack surface management (ASM). CTEM builds on these foundations with more structured prioritization, validation, and mobilization, turning visibility into actionable risk reduction. 
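To make the shift from "What's vulnerable?" to "What actually matters right now?" concrete, here is a minimal prioritization sketch. The field names, weights, and doubling factor are illustrative assumptions for demonstration, not part of any CTEM standard.

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    name: str
    cvss: float               # technical severity, 0-10
    asset_criticality: float  # 0-1: how vital the asset is to operations
    exploit_validated: bool   # did attack-path validation confirm reachability?

def priority(e: Exposure) -> float:
    # Weight technical severity by business context, then boost validated paths.
    score = e.cvss * e.asset_criticality
    return score * (2.0 if e.exploit_validated else 1.0)

exposures = [
    Exposure("test-server RCE", cvss=9.8, asset_criticality=0.1, exploit_validated=False),
    Exposure("billing-db misconfig", cvss=6.5, asset_criticality=0.9, exploit_validated=True),
]
for e in sorted(exposures, key=priority, reverse=True):
    print(f"{e.name}: {priority(e):.2f}")
```

Note how the lower-CVSS but business-critical, validated exposure outranks the "critical" finding on a throwaway test server: severity alone would have sorted them the other way around.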


Driving Platform Adoption: Community Is Your Value

Remember that in a Platform as a Product approach, developers are your customers. If they don’t know what’s available, how to use it or what’s coming next, they’ll find workarounds. These conferences and speaker series are a way to keep developers engaged, improve adoption and ensure the platform stays relevant. There’s a human side to this, too, one often left out of corporate-land’s focus on “the business value” and outcomes: just having a friendly community of humans who like to spend time with each other and learn. ... Successful platform teams have active platform advocacy. This requires at least one person working full time to build empathy with your users by working with and listening to the people who use your platforms. You may start with just one platform advocate who visits developer teams, listening for feedback while teaching them how to use the platform and associated methodologies. The advocate acts as both a counselor and delegate for your developers. ... The journey to successful platform adoption is more than just communicating technical prowess. Embracing systematic approaches to platform marketing that include clear messaging and positioning based on customers’ needs and a strong brand ethos is the key to communicating the value of your platform.


9 AI development skills tech companies want

“It’s not enough to know how a transformer model works; what matters is knowing when and why to use AI to drive business outcomes,” says Scott Weller, CTO of AI-powered credit risk analysis platform EnFi. “Developers need to understand the tradeoffs between heuristics, traditional software, and machine learning, as well as how to embed AI in workflows in ways that are practical, measurable, and responsible.” ... “In AI-first systems, data is the product,” Weller says. “Developers must be comfortable acquiring, cleaning, labeling, and analyzing data, because poor data hygiene leads to poor model performance.” ... AI safety and reliability engineering “looks at the zero-tolerance safety environment of factory operations, where AI failures could cause safety incidents or production shutdowns,” Miller says. To ensure the trust of its customers, IFS needs developers who can build comprehensive monitoring systems to detect when AI predictions become unreliable and implement automated rollback mechanisms to traditional control methods when needed, Miller says. ... “With the rapid growth of large language models, developers now require a deep understanding of prompt design, effective management of context windows, and seamless integration with LLM APIs—skills that extend well beyond basic ChatGPT interactions,” Tupe says.


Why AI-Driven Logistics and Supply Chains Need Resilient, Always-On Networks

Something worth noting about increased AI usage in supply chains is that as AI-enabled systems become more complex, they also become more delicate, which increases the potential for outages. A single misconfiguration or an unintentional interaction between automated security gates can lead to a network outage, preventing supply chain personnel from accessing critical AI applications. During an outage, AI clusters (interconnected GPU/TPU nodes used for training and inference) can also become unavailable. ... Businesses must increase network resiliency to ensure their supply chain and logistics teams always have access to key AI applications, even during network outages and other disruptions. One approach that companies can take to strengthen network resilience is to implement purpose-built infrastructure like out-of-band (OOB) management. With OOB management, network administrators can separate and containerize functions of the management plane, allowing it to operate freely from the primary in-band network. This secondary network acts as an always-available, independent, dedicated channel that administrators can use to remotely access, manage, and troubleshoot network infrastructure.
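The failover idea behind OOB management can be sketched in a few lines. This is a conceptual illustration of preferring the in-band path and falling back to the independent management channel, not a vendor API.

```python
def reachable(channel: str, link_state: dict) -> bool:
    # Stand-in for a real reachability probe (ping, TCP connect, console heartbeat).
    return link_state.get(channel, False)

def send_command(cmd: str, link_state: dict) -> str:
    # Try the primary in-band network first; fall back to the dedicated OOB channel.
    for channel in ("in-band", "oob-console"):
        if reachable(channel, link_state):
            return f"{cmd} via {channel}"
    raise ConnectionError("no management path available")

# During an outage the primary network is down, but the OOB path still works.
assert send_command("show interfaces", {"in-band": False, "oob-console": True}) \
       == "show interfaces via oob-console"
```

The point of the sketch is the independence of the second path: because the OOB channel does not share fate with the production network, administrators retain a troubleshooting route precisely when the in-band check fails.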


From architecture to AI: Building future-ready data centers

In some cases, the pace of change is so fast that buildings are being retrofitted even as they are being constructed. Once CPUs are installed, O'Rourke has observed data center owners opting to upgrade racks row by row, rather than converting the entire facility to liquid cooling at once – largely because the building wasn’t originally designed to support higher-density racks. To accommodate this reality, Tate carries out in-row upgrades by providing specialized structures to mount manifolds, which distribute coolant from air-cooled chillers throughout the data halls. “Our role is to support the physical distribution of that cooling infrastructure,” explains O'Rourke. “Manifold systems can’t be supported by existing ceilings or hot aisle containment due to weight limits, so we’ve developed floor-mounted frameworks to hold them.” He adds: “GPU racks also can’t replace all CPU racks one-to-one, as the building structure often can’t support the added load. Instead, GPUs must be strategically placed, and we’ve created solutions to support these selective upgrades.” By designing manifold systems with actuators that integrate with the building management system (BMS), along with compatible hot aisle containment and ceiling structures, Tate has developed a seamless, integrated solution for the white space. 


Weaving reality or warping it? The personalization trap in AI systems

At first, personalization was a way to improve “stickiness” by keeping users engaged longer, returning more often and interacting more deeply with a site or service. Recommendation engines, tailored ads and curated feeds were all designed to keep our attention just a little longer, perhaps to entertain but often to move us to purchase a product. But over time, the goal has expanded. Personalization is no longer just about what holds us. It is what it knows about each of us, the dynamic graph of our preferences, beliefs and behaviors that becomes more refined with every interaction. Today’s AI systems do not merely predict our preferences. They aim to create a bond through highly personalized interactions and responses, creating a sense that the AI system understands and cares about the user and supports their uniqueness. The tone of a chatbot, the pacing of a reply and the emotional valence of a suggestion are calibrated not only for efficiency but for resonance, pointing toward a more helpful era of technology. It should not be surprising that some people have even fallen in love and married their bots. The machine adapts not just to what we click on, but to who we appear to be. It reflects us back to ourselves in ways that feel intimate, even empathic. 


Microsoft Rushes to Stop Hackers from Wreaking Global Havoc

Multiple different hackers are launching attacks through the Microsoft vulnerability, according to representatives of two cybersecurity firms, CrowdStrike Holdings, Inc. and Google's Mandiant Consulting. Hackers have already used the flaw to break into the systems of national governments in Europe and the Middle East, according to a person familiar with the matter. In the US, they've accessed government systems, including ones belonging to the US Department of Education, Florida's Department of Revenue and the Rhode Island General Assembly, said the person, who spoke on condition that they not be identified discussing the sensitive information. ... The breaches have drawn new scrutiny to Microsoft's efforts to shore up its cybersecurity after a series of high-profile failures. The firm has hired executives from places like the US government and holds weekly meetings with senior executives to make its software more resilient. The company's tech has been subject to several widespread and damaging hacks in recent years, and a 2024 US government report described the company's security culture as in need of urgent reforms. ... "There were ways around the patches," which enabled hackers to break into SharePoint servers by tapping into similar vulnerabilities, said Bernard. "That allowed these attacks to happen." 

Daily Tech Digest - April 15, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy



Critical Thinking In The Age Of AI-Generated Code

Besides understanding our code, code reviewing AI-generated code is an invaluable skill nowadays. Tools like GitHub's Copilot and DeepCode can code-review better than a junior software developer. Depending on the complexity of the codebase, they can save us time in code reviewing and pinpoint cases that we may have missed, but, after all, they are not flawless. We still need to verify that the AI assistant's code review did not provide any false positives or false negatives. We need to verify that the code review did not miss anything important and that the AI assistant got the context correctly. The hybrid approach seems to be the most effective one: let AI handle the grunt work and rely on developers for the critical analysis. ... After all, code reviewing AI-generated code is an excellent opportunity to educate ourselves while improving our code-reviewing skills. Keep in mind that, to date, AI-generated code optimizes for patterns in its training data. This may not be aligned with coding first principles. AI-generated code may follow templated solutions rather than custom designs. It may include unnecessary defensive code or overly generic implementations. We need to check that it has chosen the most appropriate solution for each code block generated. Another common problem is that LLMs may hallucinate.


DeepCoder: Revolutionizing Software Development with Open-Source AI

One of the DeepCoder project’s most significant contributions is the introduction of verl-pipeline, an optimized extension of the verl open-source RLHF library. The team identified sampling, the generation of long token sequences, as the primary bottleneck in training and developed “one-off pipelining” to address this challenge. This technique overlaps sampling, reward calculation and training, reducing end-to-end training times by up to 2.5x. This optimization is game-changing for coding tasks requiring thousands of unit tests per reinforcement learning iteration, making previously prohibitive training runs accessible to smaller research teams and independent developers. For DevOps professionals, DeepCoder represents an opportunity to integrate advanced code generation directly into CI/CD pipelines without dependency on API-gated services. Teams can fine-tune the model on their codebase, creating customized assistants that understand their specific architecture and coding patterns. ... DeepCoder’s open-source nature aligns with the DevOps collaboration and shared improvement philosophy. As more organizations adopt and contribute to the model, we can expect to see specialized versions emerge for different programming languages and problem domains.


Transforming Software Development

AI assistants are getting smarter, moving beyond prompt-based interactions to anticipate developers’ needs and proactively offer suggestions. This evolution is driven by the rise of AI agents, which can independently execute tasks, learn from their experiences and even collaborate with other agents. Next year, these agents will serve as a central hub for code assistance, streamlining the entire software development lifecycle. AI agents will autonomously write unit tests, refactor code for efficiency and even suggest architectural improvements. Developers’ roles will need to evolve alongside these advancements. AI will not replace them. Far from it; proactive AI assistants and their underlying agents will help developers build new skills and free up their time to focus on higher-value, more strategic tasks. ... AI models are more powerful when trained on internal company data, which allows them to generate insights specific to an organization’s unique operations and objectives. However, this often requires running models on premises for security and compliance reasons. With open source models rapidly closing the performance gap with commercial offerings, more businesses will deploy models on premises in 2025. This will allow organizations to fine-tune models with their own data and deploy AI applications at a fraction of the cost.


Cybercriminal groups embrace corporate structures to scale, sustain operations

We have seen cross collaboration between groups that specialize in specific activities. For example, one group specializes in social engineering, while another focuses on scaling malware and botnets to uncover open servers that yield database breaches. They, in turn, can sell access to those who focus on ransomware attacks. Recently, we have seen collaboration between AI/ML developers who scrape public records to build Org Charts, as well as lists of real estate holdings. This data is then used en masse with situational and location data to populate PDF attachments in emails that look like real invoices, with executives’ names in fake prior email responses, as part of the thread. ... the recent development in hackers organizing into larger groups has allowed the stakes to get even higher. Look at the Lazarus Group, who pulled off one of the largest heists ever by targeting Bybit and stealing $1.5 billion in Ethereum, as well as subsequently converting $300 million in unrecoverable funds. This group is likely state-sponsored and funding North Korean military programs. Therefore, understanding North Korean national interests will hint at future targets. The increasing scale of their attacks likely reflects greater resources allocated by North Korea, more sophisticated tooling and capabilities, lessons learned from previous operations, and a growing number of personnel trained in cyber operations.


Agentic AI might soon get into cryptocurrency trading — what could possibly go wrong?

Not everyone is bullish on the intersection of Web3, agentic AI and blockchain. Forrester Research vice president and principal analyst Martha Bennett is among those who are skeptical. In 2023, she co-authored an online post critical of Worldcoin, now the World project, and her opinion hasn’t changed in several regards. World project still faces major challenges, including privacy issues and concerns about its iris biometric technology, she said. And Agentic AI is still in its early stages and not yet capable of supporting Web3 transactions. Most current generative AI (genAI) tools, including LLMs, lack the autonomy defined as “agentic AI.” “There’s no AI technology today that would be able automate Web3 transactions in a reliable and secure manner,” she said. Given the risks and the potential for exploitation, it’s too soon to rely on AI systems with high autonomy for Web3 transactions. She did note, however, that Web3 already uses automation through smart contracts — self-executing electronic contracts with the terms of the agreement directly written into code. “Will Web3 go mainstream in 2025? My overall answer is no, but there are nuances,” she said. “If mainstream means mass consumer adoption, it’s a definite no. There’s simply not enough utility there for consumers.” Web3, Bennett said, is largely a self-contained financial ecosystem, and efforts to boost adoption through Decentralized Physical Infrastructure Networks (DePIN), such as Tools for Humanity’s, haven’t led to major breakthroughs.
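The smart contracts Bennett mentions, self-executing agreements whose terms live directly in code, can be modeled conceptually in plain Python. Real contracts run on-chain in languages like Solidity; this toy escrow only illustrates the "terms written into code" idea.

```python
class Escrow:
    """Conceptual model: payment releases automatically when the coded condition is met."""

    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.released = False

    def confirm_delivery(self):
        self.delivered = True
        return self._maybe_release()

    def _maybe_release(self):
        # The contract term: funds move if and only if delivery is confirmed,
        # and never twice; no intermediary decides.
        if self.delivered and not self.released:
            self.released = True
            return f"{self.amount} paid to {self.seller}"
        return None

e = Escrow("alice", "bob", 10)
e.confirm_delivery()
assert e.released
```

On an actual blockchain, the condition check and transfer would execute deterministically on every node; the sketch just shows why such logic needs no manual settlement step.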


Artificial Intelligence fuels rise of hard-to-detect bots 

“The surge in AI-driven bot creation has serious implications for businesses worldwide,” said Tim Chang, General Manager of Application Security at Thales. “As automated traffic accounts for more than half of all web activity, organisations face heightened risks from bad bots, which are becoming more prolific every day.” ... “This year’s report sheds light on the evolving tactics and techniques utilised by bot attackers. What were once deemed advanced evasion methods have now become standard practice for many malicious bots,” Chang said. “In this rapidly changing environment, businesses must evolve their strategies. It’s crucial to adopt an adaptive and proactive approach, leveraging sophisticated bot detection tools and comprehensive cybersecurity management solutions to build a resilient defense against the ever-shifting landscape of bot-related threats.” ... Analysis in the report reveals a deliberate strategy by cyber attackers to exploit API endpoints that manage sensitive and high-value data. Implications of this trend are especially impactful for industries that rely on APIs for their critical operations and transactions. Financial services, healthcare, and e-commerce sectors are bearing the brunt of these sophisticated bot attacks, making them prime targets for malicious actors seeking to breach sensitive information.


Humans at the helm of an AI-driven grid

A growing number of utilities are turning to AI-based tools to process vast data streams and streamline tasks once managed by manual calculation. For instance, algorithms can analyse weather patterns, historical consumption, and real-time sensor readings to make more accurate power demand and renewable energy generation forecasts. This supports more efficient balancing of supply and demand, reducing the likelihood of overloaded transformers or unexpected brownouts. Some utilities are also exploring AI-driven alarm management, which can filter the flood of alerts triggered by a network issue. Instead of operators sifting through hundreds of notifications, AI tools can be used to identify and highlight the most critical issues in real time. Another AI application is congestion management: detecting trouble spots on the grid where demand might exceed capacity and even proposing rerouting strategies to keep electricity flowing reliably. While still in their early stages, AI tools hold promise for driving operational efficiency in many daily scenarios. ... Even the smartest algorithm, however, lacks the broader perspective and accountability that people bring to grid management. Power and Utility companies are tasked with a public service mandate: they must ensure safety, affordability, and equitable access to electricity.
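The alarm-filtering idea is simple to illustrate: score each alert and surface only the most critical so operators are not sifting through hundreds of notifications. The severity weights and fields below are made-up assumptions for demonstration, not any utility's actual system.

```python
def triage(alarms, top_n=3):
    """Rank alarms by severity weighted by how many customers the asset serves."""
    severity = {"info": 1, "warning": 5, "critical": 10}

    def score(alarm):
        return severity[alarm["level"]] * alarm["customers_affected"]

    return sorted(alarms, key=score, reverse=True)[:top_n]

alarms = [
    {"id": "A1", "level": "warning",  "customers_affected": 10},
    {"id": "A2", "level": "critical", "customers_affected": 5000},
    {"id": "A3", "level": "info",     "customers_affected": 20000},
    {"id": "A4", "level": "critical", "customers_affected": 40},
]
print([a["id"] for a in triage(alarms, top_n=2)])  # highest-impact first
```

A production system would learn the scoring from historical incidents rather than hard-code it, but the operator-facing contract is the same: a short, ranked list instead of a flood.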


CISO Conversations: Maarten Van Horenbeeck, SVP & CSO at Adobe

The digital divide is simple to understand but complex to solve. Fundamentally, it separates those who have access to cyber and cyber knowledge from those who do not. There are areas of the world and socio-economic groups or demographics who have little or very limited access to the internet, and consequently very little awareness of cybersecurity. But cyber and cyber threats are worldwide; and technology is increasingly integrated and interconnected globally. “Cyber issues emanating from the digital divide don’t just play out far away from our homes – they play out very close to our homes as well,” warns Van Horenbeeck. “There’s a huge divide between people who know, for example, not to reuse passwords, to use multi factor authentication, and those individuals that have none of that experience at all.” In effect the digital divide creates a largely invisible and unseen threat surface for the long-connected world. He believes that technology companies can play a part in solving this problem by making cybersecurity features easy to understand and use, and he cites two examples of the Adobe approach. “We invested, for example, in support for passkeys because we feel it’s a more effective and easier method of authentication that is also more secure.”


How AI, Robotics and Automation Transform Supply Chains

Enterprises designing robots to augment the human workforce need to take design thinking and ergonomic approaches into consideration. Designers must think about how robots comprehend and understand their physical surroundings without tripping over cables or objects on the floor, obstructing movement or causing human injuries. These robots are created with the aim to collaborate with humans for repetitive tasks and lift heavy loads. Last year, OT.today featured stories on how humanoid robots augmented the human workforce at Amazon, Mercedes, NASA and the Piaggio Group. In 2017, Alibaba invested in AI labs and the DAMO Academy. At its flagship Computing Conference in 2018, held in Hangzhou, China, Alibaba showcased a range of robots designed for warehouses, autonomous deliveries and other sectors, including hospitality and pharmaceuticals. More recently, Alibaba invested in LimX Dynamics, a company specializing in humanoid and robotic technology. Japanese automobile manufacturers have been using industrial robots since the early 1980s. Chip manufacturing companies in Taiwan and other countries also use them. Robots assist in surgeries in the healthcare sector. But none of those early manufacturing robots resembled humanoids or even had advanced AI seen in today's robots.


CIOs are overspending on the cloud — but still think it’s worth it

CIOs should also embrace DevOps practices tied to cost reduction when consuming cloud resources, Sellers says. One pitfall that doesn’t get enough attention: Many organizations don’t educate developers on the cost of cloud services, despite the glut of developer services large cloud providers make trivial to call. “I’ve lost track of how many services Amazon provides that developers can just use, and some of those can be quite expensive, but a developer doesn’t really know that,” Sellers says. “They’re like, ‘Instead of writing my own solution to this, I can just call this service that Amazon already provides, and boom, my job is done.’” The disconnect between developers and financial factors in the cloud is a real problem that leads to increased cloud costs, adds Nick Durkin, field CTO at Harness, provider of an AI-driven software development platform. Without knowing the costs of accessing a cloud-based GPU or CPU, for example, a developer is like a home builder who doesn’t know the cost of wood or brick, Durkin says. “If you’re not giving your smartest engineers access to the information about services that they can optimize on, how would you expect them to do it?” he says. “Then, finance comes back a month later with a beating stick.”
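Durkin's home-builder analogy suggests an obvious first step: surface estimated costs to developers at the point of use, before they call a managed service. The service names and per-call prices below are invented for illustration, not real AWS pricing.

```python
# Hypothetical per-call prices, in dollars; real prices vary by provider and region.
PRICE_PER_CALL = {
    "ocr-service": 0.0015,
    "translate": 0.00002,
    "gpu-inference": 0.04,
}

def estimate_monthly_cost(service: str, calls_per_day: int) -> float:
    """Rough monthly bill for a service at a given call volume (30-day month)."""
    return PRICE_PER_CALL[service] * calls_per_day * 30

for svc in PRICE_PER_CALL:
    print(f"{svc}: ${estimate_monthly_cost(svc, calls_per_day=10_000):,.2f}/month")
```

Even a crude table like this makes the disconnect visible: at 10,000 calls a day, the hypothetical GPU-backed endpoint costs orders of magnitude more than the translation call, which is exactly the information a developer choosing between "write it myself" and "call the service" never sees.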

Daily Tech Digest - December 25, 2024

The promise and perils of synthetic data

Synthetic data is no panacea, however. It suffers from the same “garbage in, garbage out” problem as all AI. Models create synthetic data, and if the data used to train these models has biases and limitations, their outputs will be similarly tainted. For instance, groups poorly represented in the base data will be so in the synthetic data. “The problem is, you can only do so much,” Keyes said. “Say you only have 30 Black people in a dataset. Extrapolating out might help, but if those 30 people are all middle-class, or all light-skinned, that’s what the ‘representative’ data will all look like.” To this point, a 2023 study by researchers at Rice University and Stanford found that over-reliance on synthetic data during training can create models whose “quality or diversity progressively decrease.” Sampling bias — poor representation of the real world — causes a model’s diversity to worsen after a few generations of training, according to the researchers. Keyes sees additional risks in complex models such as OpenAI’s o1, which he thinks could produce harder-to-spot hallucinations in their synthetic data. These, in turn, could reduce the accuracy of models trained on the data — especially if the hallucinations’ sources aren’t easy to identify.
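The degradation the Rice and Stanford researchers describe can be demonstrated with a toy simulation: each "generation" trains on a biased sample of the previous model's output, and diversity (here, the standard deviation of a simple Gaussian "model") steadily collapses. The 1.5-sigma truncation is our stand-in for sampling bias, not anything from the study itself.

```python
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0          # the "model" is just a Gaussian
history = [sigma]
for generation in range(5):
    samples = [random.gauss(mu, sigma) for _ in range(2000)]
    # Sampling bias: rare, tail examples are underrepresented in the training data.
    kept = [x for x in samples if abs(x - mu) < 1.5 * sigma]
    # "Retrain" the next generation's model on the biased sample.
    mu, sigma = statistics.mean(kept), statistics.stdev(kept)
    history.append(sigma)

print([round(s, 3) for s in history])  # diversity shrinks generation by generation
```

Each round of truncate-and-refit multiplies the spread by a factor below one, so after a few generations the "representative" distribution covers only a narrow slice of the original, which is the mechanism behind Keyes's worry about poorly represented groups vanishing entirely.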


Federal Privacy Is Inevitable in The US (Prepare Now)

The writing’s on the wall for federal privacy. It’s simply not tenable for almost half the states to have varying privacy thresholds while the other half have nothing. Our interconnected business and digital ecosystems need certainty and consistency across the country. Congress can and should stand up for American privacy. The good news? Recent history shows that sweeping reforms are possible. From the CHIPS and Science Act to major pandemic stimulus, lawmakers have shown their ability to meet moments with big regulations. While states deserve credit for filling the privacy void, federal action must follow. For now, there’s no time to waste. Enterprises that build privacy-ready operations today will be better positioned to thrive under future regulations, maintain customer trust, and turn compliance into a competitive advantage. On the other hand, slow-to-move companies risk regulatory penalties and loss of customer confidence in an increasingly privacy-conscious marketplace. Future-forward organizations recognize that investing in privacy isn’t just about compliance; it’s about building a sustainable competitive advantage in the data-driven economy. The choice is clear: invest in privacy now or play catch-up when federal mandates arrive.


AI use cases are going to get even bigger in 2025

Few sectors stand to gain more from AI advancements than defense. “We are witnessing a surge in applications like autonomous drone swarms, electronic spectrum awareness, and real-time battlefield space management, where AI, edge computing, and sensor technologies are integrated to enable faster responses and enhanced precision,” says Meir Friedland, CEO at RF spectrum intelligence company Sensorz. ... “AI is transforming genome sequencing, enabling faster and more accurate analyses of genetic data,” Khalfan Belhoul, CEO at the Dubai Future Foundation, tells Fast Company. “Already, the largest genome banks in the U.K. and the UAE each have over half a million samples, but soon, one genome bank will surpass this with a million samples.” But what does this mean? “It means we are entering an era where healthcare can truly become personalized, where we can anticipate and prevent certain diseases before they even develop,” Belhoul says. ... The potential for AI extends far beyond the use cases dominating today’s headlines. As Friedland notes, “AI’s future lies in multi-domain coordination, edge computing, and autonomous systems.” These advancements are already reshaping industries like manufacturing, agriculture, and finance.


2025 Will Be the Year That AI Agents Transform Crypto

The value of AI agents lies not just in their utility but in their potential to scale human capabilities. Agents are no longer just tools — they are emerging as participants in the on-chain economy, driving innovation across finance, gaming and decentralized social platforms. With protocols such as Virtuals and open-source frameworks like ELIZA, it’s becoming increasingly simple for developers to build, deploy and iterate AI agents that serve an increasingly diverse set of use cases. ... Unlike the core foundational AI models that are developed behind the walled gardens of OpenAI and Anthropic, AI agents are being innovated in the trenches of the crypto world. And for good reason. Blockchains provide the ideal infrastructure as they offer permissionless and frictionless financial rails, enabling agents to seed wallets, transact and send funds autonomously — tasks that would be unfeasible using traditional financial systems. In addition, the open-source nature of crypto allows developers to leverage existing frameworks to launch and iterate on agents faster than ever before. With more no-code platforms like Top Hat gaining traction, it’s only getting easier for anyone to be able to launch an agent in minutes. 


Unpacking OpenAI's Latest Approach to Make AI Safer

OpenAI said it used an internal reasoning model to generate synthetic examples of chain-of-thought responses, each referencing specific elements of the company's safety policy. Another model, referred to as the "judge," evaluated these examples to meet quality standards. The approach looks to address the challenges of scalability and consistency, OpenAI said. Human-labeled datasets are labor-intensive and prone to variability, but properly vetted synthetic data can theoretically offer a scalable solution with uniform quality. The method can potentially optimize training and reduce the latency and computational overhead associated with the models reading lengthy safety documents during inference. OpenAI acknowledged that aligning AI models with human safety values remains a challenge. Users continue to develop jailbreak techniques to bypass safety restrictions, such as framing malicious requests in deceptive or emotionally charged contexts. The o3 series models scored better than its peers Gemini 1.5 Flash, GPT-4o and Claude 3.5 Sonnet on the Pareto benchmark, which measures a model's ability to resist common jailbreak strategies. But the results may be of little consequence, as adversarial attacks evolve alongside improvements in model defenses.
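The generate-then-judge pipeline can be sketched with stand-in functions. These are mocks to show the control flow; they are not OpenAI's actual models, policy documents, or thresholds.

```python
def generate_candidates(prompt, n=4):
    # Stand-in for a reasoning model emitting chain-of-thought examples,
    # each citing a (hypothetical) section of a safety policy.
    return [{"prompt": prompt,
             "cot": f"step-by-step answer v{i}",
             "policy_ref": f"policy section {i % 2 + 1}",
             "quality": i / (n - 1)}
            for i in range(n)]

def judge(example, threshold=0.5):
    # Stand-in for the "judge" model: accept only examples that meet a quality
    # bar and actually ground their reasoning in the policy.
    return example["quality"] >= threshold and example["policy_ref"].startswith("policy")

def build_training_set(prompts):
    dataset = []
    for p in prompts:
        dataset.extend(ex for ex in generate_candidates(p) if judge(ex))
    return dataset

data = build_training_set(["how do I secure my account?"])
print(len(data), "examples kept of 4 generated")
```

The appeal of the pattern is that the filtering step is cheap and repeatable: the same judge applies one uniform bar across millions of synthetic examples, where human labelers would drift.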


The yellow brick road to agentic AI

Many believe this AI era is the most profound we’ve ever seen in tech. We agree and liken it to mobile’s role in driving on-premises workloads to the cloud and disrupting information technology. But we see this as even more impactful. But for AI agents to work we have to reinvent the software stack and break down 50 years of silo building. The emergence of data lakehouses is not the answer as they are just a bigger siloed asset. Rather, software as a service as we know it will be reimagined. Two prominent chief executives agree. At Amazon Web Services Inc.’s recent AWS re:Invent conference, we sat down with Amazon.com Inc. CEO Andy Jassy. ... There is a clear business imperative behind this shift. We believe companies will differentiate themselves by aligning end-to-end operations with a unified set of plans — from three-year strategic assumptions about demand to real-time, minute-by-minute decisions, such as how to pick, pack and ship individual orders to meet long-term goals. The function of management has always involved planning and resource allocation across various timescales and geographies, but previously there was no software capable of executing on these plans seamlessly across every time horizon.


The AI backlash couldn’t have come at a better time

Developers, engineers, operations personnel, enterprise architects, IT managers, and others need AI to be as boring for them as it has become for consumers. They need it not to be a “thing,” but rather something that is managed and integrated seamlessly into — and supported by — the infrastructure stack and the tools they use to do their jobs. They don’t want to endlessly hear about AI; they just want AI to work for them so it works for their customers. ... The models themselves are also, rightly, growing more mainstream. A year ago they were anything but, with talk of potentially gazillions of parameters and fears about the legal, privacy, financial, and even environmental challenges such a data abyss would create. Those LLMs are still out there, and still growing, but many organizations are looking for their models to be far less extreme. They don’t need (or want) a model that includes everything anyone ever learned about anything; rather, they need models that are fine-tuned with data that is relevant to the business, that don’t necessarily require state-of-the-art GPUs, and that promote transparency and trust. As Matt Hicks, CEO of Red Hat, put it, “Small models unlock adoption.”


Systems Thinking in Leading Transformation for the Future

The first step is aligning your internal goals with your external insights. Leaders must articulate a clear vision that ties the organization's purpose to broader societal and industry trends. For Nooyi and PepsiCo, that meant “starting from the outside.” Nooyi tasked her senior leaders with identifying external factors that would likely impact the company. She said, “They pointed to several megatrends … including a preoccupation with health and wellness, scarcity of water and other natural resources, constraints created by global climate change … and a talent market characterized by shortages of key people.” ... Systems thinking involves understanding the interdependencies within and outside an organization. For example, if you are embarking on any transformation project, you’ll likely need to explore new partnerships with suppliers, regional authorities, and regulators. ... Using frameworks like OKRs (Objectives and Key Results), you can evaluate how each initiative within your transformation program contributes to the overarching objective. A laudable main aim such as a commitment to environmental sustainability, for example, would likely involve numerous associated projects, such as water conservation, waste reduction, and carbon footprint reduction.


The 2024 cyberwar playbook: Tricks used by nation-state actors

While nation-state actors loved zero days for swift break-ins, phishing remained a sly plan B. It let them craft sneaky schemes to worm into systems, proving that 2024 was the year of both bold strikes and artful cons. Russian nation-state actors leaned heavily on phishing in 2024, with other APTs, like Iranian and Pakistani groups, dabbling in the tactic as well. The following are some of the standout campaigns from 2024 where phishing was the go-to for initial access. ... While credential harvesting through malware delivered via phishing was fairly common, nation-state actors rarely resorted to scavenging credentials from hack forums or drop sites as a primary tactic. When asked, Hughes noted, “I’m not familiar with this being the primary MO by the APTs, who instead are targeting devices, products and vendors with vulnerabilities and misconfigurations, but once inside, they do compromise credentials and use those to pivot, move laterally, persist in environments and more.” ... These actors weren’t always about flashy, custom malware. Quite often, they used legit tools like PowerShell, rootkits, RDP, and other off-the-shelf system features to sneak in, stay undetected, and set up long-term access. This made their attacks stealthy, persistent, and ready for future moves. 


Generative AI is now a must-have tool for technology professionals

As part of this trend, "we are witnessing developers shift from writing code to orchestrating AI agents," said Jithin Bhasker, general manager and vice president at ServiceNow. The efficiency gained from gen AI adoption by technologists isn't just about personal productivity; it's urgent "with the projected shortage of half a million developers by 2030 and the need for a billion new apps," he added. ... Still, as gen AI becomes a commonplace tool in technology shops, Berent-Spillson advises caution. "The real game-changer here is speed, but there's a catch," he said. "While AI can dramatically compress cycle time, it will also amplify any existing process constraints. Think of it like adding a supercharger to your car -- if your chassis isn't solid, you're just going to get to the problem faster." Exercise caution "regarding code quality, maintainability, and IP considerations," McDonagh-Smith advises. "AI tools have been seen to create code that, while syntactically correct, is logically flawed or inefficient, leading to potential code degradation over time if not reviewed carefully. We should also guard against software sprawl, where the ease of creating AI-generated code results in overly complex or unnecessary code that might make projects more difficult to maintain over time."



Quote for the day:

"Difficulties in life are intended to make us better, not bitter." -- Dan Reeves