
Daily Tech Digest - January 16, 2026


Quote for the day:

"Common sense is something that everyone needs, few have, and none think they lack" -- Benjamin Franklin



If you think agentic AI is a challenge, you’re not ready for what’s coming

The convergence of technology is happening all at once. You’ve got new processes being put in place while simultaneously replacing legacy infrastructure. You’ve got new technology, new talent being rolled into this convergence. Meanwhile, physical AI and quantum are coming quickly on top of agentic. Adaptability is the new job security. The ability to adapt is the most important skill for employees and the most important organizational differentiator. Organizations that can adapt quickly to new technology, redefining processes and training — that’s how they’ll differentiate. The ones that can’t will fall behind. ... It’s becoming not a technology issue as much as a business and process issue. The technology — whether AI, agentic AI, physical AI, or quantum — mostly exists to solve today’s problems. The issue is training, people, and adoption. ... Some industries, like financial services and healthcare [and] precision medicine — financial services has over-invested for decades in data and data quality for compliance reasons. They can use it for AI and quantum. Precision medicine is another category with high data quality. But without the right data, infrastructure, and sandbox, you’ll spread yourself too thin. You may try things, but it doesn’t get you value. Without a defined use case and focus area, you create innovation theater. Companies are getting focused on that first step: What use case am I trying to solve? 


AI Is Compressing the Coding Layer: Here's What Developers Do Next

One of the most encouraging developments in 2025 has been AI's ability to accelerate developer progression and skill growth. In our Q4 survey, 74% of developers said AI strengthened their technical skills. As lower-level execution becomes increasingly automated, developers who can work across systems, evaluate tradeoffs, and guide AI-driven workflows are progressing faster than in previous cycles. ... More than half (55%) also expect AI proficiency to accelerate progression and compensation. This reflects a rising demand for talent that can pair technical depth with architectural and systems thinking. ... Engineering teams are beginning to resemble higher-skill strategic units with stronger cross-functional alignment and architectural leadership. 58% of developers expect teams to become smaller and leaner next year as entry-level coding tasks are increasingly automated. Similarly, more than half (58%) of project managers report that 10-30% of project tasks could be handled by AI-driven workflows in 2026, including documentation generation, automated testing, code completion/refactoring, and requirements/user story drafting. These aren't the most visible tasks, but they've historically consumed a disproportionate share of time. ... To thrive in 2026 and beyond, developers should build competency in orchestrating AI workflows, invest in architectural and systems design literacy, and strengthen their fluency in data engineering, security, and cloud foundations.


Insider risk in an age of workforce volatility

Economic pressures, AI-driven job displacement, and relentless organizational churn are driving insider risk to its highest level in years. Workforce instability erodes loyalty and heightens grievances. The accelerating deployment of powerful new tools, such as AI agents, amplifies the threats from within, both human and machine. ... This surge, up significantly from prior years, creates fertile ground for disgruntlement: financial stress, resentment over automation, and opportunistic behavior, from negligence and careless data handling to deliberate malevolent actions like data exfiltration and credential monetization. ... They are becoming exploitable vectors for silent data exfiltration, disruption, or unintended catastrophe. This is particularly concerning when volatility reduces human oversight and rushes deployment without commensurate controls. Palo Alto Networks’ 2026 cybersecurity predictions emphasize that these agents introduce vulnerabilities such as goal hijacking, tool misuse, prompt injection, and shadow deployment, often amplified by the very churn that drives their adoption across multinational organizations. Security leaders are taking note. ... There is no doubt that such anxiety from ongoing layoffs and role uncertainty can lead to nervous mistakes, privilege hoarding, or rushed workarounds that expose data without intent to harm. Yet the harm is real all the same. The result is a heightened insider risk landscape that is amplified when the interplay between human churn and machine proliferation is overlooked.


Creating Trust Through Data Is a Long Game — Advantage Solutions CDO

“Trust starts with the rapport with individuals. It starts with listening. It doesn’t start with building solutions.” She highlights that facts alone don’t solve decision-making challenges. Business intuition still matters — but it must be balanced with truth derived from data. “Sometimes the facts alone aren’t enough. There’s a balance between data and the business-led gut experience. All of it is important.” Trust requires time, consistency, and transparency. ... O’Hazo frames AI not as a disruption, but as a spotlight. “AI is almost spotlighting the need for foundational data.” The reason: modern organizations need to answer multidimensional questions, not isolated ones. “It’s no longer a singular flat question. It’s ‘How is X related to Y, and what are the factors that drive growth?’ To answer that, you need data from so many different functions organized and architected the right way.” This interconnection does more than support analytics; it transforms relationships across the business. “When you start to interconnect the data, you naturally and organically have meaningful conversations across functions.” ... Turajski raises the common phrase “source of truth,” asking whether AI has changed how organizations think about it. O’Hazo’s response is clear: AI doesn’t rewrite the rules; it reveals the gaps. “AI is spotlighting, sometimes unfavorably, where the pre-work on the data foundation hasn’t accelerated enough.” This wake-up call has elevated data readiness to a board-level priority.


The workforce shift — why CIOs and people leaders must partner harder than ever

For the last decade or so, digital transformation has been framed as a technology challenge. New platforms. Cloud migrations. Data lakes. APIs. Automation. Security layered on top. It was complex, often messy and rarely finished — but the underlying assumption stayed the same: Humans remained at the center of work, with technology enabling them. ... AI is just technology. But it feels human because it has been designed to interact with us in human ways. Large language models combined with domain data create the illusion that AI can do anything. Maybe one day it will. Right now, what it can do is expose how unprepared most organizations are for the scale and pace of change it brings. We are all chasing competitive advantages — revenue growth, margin improvement, improving resilience — and AI is being positioned as the shortcut. But unlike previous waves of automation, this one does not sit neatly inside a single function. ... Perception becomes reality very quickly inside organizations. If people believe AI is a colleague, what does that mean for accountability, trust and decision-making? Who owns outcomes when work is split between humans and machines? These are not abstract questions — they show up in performance, morale and risk. ... For years, organizations have layered technology on top of broken processes. Sometimes that was a conscious trade-off to move faster. Sometimes it was avoidance. Either way, humans could usually compensate.


CIO Playbook for Post-Quantum Security

While the scope of migration to post-quantum cryptography can be daunting, CIOs can follow several practical steps to make the project more manageable, said Sandy Carielli, vice president and principal analyst at Forrester. "There's a process here that's going to need to be addressed in order to get to where the organization needs to be," she said. "Discover, prioritize, remediate and add cryptographic agility." One of the biggest misconceptions she sees from CIOs is on what being ready for quantum-resistant security means. "Sometimes people have the misconception that you need a quantum computer for quantum security," Carielli said. "You don't need quantum computers. And, in fact, you're not going to. You're doing this to be protected." ... Designing for crypto agility is the final step in the process, and organizations should strive to design systems so that algorithm changes require only configuration changes, not re-architecting. "Good for crypto agility means that the next time an algorithm is broken, we are able to adapt to that by changing a configuration. We're able to adapt in a matter of weeks, rather than a matter of years," Carielli said. The regulatory impact should make quantum migration an easier sell than it would have been even a few years ago, as deadlines loom in the United States, Australia, the EU, and across Asia. "Regardless of when a quantum computer is going to be able to break today's cryptography, we are being asked to migrate by the organizations and the countries that we want to do business with," Carielli said.
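As an illustrative sketch (not from the article), crypto agility can mean routing every cryptographic call through a config-driven registry, so that retiring a broken algorithm is a configuration change rather than a rewrite of every call site. The registry, `CONFIG` structure, and use of stdlib HMAC as a stand-in for a real signing scheme are all hypothetical:

```python
# Hypothetical sketch of crypto agility: callers never name an algorithm
# directly; they go through a registry keyed by a config value.
import hashlib
import hmac

# Adding a post-quantum or replacement algorithm means registering it here
# and flipping CONFIG -- no call sites change.
HASHERS = {
    "sha256": hashlib.sha256,
    "sha3_256": hashlib.sha3_256,
}

CONFIG = {"mac_alg": "sha256", "key": b"demo-key"}

def sign(message: bytes) -> str:
    # Algorithm choice is resolved from configuration at call time.
    digestmod = HASHERS[CONFIG["mac_alg"]]
    return hmac.new(CONFIG["key"], message, digestmod).hexdigest()

tag_old = sign(b"hello")
CONFIG["mac_alg"] = "sha3_256"  # "algorithm broken" -> one config line changes
tag_new = sign(b"hello")
```

The same pattern applies to TLS suites, signature schemes, or key-exchange algorithms: the migration cost concentrates in the registry and the configuration, which is what makes "weeks, rather than years" plausible.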


When your platform team can’t say yes: How away-teaming unlocks stuck roadmaps

Away teaming inverts the traditional model. Instead of platform engineers embedding with product teams to provide expertise, product engineers temporarily join platform teams to build required capabilities under platform guidance. ... Product teams have already secured funding for their initiatives. Away teaming redirects that investment from building a product-specific solution into creating a reusable platform capability. For platform teams, this expands effective capacity without headcount growth. Platform engineers provide design review, answer questions and conduct code review. ... Product engineers need to view away teaming as a growth opportunity, not a sacrifice. Frame it explicitly as platform engineering experience that builds broader systems thinking skills and deepens architectural understanding. ... Away teaming works best for capabilities in the middle ground: too product-specific for immediate platform prioritization, yet general enough that future products will benefit from reuse. Away teaming also has scale limits. A platform team might effectively support two concurrent away team engagements. Beyond that, guidance capacity becomes strained. ... Product engineers who complete away team assignments become platform advocates. They understand the architectural tradeoffs and can credibly explain platform limitations, reducing tension and frustration between teams.


Forget Predictions: True 2026 Cybersecurity Priorities From Leaders

Most organizations, large and small, are inundated with manual tasks, which makes many of our processes very expensive. This is compounded by economic forces that many organizations face today, which limits their ability to hire additional staff. For years, the industry has been working to solve these problems with SOAR, RPA bots, or other programmatic solutions to do this bulk work. I think the use of AI extends the work we have already done in that space, but in a broader application. ... The promise of SOAR is centralized orchestration. The reality is months of costly, brittle integration work that breaks with every vendor update. We spend more time maintaining the automation pipeline than the pipeline saves us. We don’t have enough people who can build, train, and maintain sophisticated AI/ML models while understanding threat hunting. The technology requires a new, hyper-specialized skill set, defeating the goal of efficiency. The single most impactful shift for efficiency in 2026 will be the Process and People shift toward Radical Simplification and Security Accountability Diffusion. ... “The shift I’m pushing for is toward collaborative intelligence that actually tells us which threats matter for our specific environment. Context is king here, and I’m encouraged by the emergence of solutions that analyze signals across multiple organizations to provide internet-wide defense. But this only works if we’re all willing to put in what we want to get out of it, meaning reliably sharing intelligence with peers and industry groups, not just consuming it.”


DCI launches digital identity interoperability standards for social protection

Authorities are increasingly leveraging digital identification systems to achieve this goal and ensure their social protection (SP) programs are inclusive. ... These open standards provide a trusted mechanism for social protection systems to authenticate individuals and request verified identity data, such as demographic attributes or authentication tokens, in a privacy-preserving way. The standards are not about building ID systems themselves or about integrating with health or education platforms, DCI emphasized. Rather, they’re focused squarely on enabling interoperability between ID and social protection systems. This includes supporting social registries, integrated beneficiary registries and other SP platforms “to connect meaningfully and securely with ID systems.” DCI said the release culminates months of research, peer review and collaboration by a standards committee comprising experts from 20 organizations. By establishing a common technical language, the initiative aims to strengthen digital public infrastructure and foster greater trust in the delivery of social protection programs. ... “Digital transformation of social protection is not an end in itself and it’s not only about cutting costs,” said ILO director Shahra Razavi. “It is about making sure everyone has access to benefits and services, particularly those most at risk of vulnerability and exclusion.”


Data Governance in the AI Era: Are We Solving the Wrong Problem?

The foundation of any effective AI governance model starts with visibility and control. Create a living list of sanctioned AI tools tied to enterprise accounts, as distinct from personal accounts and shadow IT. Once you have that visibility, require that all AI usage go through company-issued credentials, ensuring every login is accountable and logged. Users authenticate through your identity provider, and audit trails capture usage patterns. When you can trace who accessed which tool and when, you can create records that support both compliance requirements and incident investigation. ... One of the biggest mistakes organizations make is treating all data the same way, imposing blanket bans that create friction without proportional security benefit. A more effective approach classifies data by sensitivity level and creates rules aligned with that classification. ... If your policy today looks like a wall of “no,” you’re probably protecting yourself from the wrong consequence. The real risk isn’t that AI will suddenly go rogue; it’s that your people will use it without guidance, visibility, or control. Unmanaged adoption creates the very data leakage you’re trying to prevent. Managed adoption, by contrast, through clear policy and good governance, creates visibility, accountability, and the ability to detect and respond to actual incidents. Data professionals occupy a critical position in this conversation: they own the data architecture, the classification systems, and the audit trails that make AI governance possible.
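A minimal sketch of what classification-aligned rules could look like, assuming a hypothetical three-tier scheme and invented tool names. The point the excerpt makes is that policy varies with data sensitivity rather than being a blanket ban:

```python
# Hypothetical policy table: rules are keyed by data classification,
# not applied uniformly. Tool names and tiers are invented for illustration.
POLICY = {
    "public":       {"allowed_tools": {"chatgpt", "copilot"}},
    "internal":     {"allowed_tools": {"copilot"}},
    "confidential": {"allowed_tools": set()},  # no external AI tools
}

def may_use(tool: str, classification: str) -> bool:
    """Return True if this AI tool is sanctioned for data of this class."""
    return tool in POLICY[classification]["allowed_tools"]

# Every decision can also be logged against company-issued credentials,
# giving the audit trail the excerpt describes.
assert may_use("chatgpt", "public")
assert not may_use("chatgpt", "confidential")
```

In practice the classification would come from a data catalog and the check would sit in an identity-aware proxy, but the shape of the rule set is the same.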

Daily Tech Digest - September 24, 2025


Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe


Managing Technical Debt the Right Way

Here’s the uncomfortable truth: most executives don’t care about technical purity, but they do care about value leakage. If your team can’t deliver new features fast enough, if outages are too frequent, if security holes are piling up, that is financial debt—just wearing a hoodie instead of a suit. The BTABoK approach is to make debt visible in the same way accountants handle real liabilities. Use canvases, views, and roadmaps to connect the hidden cost of debt to business outcomes. Translate debt into velocity lost, time to market, and risk exposure. Then prioritize it just like any other investment. ... If your architects can’t tie debt decisions to value, risk, and strategy, then they’re not yet professionals. Training and certification are not about passing an exam. They are about proving you can handle debt like a surgeon handles risk—deliberately, transparently, and with the trust of society. ... Let’s not sugarcoat it: some executives will always see debt as “nerd whining.” But when you put it into the lifecycle, into the transformation plan, and onto the balance sheet, it becomes a business issue. This is the same lesson learned in finance: debt can be a powerful tool if managed, or a silent killer if ignored. BTABoK doesn’t give you magic bullets. It gives you a discipline and a language to make debt a first-class concern in architectural practice. The rest is courage—the courage to say no to shortcuts that aren’t really shortcuts, to show leadership the cost of delay, and to treat architectural decisions with the seriousness they deserve.


How National AI Clouds Undermine Democracy

The rapid spread of sovereign AI clouds unintentionally creates a new form of unchecked power. It combines state authority with corporate technology in unclear public-private partnerships. This combination centralizes surveillance and decision-making power, extending far beyond effective democratic oversight. The pursuit of national sovereignty undermines the civic sovereignty of individuals. ... The unique and overlooked danger is the rise of a permanent, unelected techno-bureaucracy. Unlike traditional government agencies, these hybrid entities are shielded from democratic pressures. Their technical complexity acts as a barrier against public understanding and journalistic inquiry. ... no sovereign cloud should operate without a corresponding legislative data charter. This charter, passed by the national legislature, must clearly define citizens' rights against algorithmic discrimination, set explicit limits on data use, and create transparent processes for individuals harmed by the system. It should recognize data portability as an essential right, not just a technical feature. ... every sovereign AI initiative should be mandated to serve the public good. These systems must legally demonstrate that they fulfill publicly defined goals, with their performance measured and reported openly. This directs the significant power of AI toward applications that benefit the public, such as enhancing healthcare outcomes or building climate resilience.


IT’s renaissance risks losing steam

IT-enabled value creation will etiolate without the sustained light of stakeholder attention. CIOs need to manage IT signals, symbols, and suppositions with an eye toward recapturing stakeholder headspace. Every IT employee needs to get busy defanging the devouring demons of apathy and ignorance surrounding IT operations today. ... We need to move beyond our “hero on horseback” obsession with single actors. Instead we need to return our efforts forcefully to l’histoire des mentalités — the study of the mental universe of ordinary people. How is l’homme moyen sensuel (the man on the street) dealing with the technological choices arrayed before him? ... The IT pundits’ much discussed promise of “technology transformation” will never materialize if appropriate exothermic — i.e., behavior-inducing and energy creating — IT ideas have no mass following among those working at the screens around the world. ... As CIO, have you articulated a clear vision of what you want IT to achieve during your tenure? Have you calmed the anger of unmet expectations, repaired the wounds of system outages, alleviated the doubts about career paths, charted a filled-with-benefits road forward and embodied the hopes of all stakeholders? ... The cognitive elephant in the room that no one appears willing to talk about is the widespread technological illiteracy of the world’s population.


How One Bad Password Ended a 158-Year-Old Business

KNP's story illustrates a weakness that continues to plague organizations across the globe. Research from Kaspersky analyzing 193 million compromised passwords found that 45% could be cracked by hackers within a minute. And when attackers can simply guess or quickly crack credentials, even the most established businesses become vulnerable. Individual security lapses can have organization-wide consequences that extend far beyond the person who chose "Password123" or left their birthday as their login credential. ... KNP's collapse demonstrates that ransomware attacks create consequences far beyond an immediate financial loss. Seven hundred families lost their primary income source. A company with nearly two centuries of history disappeared overnight. And Northamptonshire's economy lost a significant employer and service provider. For companies that survive ransomware attacks, reputational damage often compounds the initial blow. Organizations face ongoing scrutiny from customers, partners, and regulators who question their security practices. Stakeholders seek accountability for data breaches and operational failures, leading to legal liabilities. ... KNP joins an estimated 19,000 UK businesses that suffered ransomware attacks last year, according to government surveys. High-profile victims have included major retailers like M&S, Co-op, and Harrods, demonstrating that no organization is too large or established to be targeted.
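A back-of-the-envelope sketch of why weak credentials fall so fast: brute-force search space grows with password length and character pool, and dictionary-based passwords like "Password123" are effectively far weaker than their raw character-space arithmetic suggests, since attackers try common passwords first. The guess rate below is an assumed figure for an offline GPU attack, not a measured one:

```python
# Illustrative estimate of brute-force search space. This deliberately
# OVERSTATES the strength of dictionary words like "Password123", which
# fall to wordlist attacks almost instantly regardless of their length.
import math

def search_space_bits(password: str) -> float:
    """Bits of search space if an attacker must try the full character pool."""
    pools = [
        (any(c.islower() for c in password), 26),
        (any(c.isupper() for c in password), 26),
        (any(c.isdigit() for c in password), 10),
        (any(not c.isalnum() for c in password), 33),
    ]
    pool = sum(size for used, size in pools if used)
    return len(password) * math.log2(pool) if pool else 0.0

def seconds_to_crack(password: str, guesses_per_sec: float = 1e10) -> float:
    # 1e10 guesses/sec is an assumed rate for an offline GPU rig.
    return 2 ** search_space_bits(password) / guesses_per_sec

# Longer passwords with mixed pools expand the search space exponentially.
weak_bits = search_space_bits("password")       # single pool, 8 chars
stronger_bits = search_space_bits("Password123")  # mixed pools, 11 chars
```

The gap between the arithmetic and the "cracked within a minute" finding is exactly the point: real attackers don't brute-force the full space, they start from leaked-password lists, which is why credential hygiene matters more than character-set rules.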


Has the UK’s Cyber Essentials scheme failed?

There are several reasons why larger organisations may steer clear of CE in its current form, explains Kearns. “They typically operate complex, often geographically dispersed networks, where basic technical controls driven by CE do not satisfy organisational appetite to drive down risk and improve resilience,” she says. “The CE control set is also ‘absolute’ and does not allow for the use of compensating controls. Large complex environments, on the other hand, often operate legacy systems that require compensating controls to reduce risk, which prevents compliance with CE.” The point-in-time nature of assessment is also a poor fit for today’s dynamic IT infrastructure and threat environments, argues Pierre Noel, field CISO EMEA at security vendor Expel. ... “For large enterprises with complex IT environments, CE may not be comprehensive enough to address their specific security needs,” says Andy Kays, CEO of MSSP Socura. “Despite these limitations, it still serves a valuable purpose as a baseline, especially for supply chain assurance where larger companies want to ensure their smaller partners have a minimum level of security.” Richard Starnes is an experienced CISO and chair of the WCIT security panel. He agrees that large enterprises should require CE+ certification in their supplier contracts, where it makes sense. “This requirement should also include a contract flow-down to ensure that their suppliers’ downstream partners are also certified,” says Starnes.


Is Your Data Generating Value or Collecting Digital Dust?

Economic uncertainty is prompting many com­panies to think about how to do more with less. But what if they’re actually positioned to do more with more and just don’t realize it? Many organizations already have the resources they need to improve efficiency and resilience in challenging times. Close to two-thirds of organi­zations manage 1 petabyte or more of data, which represents enough data to cover 500 billion standard pages of text. More than 40% of companies store even more data. Much of that data sits unanalyzed while it incurs costs related to collection, compliance, and storage. It also poses data breach risks that require expensive security measures to prevent. ... Engaging with too many apps often makes employees less efficient than they could be. In 2024, companies used an average of 21 apps just for HR tasks. Multiply that across different functions, and it’s easy to see how finding ways to reduce the total could bring down costs. Trimming the number of apps can also increase productivity by reducing employee overwhelm. Constantly switching between different apps and systems has been shown to distract employees while increasing their levels of stress and frustration. Across the orga­nization, switching among tasks and apps consumes 9% of the average employee’s time at work by chipping away at their atten­tion and ability to focus a few seconds at a time with each of the hundreds of task switches they perform every day.


The history and future of software development

For any significant piece of software back then, you needed stacks of punch cards. Yes, 1000 lines of code needed 1000 cards. And you needed to have them in order. Now, imagine dropping that stack of 1000 cards! It would take me ages to get them back in order. Devs back then experienced this a lot—so some of them went ahead and had creative ways of indicating the order of these cards. ... By the mid-1970s, affordable home computers were starting to become a reality. Instead of a computer just being a work thing, hobbyists started using computers for personal things—maybe we can call these, I don't know...personal computers. ... Assembler and assembly tend to be used interchangeably, but they are in reality two different things. Assembly is the actual language—the syntax and instructions—and is tightly coupled to the architecture. The assembler is the piece of software that assembles your assembly code into machine code—the thing your computer knows how to execute. ... What about writing the software? Did they use git back then? No, git only came out in 2005, so back then software version control was quite the manual effort. From developers having their own way of managing source code locally to even having wall charts where developers can "claim" ownership of certain source code files. For those that were able to work on a shared (multi-user) system, or had an early version of some networked storage—source code sharing was as easy as handing out floppy disks.


Why the operating system is no longer just plumbing

Many enterprises still think of the operating system as a “static” or background layer that doesn’t need active evolution. The reality is that modern operating systems like Red Hat Enterprise Linux (RHEL) are dynamic, intelligent platforms that actively enable and optimize everything running on top of them. Whether you're training AI models, deploying cloud-native applications, or managing edge devices, the OS is making thousands of critical decisions every second about resource allocation, security enforcement, and performance optimization. ... With image mode deployments, zero-downtime updates, and optimized container support, RHEL ensures that even resource-constrained environments can maintain enterprise-grade reliability. We’ve also focused heavily on security—confidential computing, quantum-resistant cryptography, and compliance automation—because edge environments are often exposed to greater risk. These choices allow RHEL to deliver resilience in conditions where compute power, space, and connectivity are limited. ... We don't just take community code and ship it — we validate, harden, and test everything extensively. Red Hat bridges this gap by being an active contributor upstream while serving as an enterprise-grade curator downstream. Our ecosystem partnerships ensure that when new technologies emerge, they work reliably with RHEL from day one.


Ransomware now targeting backups, warns Google’s APAC security chief

Backups often contain sensitive data such as personal information, intellectual property, and financial records. Pereira warned that attackers can use this data as extra leverage or sell it on the dark web. The shift in focus to backup systems underscores how ransomware has become less about disruption and more about business pressure. If an organisation cannot restore its systems independently, it has little choice but to consider paying a ransom. ... Another troubling trend is “cloud-native extortion,” where attackers abuse built-in cloud features, such as encryption or storage snapshots, to hold systems hostage. Pereira explained that many organisations in the region are adapting by shifting to identity-focused security models. “Cloud environments have become the new perimeter, and attackers have been weaponising cloud-native tools,” he said. “We now need to enforce strict cloud security hygiene, such as robust MFA, least privilege access, proactive monitoring of role access changes or credential leaks, using automation to detect and remediate misconfigurations, and anomaly detection tools for cloud activities.” He pointed to rising investments in identity and access management tools, with organisations recognising their role in cutting down the risk of identity-based attacks. For APAC businesses, this means moving away from legacy perimeter defences and embracing cloud-native safeguards that assume breaches are inevitable but limit the damage.
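The automated misconfiguration detection Pereira describes could, in miniature, look like the sketch below. The resource inventory, field names, and checks are invented for illustration; real tooling would pull this inventory from a cloud provider's APIs:

```python
# Hypothetical sketch: sweep a cloud resource inventory and flag the hygiene
# issues listed in the quote -- missing MFA, over-broad access, public storage.
resources = [
    {"id": "user:alice",  "type": "identity", "mfa": True,  "roles": ["reader"]},
    {"id": "user:bob",    "type": "identity", "mfa": False, "roles": ["admin"]},
    {"id": "bucket:logs", "type": "storage",  "public": True},
]

def find_misconfigurations(inventory):
    findings = []
    for r in inventory:
        if r["type"] == "identity" and not r["mfa"]:
            findings.append((r["id"], "MFA not enforced"))
        if r["type"] == "identity" and "admin" in r.get("roles", []):
            findings.append((r["id"], "broad privileges; review least-privilege"))
        if r["type"] == "storage" and r.get("public"):
            findings.append((r["id"], "publicly accessible storage"))
    return findings

issues = find_misconfigurations(resources)
```

Run on a schedule (or triggered by configuration-change events), a sweep like this is what "using automation to detect and remediate misconfigurations" amounts to, with remediation hooks attached to each finding type.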


AI Won't Replace Developers, It Will Make the Best Ones Indispensable

The replacement theory assumes AI can work independently, but it can't. Today's AI coding tools don't run themselves, they need active steering. Most AI tools today operate on a "prompt and pray" model: give the AI instructions, get code back, hope it works. That's fine for demos or side projects, but production environments are far less forgiving. ... AI doesn't level the playing field between developers, it widens it. Using AI effectively requires the same skills that make great developers great: understanding system architecture, recognizing security implications, writing maintainable code. ... Tomorrow's junior developers will need to get productive in a different way. Instead of spending months learning basic syntax and patterns, they'll start by learning to collaborate with AI agents effectively. Those who can adapt will find opportunities, and those who can't might struggle to break in. This shift actually creates more demand for senior engineers, because someone needs to train these AI-assisted junior developers, architect systems that can handle AI-generated code at scale, and establish the processes and standards that keep AI tools from creating chaos. ... The teams succeeding with AI coding treat agents like exceptionally capable junior teammates who need oversight. They provide detailed context, review generated code, and test thoroughly before deployment rather than optimizing purely for speed.

Daily Tech Digest - August 05, 2025


Quote for the day:

"Let today be the day you start something new and amazing." -- Unknown


Convergence of Technologies Reshaping the Enterprise Network

"We are now at the epicenter of the transformation of IT, where AI and networking are converging," said Antonio Neri, president and CEO of HPE. "In addition to positioning HPE to offer our customers a modern network architecture alternative and an even more differentiated and complete portfolio across hybrid cloud, AI and networking, this combination accelerates our profitable growth strategy as we deepen our customer relevance and expand our total addressable market into attractive adjacent areas." Naresh Singh, senior director analyst at Gartner, told Information Security Media Group that the merger of two networking heavyweights would make the networking landscape interesting in the near future. ... Security vendors have long tackled cyberthreats through robust portfolios, including next-generation firewalls, endpoint security, secure access service edge, intrusion detection system or intrusion prevention system, software-defined wide area network and network security management. But the rise of AI and large language models has introduced new risks that demand a deeper transformation across people, processes and technology. As organizations recognize the need for a secure foundation, many are accelerating their AI adoption initiatives.


Blind spots at the top: Why leaders fail

You’ve stopped learning. Not because there’s nothing left to learn, but because your ego can’t handle starting from scratch again. You default to what worked five years ago. Meanwhile, your environment has moved on, your competitors have pivoted, and your team can smell the stagnation. Ultimately, you are an architect of resilience and trust. As Alvin Toffler warned, “The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.” ... Believing you’re always right is a shortcut to irrelevance. When you stop listening, you stop leading. You confuse confidence with competence and dominance with clarity. You bulldoze feedback and mistake silence for agreement. That silence? It’s fear. ... Stress is part of the job. But if every challenge sends you into a spiral, your people will spend more time managing your mood than solving real problems. Fragile leaders don’t scale. Their teams shrink. Their influence dries up. Strong leadership isn’t about acting tough. It’s about staying grounded when things go sideways. ... You think you’re empowering, but you’re micromanaging. You think you’re a visionary, but your team sees a control freak. You think you’re a mentor, but you dominate every meeting. The gap between intent and impact? That’s where teams disengage. The worst part? No one will tell you unless you build a culture where they can.


9 habits of the highly ineffective vibe coder

It’s easy to think that one large language model is the same as any other. The interfaces are largely identical, after all. In goes some text and out comes a magic answer, right? LLMs even tend to give similar answers to easy questions. And their names don’t even tell us much, because most LLM creators choose something cute rather than descriptive. But models have different internal structures, which can affect how well they unpack and understand problems that involve complex logic, like writing code. ... Many developers don’t realize how much LLMs are affected by the size of their input. The model must churn through all the tokens in your prompt before it can generate something that might be useful to you. More input tokens require more resources. Habitually dumping big blocks of code on the LLM can start to add up. Do it too much and you’ll end up overwhelming the hardware and filling up the context window. Some developers even talk about just uploading their entire source folder “just in case.” ... AI assistants do best when they’re focusing our attention on some obscure corner of the software documentation. Or maybe they’re finding a tidbit of knowledge about some feature that isn’t where we expected it to be. They’re amazing at searching through a vast training set for just the right insight. They’re not always so good at synthesizing or offering deep insight, though.
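The cost of the "upload the entire source folder" habit is easy to see with a back-of-the-envelope token estimate. The sketch below is purely illustrative: the roughly 4-characters-per-token ratio is a common rule of thumb for English text and code, not a measurement from any particular tokenizer.

```python
# Back-of-the-envelope cost of pasting whole files into a prompt. The
# ~4 chars-per-token ratio is a rough rule of thumb, not a real tokenizer.

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Crude token estimate; actual BPE tokenizers will differ."""
    return int(len(text) / chars_per_token)

# A focused prompt: one function plus a question.
focused = "def add(a, b):\n    return a + b\n\nWhy does this fail on strings?"

# A "just in case" prompt: the same question buried under 200 KB of source.
dumped = ("x" * 200_000) + focused

print(estimate_tokens(focused))  # a few dozen tokens
print(estimate_tokens(dumped))   # ~50,000 tokens, enough to crowd a context window
```

The question is identical in both prompts, but the second one makes the model churn through tens of thousands of tokens before it can say anything useful.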


How to Eliminate Deployment Bottlenecks Without Sacrificing Application Security

As organizations embrace DevOps to accelerate innovation, the traditional approach of treating security as a checkpoint begins to break down. The result? Security either slows releases or, even worse, gets bypassed altogether amidst the need to deliver as quickly as possible. ... DevOps has reshaped software delivery, with teams now expected to deploy applications at high velocity, using continuous integration and delivery (CI/CD), microservices architectures, and container orchestration platforms like Kubernetes. But as development practices evolved, many security tools have not kept pace. While traditional Web Application Firewalls (WAFs) remain effective for many use cases, their operational models can become challenging when applied to highly dynamic, modern development environments. In such scenarios, they often introduce delays, limit flexibility, and add operational burden instead of enabling agility. ... Modern architectures introduce constant change. New microservices, APIs, and environments are deployed daily. Traditional WAFs, built for stable applications, rely on domain-first onboarding models that treat each application as an isolated unit. Every new domain or service often requires manual configuration, creating friction and increasing the risk of unprotected assets.


Anthropic wants to stop AI models from turning evil - here's how

In a paper released Friday, the company explores how and why models exhibit undesirable behavior, and what can be done about it. A model's persona can change during training and once it's deployed, when user inputs start influencing it. This is evidenced by models that may have passed safety checks before deployment, but then develop alter egos or act erratically once they're publicly available ... Anthropic admitted in the paper that "shaping a model's character is more of an art than a science," but said persona vectors are another arm with which to monitor -- and potentially safeguard against -- harmful traits. In the paper, Anthropic explained that it can steer these vectors by instructing models to act in certain ways -- for example, if it injects an evil prompt into the model, the model will respond from an evil place, confirming a cause-and-effect relationship that makes the roots of a model's character easier to trace. "By measuring the strength of persona vector activations, we can detect when the model's personality is shifting towards the corresponding trait, either over the course of training or during a conversation," Anthropic explained. "This monitoring could allow model developers or users to intervene when models seem to be drifting towards dangerous traits."
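The monitoring idea reduces to simple geometry: a persona vector is a direction in activation space, and the strength of a trait is the projection of the model's current activation onto that direction. The sketch below illustrates only that geometry; the vectors, dimensions, and magnitudes are invented, and Anthropic's actual method derives persona vectors from real model activations.

```python
import math
import random

# Toy geometry of persona-vector monitoring. The "persona vector" here is
# random noise; in Anthropic's work it is derived from the model's actual
# activations. Dimensions and magnitudes are invented for illustration.

random.seed(0)
HIDDEN_DIM = 64

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

raw = [random.gauss(0, 1) for _ in range(HIDDEN_DIM)]
norm = math.sqrt(dot(raw, raw))
persona_vector = [x / norm for x in raw]       # unit-length trait direction

def trait_strength(activation):
    """Scalar projection of an activation onto the persona direction."""
    return dot(activation, persona_vector)

baseline = [random.gauss(0, 1) for _ in range(HIDDEN_DIM)]         # ordinary state
drifted = [b + 3.0 * p for b, p in zip(baseline, persona_vector)]  # pushed along the trait

# A monitor would flag the jump in projection strength over training or a chat.
print(f"baseline strength: {trait_strength(baseline):+.2f}")
print(f"drifted strength:  {trait_strength(drifted):+.2f}")
```

Because the drifted activation was shifted by exactly 3.0 units along the unit-length persona direction, its projection is exactly 3.0 higher than the baseline's, which is the kind of shift a monitor would surface.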


From Aspiration to Action: The State of DevOps Automation Today

One of the report's clearest findings is the advantage of engaging QA teams earlier in the development cycle. Teams practicing shift-left testing — bringing QA into planning, design, and early build phases — report higher satisfaction rates and stronger results overall. In fact, 88% of teams with early QA involvement reported satisfaction with their quality processes, and those teams also experienced fewer escaped defects and more comprehensive test coverage. Rather than testing at the end of the development cycle, early QA involvement enables faster feedback loops, better test design, and tighter alignment with user requirements. It also improves collaboration between developers and testers, making it easier to catch potential issues before they escalate into expensive fixes. ... While more DevOps teams recognize the importance of integrating security into the software development lifecycle (SDLC), sizable gaps remain. ... Many organizations still treat security as a separate function, disconnected from their routine QA and DevOps processes. This separation slows down vulnerability detection and remediation. These findings show the need for teams to better integrate security practices earlier in the SDLC, leveraging AI-driven tools that facilitate proactive threat detection and management.


Why the AI era is forcing a redesign of the entire compute backbone

Traditional fault tolerance relies on redundancy among loosely connected systems to achieve high uptime. ML computing demands a different approach. First, the sheer scale of computation makes over-provisioning too costly. Second, model training is a tightly synchronized process, where a single failure can cascade to thousands of processors. Finally, advanced ML hardware often pushes to the boundary of current technology, potentially leading to higher failure rates. ... As we push for greater performance, individual chips require more power, often exceeding the cooling capacity of traditional air-cooled data centers. This necessitates a shift towards more energy-intensive, but ultimately more efficient, liquid cooling solutions, and a fundamental redesign of data center cooling infrastructure. ... One important observation is that AI will, in the end, enhance attacker capabilities. This, in turn, means that we must ensure that AI simultaneously supercharges our defenses. This includes end-to-end data encryption, robust data lineage tracking with verifiable access logs, hardware-enforced security boundaries to protect sensitive computations and sophisticated key management systems. ... The rise of gen AI marks not just an evolution, but a revolution that requires a radical reimagining of our computing infrastructure. 


Industry Leaders Warn MSPs: Rolling Out AI Too Soon Could Backfire

“The biggest risk actually out there is deploying this stuff too soon,” he said. “If you push it really, really hard, your customers are going to be like, ‘This is terrible. I hate it. Why did you do this?’ That will change their opinion on AI for everything moving forward.” The message resonated with other leaders on the panel, including Heddy, who likened AI adoption to on-boarding a new employee. “I would not put my new employees in front of customers until I have educated them,” he said. “And so yes, you should roll [AI] out to your customers only when you are sure that what it is delivering is going to be good.” ... “Everybody’s just sort of siloed in their own little chat box. Wherever this agentic future is, we can all see that’s where it’s going, but at what point do we trust an agent to actually do something? ... “So what are the steps? What is the training that has to happen? How do we have all this information in context for the individual, the team, the entire organization? Where we’re headed is clear. Just … how long does that take?” ... “Don’t wait until you think you have it nailed and are the expert in the world on this to go have a conversation because those who are not experts on it are going to go have conversations with your customers about AI. We should consume it to make ourselves a better company, and then once we understand it well enough to sell it, only then should we go and try to sell it.”


Why Standards and Certification Matter More Than Ever

A major obstacle for enterprise IT teams is the lack of interoperability. Today's networked services span multiple clouds, edge locations and on-premises systems. Each environment brings unique security and compliance needs, making cohesive service delivery difficult. Lifecycle Service Orchestration (LSO), developed and advanced by Mplify, formerly MEF, offers a path through this complexity. With standardized and certified APIs and consistent service definitions, LSO supports automated provisioning and service management across environments and enables seamless interoperability between providers and platforms. ... In a world of constant change, standards and certification are strategic necessities. ... By reuniting around proven frameworks, organizations can modernize more confidently. Certification provides a layer of trust, ensuring solutions meet real-world requirements and work across the environments that enterprises rely on most. ... Standards and certification offer a way to cut through the complexity so networks, services and AI deployments can evolve without introducing new risks. Enterprises that succeed won't be the ones asking whether to adopt LSO, SASE or GPUaaS, but rather finding smart, swift ways to put them into practice.


Security tooling pitfalls for small teams: Cost, complexity, and low ROI

Retrofitting enterprise-grade platforms into SMB environments is often a disaster in the making. These tools are designed for organizations with layers of bureaucracy, complex structures, and entire teams dedicated to each security and compliance function. A large enterprise like Microsoft or Salesforce might have separate teams for governance, risk, compliance, cloud security, network security, and security operations. Each of those teams would own and manage specialized tooling, which in itself assumes domain experts running the show. ... “Compliance is not security” is a statement that sparks heated debates amongst many security experts. However, the reality is that even checklist-based compliance can help companies with no security in place build a strong foundation. Frameworks like SOC 2 and ISO 27001 help establish the baseline of a strong security program, ensuring you have coverage across critical controls. If you deal with Personally Identifiable Information (PII), GDPR is the gold standard for privacy controls. And with AI adoption becoming unavoidable, ISO 42001 is emerging as a key framework for AI governance, helping organizations manage AI risk and build responsible practices from the ground up.

Daily Tech Digest - August 16, 2024

W3C issues new technical draft for verifiable credentials standards

Part of the promise of the W3C standards is the ability to share only the data that’s necessary for completing a secure digital transaction, Goodwin explained, noting that DHS’s Privacy Office is charged with “embedding and enforcing privacy protections and transparency in all DHS activities.” DHS was brought into the process to review the W3C Verifiable Credentials Data Model and Decentralized Identifiers framework and to advise on potential issues. DHS S&T said in a statement last month that “part of the promise of the W3C standards is the ability to share only the data required for a transaction,” which it sees as “an important step towards putting privacy back in the hands of the people.” “Beyond ensuring global interoperability, standards developed by the W3C undergo wide reviews that ensure that they incorporate security, privacy, accessibility, and internationalization,” said DHS Silicon Valley Innovation Program Managing Director Melissa Oh. “By helping implement these standards in our digital credentialing efforts, S&T, through SVIP, is helping to ensure that the technologies we use make a difference for people in how they secure their digital transactions and protect their privacy.”
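For readers unfamiliar with the format, the sketch below shows roughly what a credential shaped like the W3C Verifiable Credentials Data Model looks like, and the "share only what the transaction needs" idea in miniature. Field names follow the published data model, but all values are invented, the cryptographic `proof` block is omitted, and real selective disclosure depends on the signature scheme used rather than a simple dictionary filter.

```python
# Minimal sketch of a credential shaped like the W3C Verifiable Credentials
# Data Model. Field names follow the published model; all values are invented.
# A real credential also carries a cryptographic `proof` block, omitted here.

credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "DriverLicenseCredential"],
    "issuer": "did:example:state-dmv",
    "credentialSubject": {
        "id": "did:example:alice",
        "name": "Alice Example",
        "birthDate": "1990-01-01",
        "licenseClass": "C",
    },
}

def disclose(cred: dict, fields: list[str]) -> dict:
    """Share only the subject fields a transaction actually needs."""
    subject = cred["credentialSubject"]
    return {k: subject[k] for k in fields if k in subject}

# An age check needs the birth date, not the name or license class.
print(disclose(credential, ["id", "birthDate"]))
```

The point of the standard is that the verifier receives the two disclosed fields with cryptographic assurance from the issuer, while the rest of the credential stays on the holder's device.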


Managing Technical Debt in the Midst of Modernization

Rather than delivering a product and then worrying about technical debt, it is more prudent to measure and address it continuously from the early stages of a project, including requirement and design, not just the coding phase. Project teams should be incentivized to identify improvement areas as part of their day-to-day work and implement the fixes as and when possible. Early detection and remediation can help streamline IT operations, improve efficiencies, and optimize cost. ... Inadequate technical knowledge or limited experience in the latest skills itself leads to technical debt. Enterprises must invest and prioritize continuous learning to keep their talent pool up to date with the latest technologies. A skill-gap analysis helps forecast the need for skills for future initiatives. Teams should be encouraged to upskill in AI, cloud, and other latest technologies, as well as modern design and security standards. This will help enterprises address the technical debt skill-gap effectively. Enterprises can also employ a hub and spoke model, where a central team offers automation and expert guidance while each development team maintains their own applications, systems and related technical debt.


Generative AI Adoption: What’s Fueling the Growth?

The banking, financial services, and insurance (BFSI) sector is another area where generative AI is making a significant impact. In this industry, generative AI enhances customer service, risk management, fraud detection, and regulatory compliance. By automating routine tasks and providing more accurate and timely insights, generative AI helps financial institutions improve efficiency and deliver better services to their customers. For instance, generative AI can be used to create personalized customer experiences by analyzing customer data and predicting their needs. This capability allows banks to offer tailored products and services, improving customer satisfaction and loyalty. ... The life sciences sector stands to benefit enormously from the adoption of generative AI. In this industry, generative AI is used to accelerate drug discovery, facilitate personalized medicine, ensure quality management, and aid in regulatory compliance. By automating and optimizing various processes, generative AI helps life sciences companies bring new treatments to market more quickly and efficiently. For instance, generative AI can draw on masses of biological data to identify a probable medication far faster than conventional methods.


Overcoming Software Testing ‘Alert Fatigue’

Before “shift left” became the norm, developers would write code that quality assurance testing teams would then comb through and identify the initial bugs in the product. Developers were then only tasked with reviewing the proofed end product to ensure it functioned as they initially envisioned. But now, the testing and quality control onus has been put on developers earlier and earlier. An outcome of this dynamic is that developers are becoming increasingly numb to the high volume of bugs they are coming across in the process, and as a result, they are pushing bad code to production. ... Organizations must ensure that vital testing phases are robust and well-defined to mitigate these adverse outcomes. These phases should include comprehensive automated testing, continuous integration (CI) practices, and rigorous manual testing by dedicated QA teams. Developers should focus on unit and integration tests, while QA teams handle system, regression, acceptance, and exploratory testing. This division of labor enables developers to concentrate on writing and refining code while QA specialists ensure the software meets the highest quality standards before production.
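The division of labor is easiest to see in miniature. In this hypothetical sketch, the developer-owned layers are a unit test (one function in isolation) and an integration test (functions working together); everything beyond that, per the article, belongs to QA. The parsing functions are invented purely to show the split.

```python
# Hypothetical miniature of the developer-owned testing layers. The parsing
# functions are invented purely to show the split.

def parse_price(text: str) -> float:
    """Turn a string like ' $4.50 ' into a float."""
    return float(text.strip().lstrip("$"))

def total(prices: list[str]) -> float:
    """Sum a list of price strings."""
    return sum(parse_price(p) for p in prices)

# Unit test: one function in isolation, no collaborators.
assert parse_price(" $4.50 ") == 4.5

# Integration test: the two functions working together.
assert total(["$1.00", "$2.25"]) == 3.25

print("developer-side checks passed")
```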


SSD capacities set to surge as industry eyes 128 TB drives

Maximum SSD capacity is expected to double from the current 61.44 TB by mid-2025, giving us 122 TB and even 128 TB drives, with the prospect of exabyte-capacity racks. Five suppliers have discussed and/or demonstrated prototypes of 100-plus TB capacity SSDs recently. ... Systems with enclosures full of high-capacity SSDs will need to cope with drive failure, and that means RAID or erasure coding schemes. SSD rebuilds take less time than HDD rebuilds, but higher-capacity SSDs take longer. For example, rebuilding a 61.44 TB Solidigm D5-P5336 drive, with a max sequential write bandwidth of 3 GBps, would take approximately 5.7 hours. A 128 TB drive will take 11.85 hours at the same 3 GBps write rate. These are not insubstantial periods. Kioxia has devised an SSD RAID parity compute offload scheme with a parity compute block in the SSD controller and direct memory access to neighboring SSDs to get the rebuild data. This avoids the host server’s processor getting involved in RAID parity compute IO and could accelerate SSD rebuild speed.
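The rebuild figures are straightforward capacity-over-bandwidth arithmetic, reproduced below. Treat the results as an idealized lower bound: real rebuilds are also limited by read sources, parity computation, and host load.

```python
# Reproduce the article's rebuild-time arithmetic: time = capacity / bandwidth.
# Idealized lower bound; real rebuilds add parity and host overhead.

def rebuild_hours(capacity_tb: float, write_gbps: float) -> float:
    """Hours to sequentially rewrite a drive at its max write bandwidth."""
    capacity_gb = capacity_tb * 1000       # TB to GB, decimal units
    return capacity_gb / write_gbps / 3600

print(f"{rebuild_hours(61.44, 3.0):.1f} h")   # ~5.7 h for the 61.44 TB D5-P5336
print(f"{rebuild_hours(128.0, 3.0):.2f} h")   # ~11.85 h for a 128 TB drive
```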


Putting Individuals Back In Charge Of Their Own Identities

Digital identity comprises many signals to ensure it can accurately reflect the real identity of the relevant individual. It includes biometric data, ID data, phone data, and much more. In shareable IDs, these unique features are captured through a combination of AI and biometrics which provide robust protection against forgery and replication, and so provide a high assurance that a person is who they say they are. Importantly, these technologies provide an easy and seamless alternative to other verification processes. For most people, visiting a bank branch to prove their identity with paper documents is no longer convenient, while knowledge-based authentication, like entering your mother’s maiden name, is not viable because data breaches make this information readily available for sale to nefarious actors. It’s no wonder that 76% of consumers find biometrics more convenient, while 80% find it more secure than other options.  ... A shareable identity is a user-controlled identity credential that can be stored on a device and used remotely. Individuals can then simply re-use the same digital ID to gain access to services without waiting in line, offering time-saving convenience for all.


Revolutionizing cloud security with AI

Generative AI can analyze data from various sources, including social media, forums, and the dark web. AI models use this data to predict threat vectors and offer actionable insights. Enhanced threat intelligence systems can help organizations better understand the evolving threat landscape and prepare for potential attacks. Moreover, machine learning algorithms can automate threat detection across cloud environments, increasing the efficiency of incident response times. ... AI-driven automation is becoming helpful in handling repetitive security tasks, allowing human security professionals to focus on more complex challenges. Automation helps streamline and triage alerts, incident response, and vulnerability management. AI algorithms can process incident data faster than human operators, enabling quicker resolution and minimizing potential damage. ... AI models can enforce privacy policies by monitoring data access while ensuring compliance with regulations such as the General Data Protection Regulation in the U.K., or the California Consumer Privacy Act. When bolstered by AI, homomorphic encryption and differential privacy techniques offer ways to analyze data while keeping sensitive information secure and anonymous.
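Of the privacy techniques named, differential privacy is the easiest to sketch: answer an aggregate query with noise calibrated to the query's sensitivity, so no single record can be inferred from the released value. The records, query, and epsilon below are invented for illustration; production systems use vetted libraries rather than hand-rolled noise.

```python
import random

# Toy sketch of the differential-privacy idea: release an aggregate query
# result with Laplace noise calibrated to the query's sensitivity. The
# records, query, and epsilon are invented for illustration only.

random.seed(7)

ages = [34, 29, 41, 52, 38]                    # sensitive records (made up)
true_count = sum(1 for a in ages if a > 30)    # query: how many users over 30?

epsilon = 0.5            # privacy budget; smaller means more noise
scale = 1.0 / epsilon    # a count query has sensitivity 1

# Laplace(0, b) noise sampled as the difference of two exponentials with mean b.
noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
noisy_count = true_count + noise

print(f"true={true_count} released={noisy_count:.2f}")
```

Only the noisy value is released; adding or removing any one record shifts the true count by at most 1, which the noise scale is chosen to mask.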


Are CIOs at the Helm of Leading Generative AI Agenda?

The growing integration of generative AI into corporate technology and information infrastructures is likely to bring a notable shift to the role of CIOs. While many technology leaders are already spearheading gen AI adoption, their role goes beyond technology management. It now includes driving strategic growth and maintaining a competitive edge in an AI-driven landscape. ... The CIO role has evolved significantly over recent decades. Once focused primarily on maintaining system uptime and availability, CIOs now serve as key business enablers. As technology advances rapidly and organizations increasingly rely on IT, the CIO's influence on enterprise success continues to grow. According to the EY survey, CIOs who report directly to the CEO and co-lead the AI agenda are the most effective in driving strategic change. Sixty-three percent of CIOs are leading the gen AI agenda in their organizations, with CEOs close behind at 55%. Eighty-four percent of organizations where the gen AI agenda is co-led by the CIO and CEO achieve or anticipate achieving a 2x return on investment from gen AI, compared to only 56% of organizations where the agenda is led solely by CIOs.


Intel and Karma partner to develop software-defined car architecture

Instead of all those individual black boxes, each with a single job, the new approach is to consolidate the car's various functions into domains, with each domain being controlled by a relatively powerful car computer. These will be linked via Ethernet, usually with a master domain controller overseeing the entire network. We're already starting to see vehicles designed with this approach; the McLaren Artura, Audi Q6 e-tron, and Porsche Macan are all recent examples of software-defined vehicles. Volkswagen Group—which owns Audi and Porsche—is also investing $5 billion in Rivian specifically to develop a new software-defined vehicle architecture for future electric vehicles. In addition to advantages in processing power and weight savings, software-defined vehicles are easier to update over-the-air, a must-have feature since Tesla changed that paradigm. Karma and Intel say their architecture should also have other efficiency benefits. ... Intel is also contributing its power management SoC to get the most out of inverters, DC-DC converters, chargers, and as you might expect, the domain controllers use Intel silicon as well, apparently with some flavor of AI enabled.


Why the next Ashley Madison is just around the corner

Unfortunately, it’s not a matter of ‘if’ another huge data breach will occur – it’s simply a matter of when. Today organisations of all sizes, not just the big players, have a ticking time bomb on their hands with the potential to detonate their brand reputation and destroy customer loyalty. ... Due to a lack of dedicated cybersecurity teams and finite financial resources to allocate to protective measures, small organisations will often prove easier to successfully infiltrate when compared to the average big player. The potential reward from a single attack may be smaller, but hackers can combine successful attacks against multiple SMEs to match the financial gain of successfully hacking a large organisation, and with far less effort. SMEs are therefore increasingly likely to fall victim to financially crippling attacks, with 46% of all cyber breaches now impacting businesses with fewer than 1,000 employees. ... The very first step in any attack chain is always the use of tools to gather intelligence about the victims’ systems, version numbers of unpatched software in use, and insecure configuration or programming. Any hacker, whether a professional or amateur, is using scanning bots or relying on websites like Shodan.io, generating an attack list of victims with vulnerable software.



Quote for the day:

“No one knows how hard you had to fight to become who you are today.” -- Unknown

Daily Tech Digest - March 04, 2024

Evolving Landscape of ISO Standards for GenAI

The burgeoning field of Generative AI (GenAI) presents immense potential for innovation and societal benefit. However, navigating this landscape responsibly requires addressing potential concerns regarding its development and application. Recognizing this need, the International Organization for Standardization (ISO) has embarked on the crucial task of establishing a comprehensive set of standards. ... A shared understanding of fundamental terminology is vital in any field. ISO/IEC 22989 serves as the cornerstone by establishing a common language within the AI community. This foundational standard precisely defines key terms like “artificial intelligence,” “machine learning,” and “deep learning,” ensuring clear communication and fostering collaboration and knowledge sharing among stakeholders. ... Similar to the need for blueprints in construction, ISO/IEC 23053 provides a robust framework for AI development. This standard outlines a generic structure for AI systems based on machine learning (ML) technology. This framework serves as a guide for developers, enabling them to adopt a systematic approach to designing and implementing GenAI solutions. 


Your Face For Sale: Anyone Can Legally Gather & Market Your Facial Data

We need a range of regulations on the collection and modification of facial information. We also need a stricter status of facial information itself. Thankfully, some developments in this area are looking promising. Experts at the University of Technology Sydney have proposed a comprehensive legal framework for regulating the use of facial recognition technology under Australian law. It contains proposals for regulating the first stage of non-consensual activity: the collection of personal information. That may help in the development of new laws. Regarding photo modification using AI, we’ll have to wait for announcements from the newly established government AI expert group working to develop “safe and responsible AI practices”. There are no specific discussions about a higher level of protection for our facial information in general. However, the government’s recent response to the Attorney-General’s Privacy Act review has some promising provisions. The government has agreed further consideration should be given to enhanced risk assessment requirements in the context of facial recognition technology and other uses of biometric information. 


Affective Computing: Scientists Connect Human Emotions With AI

Affective computing is a multidisciplinary field integrating computer science, engineering, psychology, neuroscience, and other related disciplines. A new and comprehensive review on affective computing was recently published in the journal Intelligent Computing. It outlines recent advancements, challenges, and future trends. Affective computing enables machines to perceive, recognize, understand, and respond to human emotions. It has various applications across different sectors, such as education, healthcare, business services and the integration of science and art. Emotional intelligence plays a significant role in human-machine interactions, and affective computing has the potential to significantly enhance these interactions. ... Affective computing, a field that combines technology with the nuanced understanding of human emotions, is experiencing surges in innovation and related ethical considerations. Innovations identified in the review include emotion-generation techniques that enhance the naturalness of human-computer interactions by increasing the realism of the facial expressions and body movements of avatars and robots. 


The open source problem

Over the years, I’ve trended toward permissive, Apache-style licensing, asserting that it’s better for community development. But is that true? It’s hard to argue against the broad community that develops Linux, for example, which is governed by the GPL. Because freedom is baked into the software, it’s harder (though not impossible) to fracture that community by forking the project. To me, this feels critical, and it’s one reason I’m revisiting the importance of software freedom (GPL, copyleft), and not merely developer/user freedom (Apache). If nothing else, as tedious as the internecine bickering was in the early debates between free software and open source (GPL versus Apache), that tension was good for software, generally. It gave project maintainers a choice in a way they really don’t have today because copyleft options disappeared when cloud came along and never recovered. Even corporations, those “evil overlords” as some believe, tended to use free and open source licenses in the pre-cloud world because they were useful. Today companies invent new licenses because the Free Software Foundation and OSI have been living in the past while software charged into the future. Individual and corporate developers lost choice along the way.


Researchers create AI worms that can spread from one system to another

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers has created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research. ... To create the generative AI worm, the researchers turned to a so-called “adversarial self-replicating prompt.” This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies. This is broadly similar to traditional SQL injection and buffer overflow attacks, the researchers say. To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and the open source LLM LLaVA. They then found two ways to exploit the system—by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.
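Structurally, an adversarial self-replicating prompt works like a quine: text whose processing reproduces the text, so whatever consumes the output can propagate it onward. The harmless sketch below shows only that mechanism, with plain string formatting; there is no model and no payload involved.

```python
# The structural trick behind a self-replicating prompt is the same one a
# classic quine uses: text that, when processed, reproduces itself. All of
# this is invented for illustration; no model and no payload are involved.

template = 'template = {!r}\nreplica = template.format(template)'
replica = template.format(template)

# Executing the output regenerates the output: the self-replication property.
ns = {}
exec(replica, ns)
print(ns["replica"] == replica)  # True
```

In the worm scenario the "processor" is an LLM rather than `str.format`, and the replica rides along in emails or images, but the fixed point is the same: output that contains a complete copy of the instructions that produced it.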


Do You Overthink? How to Avoid Analysis Paralysis in Decision Making

Welcome to the world of analysis paralysis. This phenomenon occurs when an influx of information and options leads to overthinking, creating a deadlock in decision-making. Decision makers, driven by the fear of making the wrong choice or seeking the perfect solution, may find themselves caught in a loop of analysis, reevaluation, and hesitation, consequently losing sight of the overall goal. ... Analysis paralysis impacts decision making by stifling risk taking, preventing open dialogue, and constraining innovation—all of which are essential elements for successful technology development. It often leads to mental exhaustion, reduced concentration, and increased stress from endlessly evaluating information, also known as decision fatigue. The implications of analysis paralysis include missed opportunities due to ongoing hesitation and innovative potential being restricted by cautious decision making. ... In the technology sector, the consequences of poor decisions can be far-reaching, potentially unraveling extensive work and achievements. Fear of this happening is heightened due to the sector’s competitive nature. Teams worry that a single misstep could have a cascading negative impact.


30 years of the CISO role – how things have changed since Steve Katz

Katz had no idea what the CISO job was when he accepted it in 1995. Neither did Citicorp. “They said you’ve got a blank cheque, build something great — whatever the heck it is,” Katz recounted during the 2021 podcast. “The CEO said, ‘The board has no idea, just go do something.’” Citicorp gave Katz just two directives after hiring him: “Build the best cybersecurity department in the world” and “go out and spend time with our top international banking customers to limit the damage.” ... today’s CISO must be able to communicate cyber threats in terms that line-of-business leaders can understand almost instantly. “It’s the ability to articulate risk in a way that is related to the business processes in the organization,” says Fitzgerald. “You need to be able to translate what risk means. Does it mean I can’t run business operations? Does it mean we won’t be able to treat patients in our hospital because we had a ransomware attack?” Deaner says CISOs have an obvious role to play in core infosec initiatives such as implementing a business continuity plan or disaster recovery testing. ... “People in CISO circles absolutely talk a lot about liability. We’re all concerned about it,” Deaner acknowledges. “People are taking the changes to those regulations very seriously because they’re there for a reason.”


Vishing, Smishing Thrive in Gap in Enterprise, CSP Security Views

There is a significant gap between enterprises’ high expectations that their communications service provider (CSP) will supply the security needed to protect them against voice and messaging scams and the level of security those CSPs actually offer, according to telecom and cybersecurity software maker Enea. Bad actors and state-sponsored threat groups, armed with the latest generative AI tools, are rushing to exploit that gap. The trend is apparent in the skyrocketing numbers of smishing (text-based phishing) and vishing (voice-based fraud) attacks hitting enterprises, and in the jump across all phishing categories since the November 2022 release of OpenAI’s ChatGPT chatbot, according to a report this week by Enea. ... “Maintaining and enhancing mobile network security is a never-ending challenge for CSPs,” the report’s authors wrote. “Mobile networks are constantly evolving – and continually being threatened by a range of threat actors who may have different objectives, but all of whom can exploit vulnerabilities and execute breaches that impact millions of subscribers and enterprises and can be highly costly to remediate.”


Causal AI: AI Confesses Why It Did What It Did

Traditional AI models are fixed in time and understand nothing. Causal AI is a different animal entirely. “Causal AI is dynamic, whereas comparable tools are static. Causal AI represents how an event impacts the world later. Such a model can be queried to find out how things might work,” says Brent Field at Infosys Consulting. “On the other hand, traditional machine learning models build a static representation of what correlates with what. They tend not to work well when the world changes, something statisticians call nonergodicity,” he says. It’s important to grok why this one point, nonergodicity, makes such a crucial difference to almost everything we do. “Nonergodicity is everywhere. It’s this one reason why money managers generally underperform the S&P 500 index funds. It’s why election polls are often off by many percentage points. ... Without knowing the cause of an event or potential outcome, the knowledge we extract from AI is largely backward-facing even when it is predicting forward. Outputs based on historical data and events alone are by nature handicapped and sometimes useless. Causal AI seeks to remedy that.


Leveraging power quality intelligence to drive data center sustainability

The challenge is that some data centers lack the power monitoring capabilities necessary for achieving heightened efficiency and sustainability. Moreover, continuous power quality monitoring is often missing entirely. Many rely on rudimentary measurements, such as voltage, current, and power parameters, gathered by intelligent rack power distribution units (PDUs) and transmitted to DCIM, BMS, and other infrastructure management and monitoring systems. Some consider power quality only during initial setup, or revisit it occasionally when reconfiguring IT equipment. This underscores the critical role of intelligent PDUs in delivering robust power quality monitoring, and the imperative for data center and facility managers to steer efforts toward increased efficiency and sustainability. Certain power quality issues can degrade the electrical reliability of a data center, leading to costly unplanned downtime and making sustainability gains harder to achieve. ... These power quality issues can profoundly affect a data center's functionality and dependability. They may result in unforeseen downtime, harm to equipment, data loss or corruption, and reduced network efficiency.
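As a hypothetical illustration of the kind of check a DCIM poller could run on PDU telemetry, the sketch below flags voltage sags in a stream of RMS readings. The 90%-of-nominal threshold follows the common IEEE 1159 convention for sags; the data, nominal voltage, and function names are assumptions for the example, not any vendor's API.

```python
# Illustrative sag detector over per-sample RMS voltage readings from a
# rack PDU. Nominal voltage and threshold are assumptions for the example.

NOMINAL_V = 230.0
SAG_THRESHOLD = 0.9 * NOMINAL_V  # readings below 90% of nominal count as a sag

def find_sags(rms_readings):
    """Return (index, value) pairs where RMS voltage dips below the threshold."""
    return [(i, v) for i, v in enumerate(rms_readings) if v < SAG_THRESHOLD]

readings = [230.1, 229.8, 198.5, 201.2, 229.9]  # two sagged samples in the middle
print(find_sags(readings))  # → [(2, 198.5), (3, 201.2)]
```

Continuous checks like this, fed by intelligent PDUs rather than one-off commissioning measurements, are what turn raw voltage/current parameters into actionable power quality events.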



Quote for the day:

"If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work." -- Thomas J. Watson