
Daily Tech Digest - January 20, 2026


Quote for the day:

"The level of morale is a good barometer of how each of your people is experiencing your leadership." -- Danny Cox



The culture you can’t see is running your security operations

Non-observable culture is everything happening inside people’s heads. Their beliefs about cyber risk. Their attitudes toward security. Their values and priorities when security conflicts with convenience or speed. This is where the real decisions get made. You can’t see someone’s belief that “we’re too small to be targeted” or “security is IT’s job, not mine.” You can’t measure their assumption that compliance equals security. You can’t audit their gut feeling that reporting a mistake will hurt their career. But these invisible forces shape every security decision your people make. Non-observable culture includes beliefs about the likelihood and severity of threats. It includes how people weigh security against productivity. It includes their trust in leadership and their willingness to admit mistakes. It includes all the cognitive biases that distort risk perception. ... Implicit culture is the stuff nobody talks about because nobody even realizes it’s there. The unspoken assumptions. The invisible norms. The “way things are done here” that everyone knows but nobody questions. This is the most powerful layer because it operates below conscious awareness. People don’t choose to follow implicit norms. They do. Automatically. Without thinking. Implicit culture includes unspoken beliefs like “security slows us down” or “leadership doesn’t really care about this.” It contains hidden power dynamics that determine who can challenge security decisions and who can’t.


The top 6 project management mistakes — and what to do instead

Project managers are trained to solve project problems. Scope creep. Missed deadlines. Resource bottlenecks. ... Start by helping your teams understand the business context behind the work. What problem are we trying to solve? Why does this project matter to the organization? What outcome are we aiming for? Your teams can’t answer those questions unless you bring them into the strategy conversation. When they understand the business goals, not just the project goals, they can start making decisions differently. Their conversations change to ensure everyone knows why their work matters. ... Right from the start of the project, you need to define not just the business goal but how you’ll measure whether it was successful in business terms. Did the project reduce cost, increase revenue, improve the customer experience? That’s what you and your peers care about, but often that’s not the focus you ask the project people to drive toward. ... People don’t resist because they’re lazy or difficult. They resist because they don’t understand why it’s happening or what it means for them. And no amount of process will fix that. With an accelerated delivery plan designed to drive business value, your project teams can now turn their attention to bringing people with them through the change process. ... To keep people engaged in the project and help it keep accelerating toward business goals, you need purpose-driven communication designed to drive actions and decisions.


AI has static identity verification in its crosshairs. Now what?

Identity models based on “joiner–mover–leaver” workflows and static permission assignments cannot keep pace with the fluid and temporary nature of AI agents. These systems assume identities are created carefully, permissions are assigned deliberately, and changes rarely happen. AI changes all of that. An agent can be created, perform sensitive tasks, and terminate within seconds. If your verification model only checks identity at login, you’re leaving the entire session vulnerable. ... Securing AI-driven enterprises requires a shift similar to what we saw in the move from traditional firewalls to zero-trust architectures. We didn’t eliminate networks; we elevated policy and verification to operate continuously at runtime. Identity verification for AI must follow the same path. This means building a system that can: Assign verifiable identities to every human and machine actor; Evaluate permissions dynamically based on context and intent; Enforce least privilege at high velocity; Verify actions, not just entry points; ... This is why frameworks like SPIFFE and modern workload identity systems are receiving so much attention. They treat identity as a short-lived, cryptographically verifiable construct that can be created, used, and retired in seconds, exactly the model AI agents require. Human activity is becoming the minority as autonomous systems that can act faster than we can are being spun up and terminated before governance can keep up. That’s why identity verification must shift from a checkpoint to a real-time trust engine that evaluates every action from every actor, human or AI.
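The short-lived, verifiable identity model described above can be sketched in a few lines. This is not SPIFFE itself, just a minimal illustration of the idea: identities carry an expiry, and every action (not only login) re-verifies signature and freshness. The signing key, agent IDs, and TTLs below are all hypothetical, and a real system would use asymmetric, attestation-backed credentials rather than a shared HMAC key.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # hypothetical; in practice a rotating key from a trust authority

def issue_identity(agent_id: str, ttl_seconds: int = 30) -> str:
    """Issue a short-lived, signed identity token for an agent."""
    claims = {"sub": agent_id, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_action(token: str) -> bool:
    """Re-verify identity at action time, not just at creation."""
    payload_b64, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload_b64.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return time.time() < claims["exp"]  # expired identities fail closed

token = issue_identity("agent-42", ttl_seconds=5)
assert verify_action(token)  # valid while fresh
```

Because the token expires in seconds, an agent that is spun up, acts, and terminates never holds a standing credential; anything it leaks behind it is already dead.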


AWS European cloud service launch raises questions over sovereignty

AWS established a new legal entity to operate the European Sovereign Cloud under a separate governance and operational model. The new company is incorporated in Germany and run exclusively by EU residents, AWS said. ... “This is the elephant in the room,” said Rene Buest, senior director analyst at Gartner. There are two main concerns regarding the operation of AWS’s European Sovereign Cloud for businesses in Europe. The first relates to the 2018 US Cloud Act, which could require AWS to disclose customer data stored in Europe to the United States, if requested by US authorities. The second involves the possibility of US government sanctions: If a business that uses AWS services is subject to such sanctions, AWS may be compelled to block that company’s access to its cloud services, even if its data and operations are based in Europe. ... It’s an open question at this stage, said Dario Maisto, senior analyst at Forrester. “Cases will have to be tested in court before we can have a definite answer,” he said. “The legal ownership does matter, and this is one of the points that may not be addressed by the current setup of the AWS sovereign cloud.” AWS’s European Sovereign Cloud represents one of several ways that European businesses can approach the challenge of digital sovereignty. Gartner identifies a spectrum that ranges from global hyperscaler public cloud services through to regional cloud services that are based on non-hyperscaler technology.


Why peripheral automation is the missing link in end-to-end digital transformation

While organisations have successfully modernized their digital cores, the “last mile” of business operations often remains fragmented, manual, and surprisingly analogue. This gap is why Peripheral Automation is emerging not merely as a tactical correction but as the critical missing link in achieving true, end-to-end digital transformation. ... Peripheral Automation offers a strategic resolution to this paradox. It’s an architectural philosophy that advocates “differential innovation.” Rather than disrupting stable cores to accommodate fleeting business needs, organisations build agile, tailored applications and workflows that sit on top of the core systems. This approach treats the enterprise as a layered ecosystem. The core remains the single source of truth, but the periphery becomes the “system of engagement”. By leveraging modern low-code platforms and composable architecture, leaders can deploy lightweight, purpose-built automation tools that address specific friction points without altering the underlying infrastructure. ... Peripheral automation reduces process latency, manual effort, and rework. By addressing specific pain points rather than attempting broad, multi-year system redesigns, companies unlock measurable efficiency in weeks. This precision improves throughput, reduces cycle times, and frees teams to focus on high-value work.


How does agentic ops transform IT troubleshooting?

AI Canvas introduces a fundamentally different user experience for network troubleshooting. Rather than navigating through multiple dashboards and CLI interfaces, engineers interact with a dynamic canvas that populates with relevant widgets as troubleshooting progresses. You could say that the ‘canvas’ part of the name AI Canvas is the most important part of it. That is, AI Canvas is actually a blank canvas every time you start troubleshooting. The system fills the canvas with boxes and on-the-fly widgets, among other things, as troubleshooting proceeds. Sampath confirms this: “When you ask a question, it’s using and picking the right types of tools that it can go and execute on a specific task and calls agents to be able to effectively take a task to completion and returns a response back.” The system can spin up monitoring agents that continuously provide updated information, creating a living troubleshooting environment rather than static reports. ... AI Canvas doesn’t exist in isolation. It builds on Cisco’s existing automation foundation. The company previously launched Workflows, a no-code network automation engine, and AI assistants with specific skills for network operations. “All of the automations that are already baked into the workflows, the skills that were built inside of the assistants, now manifest themselves inside of the canvas,” Sampath details. This creates a continuum from deterministic workflows to semi-autonomous assistants to fully autonomous agentic operations.


UK government launches industry 'ambassadors' scheme to champion software security improvements

"By acting as ambassadors, signatories are committing to a process of transparency, development and continuous improvement. The implementation of this code of practice will take time and, in doing so, may bring to light issues that need to be addressed," DSIT said in a statement confirming the announcement. "Signatories and policymakers will learn from these issues as well as the successes and challenges for each organization and, where appropriate, will share information to help develop and strengthen this government policy." ... The Software Security Code of Practice was unveiled by the NCSC in May last year, setting out a series of voluntary principles defining what good software security looks like across the entire software lifecycle. Aimed at technology providers and organizations that develop, sell, or procure software, the code offers best practices for secure design and development, build-environment security, and secure deployment and maintenance. The code also emphasizes the importance of transparent communication with customers on potential security risks and vulnerabilities. ... “The code moves software security beyond narrow compliance and elevates it to a board-level resilience priority. As supply chain attacks continue to grow in scale and impact, a shared baseline is essential and through our global community and expertise, ISC2 is committed to helping professionals build the skills needed to put secure-by-design principles into practice.”


Privacy teams feel the strain as AI, breaches, and budgets collide

Where boards prioritize privacy, AI use appears more frequently and follows defined direction. Larger enterprises, particularly those with broader risk and compliance functions, also report higher uptake. In smaller organizations, or those where privacy has limited visibility at the leadership level, AI adoption remains tentative. Teams that apply privacy principles throughout system development report higher use of AI for privacy tasks. In these environments, AI supports ongoing work rather than introducing new approaches. ... Respondents working in organizations where privacy has active board backing report more consistent use of privacy by design. Budget stability shows a similar pattern, with better-funded teams reporting stronger integration of privacy into design and engineering work. The study also shows that privacy by design on its own does not stop breaches. Organizations that experienced breaches report similar levels of design practice as those that did not. The data places privacy by design mainly in a governance and compliance role, with limited connection to incident prevention. ... Governance shapes how teams view that risk. Professionals in organizations where privacy lacks board priority report higher expectations of a breach in the coming year. Gaps between privacy strategy and broader business goals also appear alongside higher breach expectations, suggesting that structural alignment influences outlook as much as technical controls. Confidence remains common, even among organizations that have experienced breaches.


Cyber Insights 2026: Information Sharing

The sheer volume of cyber threat intelligence being generated today is overwhelming. “Information sharing channels often help condense inputs and highlight genuine signals amid industry noise,” says Caitlin Condon, VP of security research at VulnCheck. “The very nature of cyber threat intelligence demands validation, context, and comparison. Information sharing allows cybersecurity professionals to more rigorously assess rising threats, identify new trends and deviations, and develop technically comprehensive guidance.” ... “The importance of the Cybersecurity Information Sharing Act of 2015 for U.S. national security cannot be overstated,” says Crystal Morin, cybersecurity strategist at Sysdig. “Without legal protections, many legal departments would advise security teams to pull back from sharing threat intelligence, resulting in slower, more cautious processes. ...” CISOs have developed their own closed communities where they can discuss current incidents with other CISOs. This is done via channels such as Slack, WhatsApp and Signal. Security of the channels is a concern, but who better than multiple CISOs to monitor and control security? ... “Much of today’s threat intelligence remains reactive, driven by short-lived IoCs that do little to help agencies anticipate or disrupt cyberattacks,” comments BeyondTrust’s Greene. “We need to modernize our information-sharing framework to emphasize behavior-based analytics enriched with identity-centric context,” he continues.


Edge AI: The future of AI inference is smarter local compute

The bump in edge AI goes hand in hand with a broader shift in focus from AI training, the act of preparing machine learning (ML) models with the right data, to inference, the practice of actively using models to apply knowledge or make predictions in production. “Advancements in powerful, energy-efficient AI processors and the proliferation of IoT (internet of things) devices are also fueling this trend, enabling complex AI models to run directly on edge devices,” says Sumeet Agrawal ... “The primary driver behind the edge AI boom is the critical need for real-time data processing,” says David. The ability to analyze data on the edge, rather than using centralized cloud-based AI workloads, helps direct immediate decisions at the source. Others agree. “Interest in edge AI is experiencing massive growth,” says Informatica’s Agrawal. For him, reduced latency is a key factor, especially in industrial or automotive settings where split-second decisions are critical. There is also the desire to feed ML models personal or proprietary context without sending such data to the cloud. “Privacy is one powerful driver,” says Johann Schleier-Smith ... A smaller footprint for local AI is helpful for edge devices, where resources like processing capacity and bandwidth are constrained. As such, techniques to optimize SLMs will be a key area to aid AI on the edge. One strategy is quantization, a model compression technique that reduces model size and processing requirements. 
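As a rough illustration of the quantization technique mentioned above, here is a minimal symmetric int8 quantizer in plain Python. Real frameworks use calibrated, often per-channel schemes; the weight values here are made up, and the point is only to show why the model shrinks (each value drops from a 32-bit float to an 8-bit integer) at a small, bounded cost in precision.

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats onto integers in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard the all-zero case
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 representation."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.0, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored value is within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

An edge device stores and computes on the small integers `q` plus one float `scale`, roughly a 4x memory saving over float32 before any further compression.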

Daily Tech Digest - January 06, 2026


Quote for the day:

"Our expectation in ourselves must be higher than our expectation in others." -- Victor Manuel Rivera



Data 2026 outlook: The rise of semantic spheres of influence

While data started garnering attention last year, AI and agents continued to suck up the oxygen. Why the urgency of agents? Maybe it’s “fear of missing out.” Or maybe there’s a more rational explanation. According to Amazon Web Services Inc. CEO Matt Garman, agents are the technology that will finally make AI investments pay off. Go to the 12-minute mark in his recent AWS re:Invent conference keynote, and you’ll hear him say just that. But are agents yet ready for prime time? ... And of course, no discussion of agentic interaction with databases is complete without mention of Model Context Protocol. The open-source MCP framework, which Anthropic PBC recently donated to the Linux Foundation, came out of nowhere over the past year to become the de facto standard for how AI models connect with data. ... There were early advances for extending governance to unstructured data, primarily documents. IBM watsonx.governance introduced a capability for curating unstructured data that transforms documents and enriches them by assigning classifications, data classes and business terms to prepare them for retrieval-augmented generation, or RAG. ... But for most organizations lacking deep skills or rigorous enterprise architecture practices, the starting point for defining semantics is going straight to the sources: enterprise applications or, alternatively, the newer breed of data catalogs that are branching out from their original missions of locating and/or providing the points of enforcement for data governance. In most organizations, the solution is not going to be either-or.


Engineering Speed at Scale — Architectural Lessons from Sub-100-ms APIs

Speed shapes perception long before it shapes metrics. Users don’t measure latency with stopwatches - they feel it. The difference between a 120 ms checkout step and an 80 ms one is invisible to the naked eye, yet emotionally it becomes the difference between "smooth" and "slightly annoying". ... In high-throughput platforms, latency amplifies. If a service adds 30 ms in normal conditions, it might add 60 ms during peak load, then 120 ms when a downstream dependency wobbles. Latency doesn’t degrade gracefully; it compounds. ... A helpful way to see this is through a "latency budget". Instead of thinking about performance as a single number - say, "API must respond in under 100 ms" - modern teams break it down across the entire request path: 10 ms at the edge; 5 ms for routing; 30 ms for application logic; 40 ms for data access; and 10–15 ms for network hops and jitter. Each layer is allocated a slice of the total budget. This transforms latency from an abstract target into a concrete architectural constraint. Suddenly, trade-offs become clearer: "If we add feature X in the service layer, what do we remove or optimize so we don’t blow the budget?" These conversations - technical, cultural, and organizational - are where fast systems are born. ... Engineering for low latency is really engineering for predictability. Fast systems aren’t built through micro-optimizations - they’re built through a series of deliberate, layered decisions that minimize uncertainty and keep tail latency under control.
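The latency budget above can be made concrete as a simple check. The layer names and thresholds below are the hypothetical slices from the text; a real system would feed in measured tail percentiles per layer rather than single numbers, but even this toy version turns "under 100 ms" into a per-layer constraint you can alert on.

```python
# Hypothetical per-layer slices of a 100 ms end-to-end budget.
LATENCY_BUDGET_MS = {
    "edge": 10,
    "routing": 5,
    "application_logic": 30,
    "data_access": 40,
    "network_jitter": 15,
}
TOTAL_TARGET_MS = 100

def check_budget(measured: dict) -> list:
    """Return the layers that exceeded their slice of the budget."""
    return [layer for layer, ms in measured.items()
            if ms > LATENCY_BUDGET_MS.get(layer, 0)]

measured = {"edge": 8, "routing": 4, "application_logic": 45,
            "data_access": 38, "network_jitter": 12}

over = check_budget(measured)       # which layer blew its slice
total = sum(measured.values())      # end-to-end sum for the whole request path
```

With these numbers the application layer is over its 30 ms slice, and the request as a whole misses the 100 ms target even though every other layer is under budget, which is exactly the "what do we remove to pay for feature X" conversation the text describes.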


Everything you need to know about FLOPs

A FLOP is a single floating‑point operation, meaning one arithmetic calculation (add, subtract, multiply, or divide) on numbers that have decimals. Compute benchmarking is done in floating point/fractional rather than integer/whole numbers because floating point is a far more accurate measure than integers. A prefix is added to FLOPs to measure how many are performed in a second, starting with mega- (millions), then giga- (billions), tera- (trillions), peta- (quadrillions), and now exa- (quintillions). ... Floating point in computing starts at FP4, or 4 bits of floating point, and doubles all the way to FP64. There is a theoretical FP128, but it is never used as a measure. FP64 is also referred to as double-precision floating-point format, a 64-bit standard under IEEE 754 for representing real numbers with high accuracy. ... With petaFLOPS and exaFLOPS becoming marketing terms, some hardware vendors have been less than scrupulous in disclosing what level of floating-point operation their benchmarks use. It’s not uncommon for a company to promote exascale performance and then say in the fine print that they’re talking about FP8, according to Snell. “It used to be if someone said exaFLOP, you could be pretty confident that they meant exaFLOP according to 64-bit scientific computing, but not anymore, especially in the field of AI, you need to look at what’s going on behind that FLOP,” said Snell.
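The FLOP arithmetic is easy to make concrete: a dense multiply of an m×k matrix by a k×n matrix costs about 2·m·n·k floating-point operations (one multiply and one add per inner-loop step), and achieved throughput is that count divided by elapsed time. A toy sketch follows; it is pure Python, so the measured rate will be many orders of magnitude below any hardware peak, which is the point of always asking what is behind a quoted FLOP number.

```python
import time

def matmul_flops(m: int, n: int, k: int) -> int:
    """FLOP count of a dense (m x k) @ (k x n) multiply: 2*m*n*k."""
    return 2 * m * n * k

def naive_matmul(a, b):
    """Textbook triple-loop matrix multiply over lists of lists."""
    m, k, n = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

n = 64
a = [[1.0] * n for _ in range(n)]
b = [[2.0] * n for _ in range(n)]

start = time.perf_counter()
c = naive_matmul(a, b)
elapsed = time.perf_counter() - start

achieved_gflops = matmul_flops(n, n, n) / elapsed / 1e9  # GFLOP/s actually delivered
```

One exaFLOP per second is 1e18 of these operations per second; the prefix says nothing about whether each operation was FP8 or FP64, which is exactly the fine print the excerpt warns about.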


From SBOM to AI BOM: Rethinking supply chain security for AI native software

An effective AI BOM is not a static document generated at release time. It is a lifecycle artifact that evolves alongside the system. At ingestion, it records dataset sources, classifications, licensing constraints, and approval status. During training or fine-tuning, it captures model lineage, parameter changes, evaluation results, and known limitations. At deployment, it documents inference endpoints, identity and access controls, monitoring hooks, and downstream integrations. Over time, it reflects retraining events, drift signals, and retirement decisions. Crucially, each element is tied to ownership. Someone approved the data. Someone selected the base model. Someone accepted the residual risk. This mirrors how mature organizations already think about code and infrastructure, but extends that discipline to AI components that have historically been treated as experimental or opaque. To move from theory to practice, I encourage teams to treat the AI BOM as a “Digital Bill of Lading,” a chain-of-custody record that travels with the artifact and proves what it is, where it came from, and who approved it. The most resilient operations cryptographically sign every model checkpoint and the hash of every dataset. By enforcing this chain of custody, they’ve transitioned from forensic guessing to surgical precision. When a researcher identifies a bias or security flaw in a specific open-source dataset, an organization with a mature AI BOM can instantly identify every downstream product affected by that “raw material” and act within hours, not weeks.
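The hash-and-sign discipline described here can be sketched as follows. This is a minimal illustration, not a production chain-of-custody system: the key, artifact names, and record fields are all hypothetical, and a real deployment would use asymmetric signatures held in an HSM rather than a shared HMAC key.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"org-release-key"  # hypothetical; in practice an HSM-held signing key

def bom_entry(artifact: str, content: bytes, owner: str) -> dict:
    """Record a chain-of-custody entry: what it is, its hash, who approved it."""
    entry = {
        "artifact": artifact,
        "sha256": hashlib.sha256(content).hexdigest(),
        "approved_by": owner,
        "recorded_at": time.time(),
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return entry

def verify_entry(entry: dict) -> bool:
    """Any edit to any field after signing invalidates the record."""
    unsigned = {k: v for k, v in entry.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(entry["signature"], expected)

entry = bom_entry("training-set-v3.parquet", b"...dataset bytes...", "data-steward@example.com")
assert verify_entry(entry)
```

Given a ledger of such entries, answering "which products consumed this flawed dataset" becomes a lookup over the `sha256` field rather than a forensic investigation.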


Beyond the Firehose: Operationalizing Threat Intelligence for Effective SecOps

Effective operationalization doesn't happen by accident. It requires a structured approach that aligns intelligence gathering with business risks. A framework for operationalizing threat intelligence structures the process from raw data to actionable defence, involving key stages like collection, processing, analysis, and dissemination, often using models like MITRE ATT&CK and Cyber Kill Chain. It transforms generic threat info into relevant insights for your organization by enriching alerts, automating workflows (via SOAR), enabling proactive threat hunting, and integrating intelligence into tools like SIEM/EDR to improve incident response and build a more proactive security posture. ... As intel maturity develops, the framework continuously incorporates feedback mechanisms to refine and adapt to the evolving threat environment. Cross-departmental collaboration is vital, enabling effective information sharing and coordinated response capabilities. The framework also emphasizes contextual integration, allowing organizations to prioritize threats based on their specific impact potential and relevance to critical assets. This ultimately drives more informed security decisions. ... Operationalization should be regarded as an ongoing process rather than a linear progression. If intelligence feeds result in an excessive number of false positives that overwhelm Tier 1 analysts, this indicates a failure in operationalization. It is imperative to institute a formal feedback mechanism from the Security Operations Center to the Intelligence team.


Compliance vs. Creativity: Why Security Needs Both Rule Books and Rebels

One of the most common tensions in the SOC arises from mismatched expectations. Compliance officers focus on control documentation while security teams focus on operational signals. For example, a policy may require multi-factor authentication (MFA), but if the system doesn’t generate alerts on MFA fatigue or unusual login patterns, attackers can slip past controls without detection. It’s also important to remember that just because something’s written in a policy doesn’t mean it’s being protected. A control isn’t a detection. It only matters if it shows up in the data. Security teams need to make sure that every big control, like MFA, logging, or encryption, has a signal that tells them when it’s being misused, misconfigured, or ignored. ... In a modern SOC, competing priorities are expected. Analysts want manageable alert volumes, red teams want room to experiment, and managers need to show compliance is covered. And at the top, CISOs need metrics that make sense to the board. However, high-performing teams aren’t the ones that ignore these differences. They, again, focus on alignment. ... The most effective security programs don’t rely solely on rigid policy or unrestricted innovation. They recognize that compliance offers the framework for repeatable success, while creativity uncovers gaps and adapts to evolving threats. When organizations enable both, they move beyond checklist security.
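The point that "a control isn't a detection" can be made concrete: even with MFA enforced by policy, you still need a signal for MFA fatigue. Below is a minimal sketch of such a detection, with hypothetical thresholds and a made-up event shape; a real SOC would run the equivalent logic as a SIEM rule over authentication logs.

```python
from collections import defaultdict

# Hypothetical detection thresholds: many MFA push prompts
# to one user inside a short window suggests a fatigue attack.
MAX_PROMPTS = 5
WINDOW_SECONDS = 300

def mfa_fatigue_alerts(events):
    """events: iterable of (timestamp_seconds, user) pairs for MFA push prompts.
    Returns the set of users who received more than MAX_PROMPTS
    within any WINDOW_SECONDS span."""
    recent = defaultdict(list)
    alerts = set()
    for ts, user in sorted(events):
        # Keep only prompts still inside the sliding window for this user.
        window = [t for t in recent[user] if ts - t < WINDOW_SECONDS]
        window.append(ts)
        recent[user] = window
        if len(window) > MAX_PROMPTS:
            alerts.add(user)
    return alerts

# Eight prompts to one user in four minutes trips the alert;
# two widely spaced prompts to another user do not.
events = [(i * 30, "alice") for i in range(8)] + [(0, "bob"), (400, "bob")]
assert mfa_fatigue_alerts(events) == {"alice"}
```

The MFA control itself is unchanged; what this adds is the data-plane signal that tells the SOC the control is being abused.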


AI governance through controlled autonomy and guarded freedom

Controlled autonomy in AI governance refers to granting AI systems and their development teams a defined level of independence within clear, pre-established boundaries. The organization sets specific guidelines, standards and checkpoints, allowing AI initiatives to progress without micromanagement but still within a tightly regulated framework. The autonomy is “controlled” in the sense that all activities are subject to oversight, periodic review and strict adherence to organizational policies. ... In practice, controlled autonomy might involve delegated decision-making authority to AI project teams, but with mandatory compliance to risk assessment protocols, ethical guidelines and regulatory requirements. For example, an organization may allow its AI team to choose algorithms and data sources, but require regular reports and audits to ensure transparency and accountability. Automated systems may operate independently, yet their outputs are monitored for biases, errors or security vulnerabilities. ... Deciding between controlled autonomy and guarded freedom in AI governance largely depends on the nature of the enterprise, its industry and the specific risks involved. Controlled autonomy is best suited for sectors where regulatory compliance and risk mitigation are paramount, such as banking, healthcare or government services. ... Both controlled autonomy and guarded freedom offer valuable frameworks for AI governance, each with distinct strengths and potential drawbacks. 


The 20% that drives 80%: Uncovering the secrets of organisational excellence

There are striking universalities in what truly drives impact. The first, which all three prioritise, is the belief that employee experience is inseparable from customer experience. Whether it is called EX = CX or framed differently, the sharp focus on making the workplace purposeful and engaging is foundational. Each business does this in a unique way, but the intent is the same: great employee experience leads to great customer experience. ... The second constant is an unwavering drive for business excellence. This is a nuanced but powerful 20% that shapes 80% of outcomes. Take McDonald’s, for instance: the consistency of quality and service, whether you are in Singapore, India, Japan or the US, is remarkable. Even as we localise, the core excellence remains unchanged. The same is true for Google, where the reliability of Search and breakthroughs in AI define the brand, and for PepsiCo, where high standards across foods and beverages define the brand.  ... The third—and perhaps most challenging—is connectedness. For giants of this scale, fostering deep connections across global, regional and country boundaries, and within and across teams, is crucial. It is about psychological safety, collaboration, and creating space for people to connect and recognise each other. This focus on connectedness enables the other two priorities to flourish. If organisations keep these three at the heart of their practice, they remain agile, resilient, and, as I like to put it, the giants keep dancing.


Turning plain language into firewall rules

A central feature of the design is an intermediate representation that captures firewall policy intent in a vendor-agnostic format. This representation resembles a normalized rule record that includes the five-tuple plus additional metadata such as direction, logging, and scheduling. This layer separates intent from device syntax. Security teams can review the intermediate representation directly, since it reflects the policy request in structured form. Each field remains explicit and machine checkable. After the intermediate representation is built, the rest of the pipeline operates through deterministic logic. The current prototype includes a compiler that translates the representation into Palo Alto PAN-OS command-line configuration. The design supports additional firewall platforms through separate back-end modules. ... A vendor-specific linter applies rules tied to the target firewall platform. In the prototype, this includes checks related to PAN-OS constraints, zone usage, and service definitions. These checks surface warnings that operators can review. A separate safety gate enforces high-level security constraints. This component evaluates whether a policy meets baseline expectations such as defined sources, destinations, zones, and protocols. Policies that fail these checks stop at this stage. After compilation, the system runs the generated configuration through a Batfish-based simulator. The simulator validates syntax and object references against a synthetic device model. Results appear as warnings and errors for inspection.
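An intermediate representation and safety gate of the kind described might look something like the sketch below. The field names and checks are illustrative, not the prototype's actual schema, and vendor compilation is deliberately omitted since real PAN-OS syntax is out of scope here; the point is that every scoping field is explicit and machine-checkable before any device configuration is generated.

```python
from dataclasses import dataclass

@dataclass
class FirewallIntent:
    """Hypothetical vendor-agnostic rule record: the five-tuple plus metadata."""
    src: str
    dst: str
    protocol: str
    src_port: str
    dst_port: str
    src_zone: str = ""
    dst_zone: str = ""
    action: str = "deny"
    logging: bool = True

def safety_gate(rule: FirewallIntent) -> list:
    """Baseline checks before any vendor back end runs: every field that
    scopes the rule must be explicitly defined. Returns a list of problems;
    an empty list means the rule may proceed to compilation."""
    problems = []
    if rule.src in ("", "any") and rule.dst in ("", "any"):
        problems.append("source and destination are both unbounded")
    if not rule.src_zone or not rule.dst_zone:
        problems.append("zones must be defined")
    if rule.protocol not in ("tcp", "udp", "icmp"):
        problems.append(f"unsupported protocol: {rule.protocol!r}")
    return problems

rule = FirewallIntent(src="10.0.0.0/24", dst="10.1.0.5", protocol="tcp",
                      src_port="any", dst_port="443",
                      src_zone="trust", dst_zone="dmz", action="allow")
assert safety_gate(rule) == []  # well-scoped rule passes the gate
```

Because the gate operates on the structured record rather than generated syntax, a reviewer and a program see exactly the same policy intent.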


Why cybersecurity needs to focus more on investigation and less on just detection and response

The real issue? Many of today’s most dangerous threats are the ones that don’t show up easily on detection radars. Think about the advanced persistent threats (APTs) that remain hidden for months or the zero-day attacks that exploit vulnerabilities no one even knew existed. These threats may slip right past the detection systems because they don’t act in obvious ways. That’s why, in these cases, detection alone isn’t enough. It’s just the first step. ... Think of investigation as the part where you understand the full story. It’s like detective work: not just looking at the footprints, but figuring out where they came from, who’s leaving them, and why they’re trying to break in in the first place. You can’t stop a cyberattack with detection alone if you don’t understand what caused it or how it worked. And if you don’t know the cause, you can’t appropriately respond to the detected threat. ... The cost of neglecting investigation goes beyond just missing a threat. It’s about missed opportunities for learning and growth. Every attack offers a lesson. By investigating the full scope of a breach, you gain insights that not only help in responding to that incident but also prepare you to defend against future ones. It’s about building resilience, not just reaction. Think about it: If you never investigate an incident thoroughly, you’re essentially ignoring the underlying risk that allowed the threat to flourish. You might fix the hole that was exploited, but you won’t have a clear understanding of why it was there in the first place. 

Daily Tech Digest - November 20, 2025


Quote for the day:

"Choose your heroes very carefully and then emulate them. You will never be perfect, but you can always be better." -- Warren Buffett



A developer’s guide to avoiding the brambles

Protect against the impossible, because it just might happen. Code has a way of surprising you, and it definitely changes. Right now you might think there is no way that a given integer variable would be less than zero, but you have no idea what some crazed future developer might do. Go ahead and guard against the impossible, and you’ll never have to worry about it becoming possible. ... If you’re ever tempted to reuse a variable within a routine for something completely different, don’t do it. Just declare another variable. If you’re ever tempted to have a function do two things depending on a “flag” that you passed in as a parameter, write two different functions. If you have a switch statement that is going to pick from five different queries for a class to execute, write a class for each query and use a factory to produce the right class for the job. ... Ruthlessly root out the smallest of mistakes. I follow this rule religiously when I code. I don’t allow typos in comments. I don’t allow myself even the smallest of formatting inconsistencies. I remove any unused variables. I don’t allow commented code to remain in the code base. If your language of choice is case-insensitive, refuse to allow inconsistent casing in your code. ... Implicitness increases cognitive load. When code does things implicitly, the developer has to stop and guess what the compiler is going to do. Default variables, hidden conversions, and hidden side effects all make code hard to reason about.
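These rules translate directly into code. The sketch below is illustrative; the function and class names are invented for the example.

```python
import json

# "Write two different functions" instead of one function
# switching on a flag parameter:
def render_json(data: dict) -> str:
    return json.dumps(data)

def render_text(data: dict) -> str:
    return "\n".join(f"{k}: {v}" for k, v in data.items())

# "A class for each query and a factory" instead of a switch statement:
class UserQuery:
    def sql(self) -> str:
        return "SELECT * FROM users"

class OrderQuery:
    def sql(self) -> str:
        return "SELECT * FROM orders"

QUERIES = {"users": UserQuery, "orders": OrderQuery}

def query_factory(kind: str):
    return QUERIES[kind]()  # unknown kinds fail loudly with KeyError

# "Guard against the impossible": count can "never" be <= 0... until it is.
def average(total: int, count: int) -> float:
    if count <= 0:
        raise ValueError(f"count must be positive, got {count}")
    return total / count
```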


SaaS Rolls Forward, Not Backward: Strategies to Prevent Data Loss and Downtime

The SaaS provider owns infrastructure-level redundancy and backups to maintain operational continuity during regional outages or major disruptions. InfoSec and SaaS teams are no longer responsible for infrastructure resilience. Instead, they are responsible for backing up and recovering data and files stored in their SaaS instances. This is significant for two primary reasons. First, the RTO and RPO for SaaS data become dependent on the vendor's capabilities, which are not within the control of the customer. ... A common misconception, even among mature InfoSec teams, is the assumption that SaaS data protection is fully managed by the vendor. This “set it and forget it” mindset, while understandable given the cloud promise, overlooks the need for organizations to back up their SaaS data. Common causes of data loss and corruption are human errors within the customer’s SaaS instance, including accidental deletion, integration issues, and migration mishaps, which fall under the customer’s responsibility. ... InfoSec and SaaS teams must combine their knowledge and experience to ensure that backups contain all necessary data, as well as metadata, which provides the necessary context, and can be restored reliably. SaaS administrators can prevent users from logging in, disable automations, block upstream data from being sent, or restrict data from being sent to downstream systems as needed.
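Because the RPO for SaaS data depends partly on the vendor, a simple customer-side control is to alert when the last successful backup is older than the RPO target. A minimal sketch, with assumed timestamps:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def rpo_breached(last_backup: datetime, rpo: timedelta,
                 now: Optional[datetime] = None) -> bool:
    """True if time since the last successful SaaS backup exceeds the RPO target."""
    now = now or datetime.now(timezone.utc)
    return now - last_backup > rpo

now = datetime(2025, 11, 20, 12, 0, tzinfo=timezone.utc)
ok = now - timedelta(hours=6)    # backed up six hours ago
stale = now - timedelta(days=2)  # backed up two days ago

assert not rpo_breached(ok, timedelta(hours=24), now)
assert rpo_breached(stale, timedelta(hours=24), now)
```

In practice the `last_backup` timestamp would come from the backup tool's API or logs; the check itself is the easy part.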


EU publishes Digital Omnibus leaving AI Act future uncertain

The European Commission unveiled amendments on Wednesday designed to simplify its digital regulatory framework, including the AI Act and data privacy rules, in a bid to boost innovation. The Digital Omnibus package introduces several measures, including delaying the stricter regulation of ‘high-risk’ AI applications until late 2027 and allowing companies to use sensitive data, such as biometrics, for AI training under certain conditions. ... The Digital Omnibus also attempts to adapt rules within privacy regulation, such as the General Data Protection Regulation (GDPR), the e-Privacy Directive and the Data Act. The Commission plans to clarify when data stops being “personal.” This could open the doors for tech companies to include anonymous information from EU citizens into large datasets for training AI, even when they contain sensitive information such as biometric data, as long as they make reasonable efforts to remove it. ... EU member states have also called for postponing the rollout of the AI Act altogether, citing difficulties in defining related technical standards and the need for Europe to stay competitive in the global technological race. “Europe has not so far reaped the full benefits of the digital revolution,” says European economy commissioner Valdis Dombrovskis. “And we cannot afford to pay the price for failing to keep up with demands of the changing world.”


Building Distributed Event-Driven Architectures Across Multi-Cloud Boundaries

The elegant simplicity of "fire an event and forget" becomes a complex orchestration of latency optimization, failure recovery, and data consistency across provider boundaries. Yet, when done right, multi-cloud event-driven architectures offer unprecedented resilience, performance, and business agility. ... Multi-cloud latency isn't just about network speed, it's about the compound effect of architectural decisions across cloud boundaries. Consider a transaction that needs to traverse from on-premise to AWS for risk assessment, then to Azure for analytics processing, and back to on-premise for core banking updates. Each hop introduces latency, but the cumulative effect can transform a sub-100 ms transaction into a multi-second operation. ... Here is an uncomfortable truth: Most resilience strategies focus on the wrong problem. As engineers, we typically put our efforts into handling failures that occur during an outage or when a service component is down. Equally important is how you recover from those failures after the outage is over. This approach to recovery creates systems that "fail fast" but "recover never". ... The combination of event stores, resilient policies, and systematic event replay capabilities creates a distributed system that not only survives failures, but also recovers automatically, which is a critical requirement for multi-cloud architectures. ... While duplicate risk processing merely wastes resources, duplicate financial transactions create regulatory nightmares and audit failures.
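The duplicate-transaction problem the excerpt ends on is commonly handled with idempotent consumers: under at-least-once delivery, event replay will redeliver events, so the consumer must drop any it has already applied. A minimal sketch (the in-memory set stands in for a persistent, expiring store of processed event IDs):

```python
import uuid

class IdempotentConsumer:
    """Drop duplicate deliveries by remembering processed event IDs."""

    def __init__(self):
        self._seen: set[str] = set()
        self.processed: list[dict] = []

    def handle(self, event: dict) -> bool:
        eid = event["event_id"]
        if eid in self._seen:
            return False  # duplicate: replay or retry redelivered it
        self._seen.add(eid)
        self.processed.append(event)  # e.g. post the financial transaction
        return True

payment = {"event_id": str(uuid.uuid4()), "amount": 100}
consumer = IdempotentConsumer()
assert consumer.handle(payment) is True   # first delivery applied
assert consumer.handle(payment) is False  # replay after recovery: ignored
assert len(consumer.processed) == 1
```

This is what lets systematic event replay be safe: duplicate risk processing merely wastes resources, but the consumer guarantees a payment is applied exactly once.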


For AI to succeed in the SOC, CISOs need to remove legacy walls now

“The legacy SOC, as we know it, can't compete. It's turned into a modern-day firefighter," warned CrowdStrike CEO George Kurtz during his keynote at Fal.Con 2025. "The world is entering an arms race for AI superiority as adversaries weaponize AI to accelerate attacks. In the AI era, security comes down to three things: the quality of your data, the speed of your response, and the precision of your enforcement." Enterprise SOCs average 83 security tools across 29 different vendors, each generating isolated data streams that defy easy integration with the latest generation of AI systems. System fragmentation and lack of integration represent AI's greatest vulnerability, and organizations' most fixable problem. The mathematics of tool sprawl proves devastating. Organizations deploying AI across fragmented toolsets report significantly elevated false-positive rates. ... Getting governance right is one of a CISO's most formidable challenges and often includes removing longstanding roadblocks to make sure their organization can connect and make contributions across the business. ... A CISO's transformation from security gatekeeper to business enabler and strategist is the single best step any security professional can take in their career. CISOs often remark in interviews that the transition from being an app and data disciplinarian to an enabler of new growth, with the ultimate goal of showing how their teams help drive revenue, was the catalyst their careers needed.


Selling to the CISO: An open letter to the cybersecurity industry

Vendors think they’re selling technology. They’re not. They’re trying to sell confidence to people whose jobs depend on managing the impossible. As a CISO, I buy because I’m trying to reduce the odds that something catastrophic happens on my watch. Every decision is a gamble. There is no “safe” option in this field. I buy to reduce personal and organizational risk, knowing there’s no such thing as perfect protection. Cybersecurity is not a puzzle you solve. It’s a game you play — and it never ends. You make the best moves you can, knowing you’ll never win. Even if I somehow patched every system and closed every gap, the cost of perfection would cripple the company. ... The truth is that most organizations don’t need more tools. They need to get the fundamentals right. If you can patch consistently, maintain good access controls, and segment your networks so you aren’t running flat, you’re ahead of most of the market — no shiny tools required. Strong patching alone will eliminate most of the attack surface that vendors keep promising to “detect.” ... We can’t blame vendors alone. We created the market they’re serving. We bought into the illusion that innovation equals progress. We ignored the fundamentals because they’re hard and unglamorous. We filled our environments with products we couldn’t fully use and called it maturity. We built complexity and called it strategy. Then we act shocked when the same root causes keep taking us down. Good security still starts with good IT. Always has. Always will. If you don’t know what you own, you can’t protect it.


When IT fails, OT pays the price

Criminal groups are now demonstrating a better understanding of industrial dependencies. The Qilin group carried out 63 confirmed attacks against industrial entities since mid-2024 and has focused on energy distribution and water utilities. Their use of Windows and Linux payloads gives them wider reach inside mixed environments. Several incidents involved encryption of shared engineering resources and historian systems, which caused operational delays even when controllers remained untouched. ... Across intrusions, attackers favored techniques that exploit weak segmentation. PowerShell activity made up the largest share of detections, followed by Cobalt Strike. The findings show that adversaries rarely need ICS-specific exploits at the start of an attack. They rely on stolen accounts, remote access tools, and administrative shares to move toward engineering assets. ... The vulnerability data reinforces the emphasis on the boundary between enterprise systems and industrial systems. Attackers continued to exploit Cisco ASA and FTD devices, including attacks that modified device firmware. Several critical flaws in SAP NetWeaver and other manufacturing operations software were also exploited, which created direct pivot points into factory workflows. Recent disclosures affecting Rockwell ControlLogix and GuardLogix platforms allow remote code execution or force the controller into a failed state. Attacks on these devices pose immediate availability and safety risks.


India has the building blocks to influence global standards in AI infrastructure

The convergence of cloud, edge, and connectivity represents the foundation of India’s next AI leap. In a country as geographically and economically diverse as India, AI workloads can’t depend solely on centralized cloud resources. Edge computing allows us to bring compute closer to the source of data, be it in a factory, retail store, or farm, which reduces latency, lowers costs, and enhances privacy. Cloud provides elasticity and scalability, while secure connectivity ensures that both environments communicate seamlessly. This triad enables an AI model to be trained in the cloud, refined at the edge, and deployed securely across networks, unlocking innovation in every geography. We have been building this connected fabric to ensure that access to compute and intelligence isn’t limited by location or scale. ... We see this evolution already unfolding. AI-as-a-Service will thrive when infrastructure, connectivity, and platforms converge under a single, interoperable framework. Each stakeholder (telecoms, data centres, and hyperscalers) brings a unique value: scale, proximity, and reach. ... India is already shaping global conversations around digital equity and secure connectivity, and the same potential exists in AI infrastructure. In the next five years, India could stand out not for the size of its compute capacity but for how effectively it builds an inclusive digital foundation, one that blends cloud, edge, data governance, and innovation seamlessly.


How to Overcome Latency in Your Cyber Career

The presence of latency is not an indictment of your ability. It's a signal that something in your system needs attention. Identifying what creates latency in your professional life and learning how to address it are essential components of long-term growth. With a diagnostic mindset and a willingness to optimize, you can restore throughput and move forward with purpose. ... Career latency often appears when your knowledge no longer reflects current industry expectations. Even highly capable professionals experience slowdown when their technical foundation lags behind evolving practices. ... Unclear goals create misalignment between where you invest your time and where you want to progress. Without a defined direction, you may be working hard but not moving in a way that supports advancement. ... Professionals often operate under heavy workloads that dilute productivity. Too many competing responsibilities, constant context switching or tasks disconnected from your goals can limit your effectiveness and delay growth. ... Career progress can slow when your professional network lacks the signal strength needed to route opportunities in your direction. Without mentorship, community or visibility, growth becomes harder to sustain. ... Missed opportunities often stem from limited readiness. Preparation, bandwidth or timing may be misaligned, and promising chances can disappear before you can act.


Why IT-SecOps Convergence is Non-Negotiable

The message is clear: siloed operations are no longer just inefficient—they’re a security liability. ... The first, and often the most difficult step toward achieving true IT-SecOps convergence, is cultural. For years, IT and security teams have operated in silos, essentially functioning as two different businesses. ... On paper, these Key Performance Indicators (KPIs) appear aligned—both measure speed and efficiency. But in practice, they reflect different views: one is laser-focused on minimizing risk, the other on maximizing uptime. ... The real opportunity lies in establishing a shared mandate. Both teams need to understand that their goals are two sides of the same coin: you can’t have productive systems that aren’t secure, and security that breaks the system isn’t sustainable; therefore, convergence begins not with tools, but with alignment of intent. Once this clicks, both teams begin working from a common set of goals, shared KPIs, and joint decision frameworks. ... The strongest security posture doesn’t come from piling on more tools. It comes from creating continuous alignment between management, security, and user experience. When those three functions operate in sync, IT doesn’t deploy technology that security can’t enforce, security doesn’t introduce controls that slow down work, and users don’t feel the need to bypass policies with shadow apps or risky shortcuts. ... When a unified structure is implemented, policies can be deployed instantly, validated automatically, and adjusted based on real user impact—all without waiting for separate teams to sync.

Daily Tech Digest - January 29, 2025


Quote for the day:

"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer


Evil Models and Exploits: When AI Becomes the Attacker

A more structured threat emerges with technologies like the Model Context Protocol (MCP). Originally introduced by Anthropic, MCP gives large language models (LLMs) a standardized way to interact with host machines and external tools. This enables LLMs to perform sophisticated operations by controlling local resources and services. While MCP is being embraced by developers for legitimate use cases, such as automation and integration, its darker implications are clear. An MCP-enabled system could orchestrate a range of malicious activities with ease. Think of it as an AI-powered operator capable of executing everything from reconnaissance to exploitation. ... The proliferation of AI models is both a blessing and a curse. Platforms like Hugging Face host over a million models, ranging from state-of-the-art neural networks to poorly designed or maliciously altered versions. Amid this abundance lies a growing concern: model provenance. Imagine a widely used model, fine-tuned by a seemingly reputable maintainer, turning out to be a tool of a state actor. Subtle modifications in the training data set or architecture could embed biases, vulnerabilities or backdoors. These “evil models” could then be distributed as trusted resources, only to be weaponized later. This risk underscores the need for robust mechanisms to verify the origins and integrity of AI models.


The tipping point for Generative AI in banking

Advancements in AI are allowing banks and other fintechs to embed the technology across their entire value chain. For example, TBC is leveraging AI to make 42% of all payment reminder calls to customers with loans that are up to 30 days or less overdue and is getting ready to launch other AI-enabled solutions. Customers normally cannot differentiate the AI calls powered by our tech from calls by humans, even as the AI calls are ten times more efficient for TBC’s bottom line, compared with human operator calls. Klarna rolled out an AI assistant, which handled 2.3 million conversations in its first month of operation, which accounts for two-thirds of Klarna’s customer service chats or the workload of 700 full-time agents, the company estimated. Deutsche Bank leverages generative AI for software creation and managing adverse media, while the European neobank Bunq applies it to detect fraud. Even smaller regional players, provided they have the right tech talent in place, will soon be able to deploy Gen AI at scale and incorporate the latest innovations into their operations. Next year is set to be a watershed year when this step change will create a clear division in the banking sector between AI-enabled champions and other players that will soon start lagging behind. 


Want to be an effective cybersecurity leader? Learn to excel at change management

Security should never be an afterthought; the change management process shouldn’t be, either, says Michael Monday, a managing director in the security and privacy practice at global consulting firm Protiviti. “The change management process should start early, before changing out the technology or process,” he says. “There should be some messages going out to those who are going to be impacted letting them know, [otherwise] users will be surprised, they won’t know what’s going on, business will push back and there will be confusion.” ... “It’s often the CISO who now has to push these new things,” says Moyle, a former CISO, founding partner of the firm SecurityCurve, and a member of the Emerging Trends Working Group with the professional association ISACA. In his experience, Moyle says he has seen some workers more willing to change than others and learned to enlist those workers as allies to help him achieve his goals. ... When it comes to the people portion, she tells CISOs to “feed supporters and manage detractors.” As for process, “identify the key players for the security program and understand their perspective. There are influencers, budget holders, visionaries, and other stakeholders — each of which needs to be heard, and persuaded, especially if they’re a detractor.”


Preparing financial institutions for the next generation of cyber threats

Collaboration between financial institutions, government agencies, and other sectors is crucial in combating next-generation threats. This cooperative approach enhances the ability to detect, respond to, and mitigate sophisticated threats more effectively. Visa regularly works with international agencies of all sizes to bring cybercriminals to justice. In fact, Visa regularly works alongside law enforcement, including the US Department of Justice, FBI, Secret Service and Europol, to help identify and apprehend fraudsters and other criminals. Visa uses its AI and ML capabilities to identify patterns of fraud and cybercrime and works with law enforcement to find these bad actors and bring them to justice. ... Financial institutions face distinct vulnerabilities compared to other industries, particularly due to their role in critical infrastructure and financial ecosystems. As high-value targets, they manage large sums of money and sensitive information, making them prime targets for cybercriminals. Their operations involve complex and interconnected systems, often including legacy technologies and numerous third-party vendors, which can create security gaps. Regulatory and compliance challenges add another layer of complexity, requiring stringent data protection measures to avoid hefty fines and maintain customer trust.


Looking back to look ahead: from Deepfakes to DeepSeek what lies ahead in 2025

Enterprises increasingly turned to AI-native security solutions, employing continuous multi-factor authentication and identity verification tools. These technologies monitor behavioral patterns or other physical world signals to prove identity — innovations that can now help prevent incidents like the North Korean hiring scheme. However, hackers may now gain another inside route to enterprise security. The new breed of unregulated and offshore LLMs like DeepSeek creates new opportunities for attackers. In particular, using DeepSeek’s AI model gives attackers a powerful tool to better discover and take advantage of the cyber vulnerabilities of any organization. ... Deepfake technology continues to blur the lines between reality and fiction. ... Organizations must combat the increasing complexity of identity fraud, hackers, cyber security thieves, and data center poachers each year. In addition to all of the threats mentioned above, 2025 will bring an increasing need to address IoT and OT security issues, data protection in the third-party cloud and AI infrastructure, and the use of AI agents in the SOC. To help thwart this year’s cyber threats, CISOs and CTOs must work together, communicate often, and identify areas to minimize risks for deepfake fraud across identity, brand protection, and employee verification.


The Product Model and Agile

First, the product model is not new; it’s been out there for more than 20 years. So I have never argued that the product model is “the next new thing,” as I think that’s not true. Strong product companies have been following the product model for decades, but most companies around the world have only recently been exposed to this model, which is why so many people think of it as new. Second, while I know this irritates many people, today there are very different definitions of what it even means to be “Agile.” Some people consider SAFe as Agile. If that’s what you consider Agile, then I would say that Agile plays no part in the product model, as SAFe is pretty much the antithesis of the product model. This difference is often characterized today as “fake Agile” versus “real Agile.” And to be clear, if you’re running XP, or Kanban, or Scrum, or even none of the Agile ceremonies, yet you are consistently doing continuous deployment, then at least as far as I’m concerned, you’re running “real Agile.” Third, we should separate the principles of Agile from the various, mostly project management, processes that have been set up around those principles. ... Finally, it’s also important to point out that there is one Agile principle that might be good enough for custom or contract software work, but is not sufficient for commercial product work. This is the principle that “working software is the primary measure of progress.”


Next Generation Observability: An Architectural Introduction

It's always a challenge when creating architectural content, trying to capture real-world stories into a generic enough format to be useful without revealing any organization's confidential implementation details. We are basing these architectures on common customer adoption patterns. That's very different from most of the traditional marketing activities usually associated with generating content for the sole purpose of positioning products for solutions. When you're basing the content on actual execution in solution delivery, you're cutting out the marketing chaff. This observability architecture provides us with a way to map a solution using open-source technologies focusing on the integrations, structures, and interactions that have proven to work at scale. Where those might fail us at scale, we will provide other options. What's not included are vendor stories, which are normal in most marketing content. Those stories that, when it gets down to implementation crunch time, might not fully deliver on their promises. Let's look at the next-generation observability architecture and explore its value in helping our solution designs. The first step is always to clearly define what we are focusing on when we talk about the next-generation observability architecture.


AI SOC Analysts: Propelling SecOps into the future

Traditional, manual SOC processes already struggling to keep pace with existing threats are far outpaced by automated, AI-powered attacks. Adversaries are using AI to launch sophisticated and targeted attacks putting additional pressure on SOC teams. To defend effectively, organizations need AI solutions that can rapidly sort signals from noise and respond in real time. AI-generated phishing emails are now so realistic that users are more likely to engage with them, leaving analysts to untangle the aftermath—deciphering user actions and gauging exposure risk, often with incomplete context. ... The future of security operations lies in seamless collaboration between human expertise and AI efficiency. This synergy doesn't replace analysts but enhances their capabilities, enabling teams to operate more strategically. As threats grow in complexity and volume, this partnership ensures SOCs can stay agile, proactive, and effective. ... Triaging and investigating alerts has long been a manual, time-consuming process that strains SOC teams and increases risk. Prophet Security changes that. By leveraging cutting-edge AI, large language models, and advanced agent-based architectures, Prophet AI SOC Analyst automatically triages and investigates every alert with unmatched speed and accuracy.


Apple researchers reveal the secret sauce behind DeepSeek AI

The ability to use only some of the total parameters of a large language model and shut off the rest is an example of sparsity. That sparsity can have a major impact on how big or small the computing budget is for an AI model. AI researchers at Apple, in a report out last week, explain nicely how DeepSeek and similar approaches use sparsity to get better results for a given amount of computing power. Apple has no connection to DeepSeek, but Apple does its own AI research on a regular basis, and so the developments of outside companies such as DeepSeek are part of Apple's continued involvement in the AI research field, broadly speaking. In the paper, titled "Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models," posted on the arXiv pre-print server, lead author Samir Abnar of Apple and other Apple researchers, along with collaborator Harshay Shah of MIT, studied how performance varied as they exploited sparsity by turning off parts of the neural net. ... Abnar and team ask whether there's an "optimal" level for sparsity in DeepSeek and similar models, meaning, for a given amount of computing power, is there an optimal number of those neural weights to turn on or off? It turns out you can fully quantify sparsity as the percentage of all the neural weights you can shut down, with that percentage approaching but never equaling 100% of the neural net being "inactive."
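The sparsity measure described in the excerpt is simple arithmetic: the fraction of all neural weights left inactive for a given token. A sketch with made-up mixture-of-experts numbers (these are not DeepSeek's actual figures):

```python
def sparsity(total_params: int, active_params: int) -> float:
    """Sparsity = fraction of weights inactive for a given token."""
    return 1 - active_params / total_params

# Illustrative mixture-of-experts arithmetic (assumed numbers):
# 64 experts of 1B parameters each, a router activating 4 per token,
# plus 2B of always-on shared parameters.
expert, n_experts, k, shared = 1_000_000_000, 64, 4, 2_000_000_000
total = n_experts * expert + shared   # 66B parameters stored
active = k * expert + shared          # 6B parameters used per token

print(f"{sparsity(total, active):.1%} of weights inactive per token")  # → 90.9%
```

The "optimal sparsity" question in the paper is then: for a fixed compute budget, what value of this fraction gives the best quality, given that it can approach but never equal 100%?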


What Data Literacy Looks Like in 2025

“The foundation of data literacy lies in having a basic understanding of data. Non-technical people need to master the basic concepts, terms, and types of data, and understand how data is collected and processed,” says Li. “Meanwhile, data literacy should also include familiarity with data analysis tools. ... “Organizations should also avoid the misconception that fostering GenAI literacy alone will help developing GenAI solutions. For this, companies need even greater investments in expert AI talent -- data scientists, machine learning engineers, data engineers, developers and AI engineers,” says Carlsson. “While GenAI literacy empowers individuals across the workforce, building transformative AI capabilities requires skilled teams to design, fine-tune and operationalize these solutions. Companies must address both.” ... “Data literacy in 2025 can’t just be about enabling employees to work with data. It needs to be about empowering them to drive real business value,” says Jain. “That’s how organizations will turn data into dollars and ensure their investments in technology and training actually pay off.” ... “Organizations can embed data literacy into daily operations and culture by making data-driven thinking a core part of every role,” says Choudhary.

Daily Tech Digest - April 12, 2024

Architecture is about tradeoffs. It is about spending money on one thing over another. It is about decisions. So when you tell me to develop productivity, I think that is a great measure. But I also start wondering about quality. About satisfaction. Is that productivity a measure of one person? Every person? What toolset did they use? The same goes with content generation. An AI image is neat at first, but do we get tired of them? How do I measure the value of human-created work? Is there profit in that? Order management, electricity use, all of these measures are valuable. So when you hear about an AI business case… do you have a business case? Are the benefits REAL? ... Everything comes with pros/cons and we need a system in place to handle this change rate. This is true of all major human endeavors. Think of child workers during industrialization. Or the horrible cost to humanity of the intensity of urbanization and how it has endangered our planet. Only now are we coming to grips with all of that structural complexity. And even that is going to require decades more commitment. Technology and, specifically, AI is no different. 


The Pitfalls of Periodic Penetration Testing & What to Do Instead

While periodic penetration testing can provide a snapshot of your organization’s security posture, it often fails to account for the dynamic nature of cyber threats. Organizations must continuously test their security measures to identify and neutralize emerging threats in real time. Organizations can leverage various approaches and tools to implement continuous cybersecurity testing, such as the Atomic Red Team by Red Canary, an open-source library of tests mapped to the MITRE ATT&CK framework that security teams can use to simulate adversarial activity and validate their defenses. These tools can help prioritize and mitigate potential cyber-attacks by automating security testing and providing valuable insights into adversary tactics and techniques. Endpoint security testing and firewall testing are excellent starting points for implementing continuous cybersecurity testing. By simulating phishing emails, running PowerShell commands at endpoints, and monitoring VPN logins at the firewall level, organizations can proactively identify potential vulnerabilities and mitigate them before cyber attackers can exploit them. 
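A continuous-testing harness can be as simple as a scheduled loop of detection checks keyed by ATT&CK technique IDs. The sketch below is illustrative only; the stub checks and their results are placeholders, not Atomic Red Team's actual API.

```python
from datetime import datetime, timezone

# Placeholder detective checks. Each returns True if the corresponding
# control detected the (benign) simulated activity.
def simulate_phishing_detection() -> bool:
    return True   # stub: send a test phishing email, confirm the gateway flags it

def simulate_powershell_detection() -> bool:
    return False  # stub: run a benign encoded command, confirm EDR alerts

# Checks keyed by MITRE ATT&CK technique IDs.
CHECKS = {
    "T1566": simulate_phishing_detection,        # Phishing
    "T1059.001": simulate_powershell_detection,  # PowerShell
}

def run_validation() -> dict[str, bool]:
    """One pass of continuous testing; schedule this instead of an annual pentest."""
    results = {tid: check() for tid, check in CHECKS.items()}
    ts = datetime.now(timezone.utc).isoformat()
    for tid, detected in results.items():
        print(f"{ts} {tid}: {'detected' if detected else 'MISSED'}")
    return results

results = run_validation()
```

A "MISSED" result is the whole point of continuous testing: it surfaces a detection gap on the day it appears, not at the next annual engagement.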


Generative AI Sucks: Meta’s Chief AI Scientist Calls For A Shift To Objective-Driven AI

Unlike current AI, which excels in narrow domains without grasping causality, objective-driven AI would be capable of causal reasoning and understanding the relationships between actions and outcomes. This shift would allow AI to plan and adapt strategies in real time, grounded in a nuanced comprehension of the physical and social world. Objective-driven AI is not just an incremental improvement but a leap toward machines that can truly collaborate with humans, offering insights, generating solutions, and understanding the broader impact of their actions. This vision represents a significant shift towards creating AI that can navigate the complexity of the real world with intelligence and purpose. ... Despite these challenges, LeCun is optimistic about the future, firmly believing that AI will eventually surpass human intelligence across all domains. This conviction is not grounded in wishful thinking but in a clear-eyed assessment of technological progress and the potential for groundbreaking scientific discoveries. However, LeCun also emphasizes that this evolution will not happen overnight or without a radical rethinking of our current approaches to AI development.


Strategies to cultivate collaboration between NetOps and SecOps

Collaborative culture starts at the top. The leaders of these teams need to collaborate and communicate consistently. They cannot have a turf war over each team’s roles and must understand each team’s responsibilities. Whether it’s shadowing a member of the other team for a day or taking opportunities to get to know other teams outside of work, establishing a collaborative culture is an important long-term investment for mutual success. ... AI and automation will blur the lines between these two teams, as projects focused on these elements are ones that can be tackled together. For example, having your vulnerability management tool automatically open tickets for other IT teams can create a feeling that the security team is dumping vulnerabilities over the wall.  ... The SecOps team tends to secure the budget because it frames risk to the company: if a project is done, how does it reduce risk, and if it is not done, what risk does the company retain? Automation and AI tools use network traffic (packet data) to create workflows, and that same data can be fed into large language models. Both teams can use such an LLM to solve network and security issues.
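One way to automate ticketing without "dumping vulnerabilities over the wall" is to triage and enrich findings before they reach the other team. The sketch below is hypothetical: the finding fields, the CVSS threshold, and the ticket payload format are assumptions, not any particular scanner's or ticketing system's API.

```python
def triage_findings(findings, min_severity=7.0):
    """Turn high-severity scanner findings into enriched ticket payloads."""
    tickets = []
    for f in findings:
        if f["cvss"] < min_severity:
            continue  # below threshold: track internally, don't open a ticket
        tickets.append({
            "title": f"{f['cve']} on {f['host']}",
            "severity": f["cvss"],
            "remediation": f.get("fix", "see vendor advisory"),
        })
    return tickets

findings = [
    {"cve": "CVE-2024-0001", "host": "web01", "cvss": 9.8, "fix": "patch openssl"},
    {"cve": "CVE-2024-0002", "host": "db01", "cvss": 4.3},
]
tickets = triage_findings(findings)
# -> one ticket, for CVE-2024-0001 on web01, with a concrete remediation step
```

The design point is collaborative: the threshold and the remediation guidance give NetOps actionable, filtered work items instead of raw scanner output.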


Down with Detection Obsession: Proactive Security in 2024

Now, as boards of directors and C-suites are expected to be more security savvy, they are asking important risk questions of their CISOs: Given all this spending on finding our problems, are we secure? Are we better off than we were a year ago, or two years ago, or three years ago? And few security executives can answer those questions with comfort, because historically they were not focused on addressing risk; they were focused on discovering it. As time goes on and the security leader’s role becomes more business-centric, the benefits of taking a more proactive approach to security will continue to grow and shine. Consider, for example, the role of vulnerability management in reducing risk, achieving regulatory compliance, and saving costs. By actively seeking and addressing vulnerabilities, organizations can significantly reduce their overall attack surface, minimizing their chances of security breaches, data leaks and more. Many industries, like health care and financial services, have strict regulations governing the protection of sensitive data.


Agile development can unlock the power of generative AI - here's how

"The beauty of Agile is you see the fruits of your work quicker. You get feedback. And that's true with innovation generally -- the faster you can speed up cycle times, the better." Hakan Yaren, CIO at APL Logistics, told ZDNET that another benefit of Agile is that it's well-suited to the modern digital environment. Analyst firm Gartner suggested that 80% of technology products and services this year will be built by people who are not technology professionals. Yaren said Agile -- with its focus on joined-up thinking and cross-business approaches -- is a good fit for the decentralized nature of modern IT. "With AI and cloud, the barriers to entry are becoming lower and people in the business are making IT decisions," he said. "Agile is the right methodology to deal with many of these processes because of the speed of change." However, Yaren has a warning for IT professionals: The complexities you face could increase as more line-of-business employees test emerging technologies. "Trying to connect these solutions, and making sure they're secure, reliable, and you can connect the dots across them, is becoming even more challenging," he said.


The benefits of leveraging hybrid cloud automation

To optimise hybrid cloud architecture, most experts endorse automation, given its flexibility, simplicity and scalability. They believe automation is necessary to draw some of the benefits of the cloud back into the on-premises systems and the hybrid architecture. Automation can ensure a more seamless way for end-users to requisition an organisation’s services, regardless of its location. As more applications move into hybrid and multi-cloud environments, companies can explore several ways to automate manual processes taking place in the cloud. Crucial cloud automation aspects cover deployment, provisioning, compliance, configuration management, scaling, and more. Hybrid cloud automation examples include establishing a network in the cloud and configuring cloud servers. Cloud automation can also be used for managing server capacity, spinning up new environments and resources, configuring software and systems, rolling out software configurations whenever required, taking systems online and offline as needed to balance the load, scaling across data centres, and moving into a public cloud environment when handling front-end web services or high workloads that are on- or off-premises.
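One of the automation tasks listed above, taking systems online and offline to balance load, reduces to a small capacity-planning calculation. The sketch below is illustrative only: the node model, the target utilization, and the minimum-node floor are assumptions, and a real setup would feed the result to a cloud provider or orchestration API rather than just computing a number.

```python
def plan_capacity(current_nodes, cpu_utilization, target=0.6, min_nodes=2):
    """Return the node count needed to bring average CPU near the target.

    Scaling the fleet by utilization/target keeps total load constant while
    moving per-node utilization toward the target; min_nodes is a safety floor.
    """
    desired = round(current_nodes * cpu_utilization / target)
    return max(min_nodes, desired)

# At 90% utilization on 4 nodes, scale out to 6; at 20%, scale in to the floor.
print(plan_capacity(4, 0.9))  # -> 6
print(plan_capacity(4, 0.2))  # -> 2
```

In a hybrid environment the same decision logic can drive both directions: scale out into the public cloud for bursty front-end workloads, and scale back in to on-premises capacity when the load subsides.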


Why strategists should embrace imperfection

We’re seeing paralysis as people wait for some kind of equilibrium or stasis to reemerge. Or they get nervous and leap before they look, whether it’s an acquisition or some other move. We wanted to lay out a different path that involves confidently stepping into risk by using a set of six mindsets that we put under the broad heading of imperfectionism. Imperfectionism sounds like a bad thing, but what we mean is accepting the ambiguity of not having perfect knowledge before making strategic moves. ... The kind of uncertainty that we face today really is twofold. One is the type we see in the newspaper, which is economic uncertainty, external shocks like the war in Ukraine. But there’s a much more fundamental kind of uncertainty we face now, which is very rapid technological change. Artificial intelligence, automation, programmable biology, and other disruptions are blurring industry boundaries and what it means to be a competitor in a particular industry. We’re also seeing the rise of supercompetitors like Apple, Amazon, and Google, which can operate across many industry spaces. 


What the American Privacy Rights Act Could Mean for Data Privacy

For companies that collect and monetize consumer data, the APRA could mean making changes to the way they do business. The APRA sets out requirements for issues like data minimization, transparency, consumer choice and rights, data protection, and executive responsibility. “It basically means that now they’re going to be able to collect less data: good for consumers and not so good if you're a company that needs all that data,” Antonio Sanchez, principal cybersecurity evangelist at Fortra, a cybersecurity and automation software company, tells InformationWeek. The draft legislation drills down to data privacy at an operational level. For example, it requires covered entities to appoint a privacy or data security officer or officers. “There is a real sense that a significant part of managing a modern privacy program is not found in the rules themselves but in the operation that gives life to those rules,” says Hughes. If the APRA goes into effect, covered entities will have 180 days to comply with its requirements. Non-compliance after that timeline could be met with enforcement action. 
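The data-minimization requirement described above can be operationalized as a purpose-based field filter: a record is stripped down to only the fields necessary for a declared processing purpose. This is a hypothetical sketch, not language from the APRA draft; the purposes, field names, and allow-lists are invented for illustration.

```python
# Hypothetical allow-lists: which fields each declared purpose may use.
ALLOWED_FIELDS = {
    "billing": {"name", "email", "card_last4"},
    "analytics": {"region", "plan"},
}

def minimize(record, purpose):
    """Return only the fields of `record` permitted for the given purpose."""
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "name": "Ada",
    "email": "ada@example.com",
    "ssn": "...",
    "region": "EU",
    "plan": "pro",
}
minimize(record, "analytics")  # -> {"region": "EU", "plan": "pro"}
```

Centralizing the allow-lists also gives the privacy or data security officer the legislation calls for a single place to audit and update what each business purpose is permitted to collect.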


Data Stewardship Best Practices

Business leaders must understand what makes data stewards successful in order to find the ideal candidates for the role. Johnson outlined some of the characteristics best suited for stewards. Coming from both business and IT: Many times, data stewards do best when they have a background in both technology and line-of-business department work. Johnson referred to them as “purple people” – having skills and experience spanning these two different job positions. Data stewards should be multiskilled, as well as “bilingual” and “bicultural” ... Acting as bridges: Data stewards should be able to translate both simple and complex information and communicate it in written or oral form. Johnson recommended that they also have a good sense of objectivity, distinguishing fact from fiction, and be able to envision what challenges and issues a company might face in the future. Excited by data: Thinking globally and participating in an influence culture, data stewards should get immersed in the ideas surrounding good Data Governance and better data handling. “When you’re talking to somebody, and they get really excited about data and their eyes light up, and they’re all energized and stuff, it’s a good sign – they might be fit for a steward role,” Johnson said.



Quote for the day:

"I find that the harder I work, the more luck I seem to have." -- Thomas Jefferson