
Daily Tech Digest - August 15, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy


DevSecOps 2.0: How Security-First DevOps Is Redefining Software Delivery

DevSecOps 2.0 is a true security-first revolution. This paradigm shift transforms software security into a proactive enabler, leveraging AI and policy-as-code to automate safeguards at scale. Security tools now blend seamlessly into developer workflows, and continuous compliance ensures real-time auditing. With ransomware, supply chain attacks, and other attacks on the rise, there is a need for a different approach to delivering resilient software. ... It marks a transformative approach to software development, where security is the foundation of the entire lifecycle. This evolution ensures proactive security that works to identify and neutralize threats early. ... AI-driven security is central to DevSecOps 2.0, which harnesses the power of artificial intelligence to transform security from a reactive process into a proactive defense strategy. By analyzing vast datasets, including security logs, network traffic, and code commit patterns, AI can detect subtle anomalies and predict potential threats before they materialize. This predictive capability enables teams to identify risks early, streamlining threat detection and facilitating automated remediation. For instance, AI can analyze commit patterns to predict code sections likely to contain vulnerabilities, allowing for targeted testing and prevention. 
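
The commit-pattern analysis described here can be illustrated with a small heuristic. The following Python sketch is purely hypothetical (the weights, keywords, and file names are assumptions, not any vendor's method), but it shows how churn plus a history of security fixes can flag code sections for targeted testing.

```python
from collections import Counter

def risk_scores(commits, bugfix_keywords=("fix", "cve", "vuln", "security")):
    """commits: iterable of (file_path, commit_message) tuples."""
    churn = Counter()
    fixes = Counter()
    for path, message in commits:
        churn[path] += 1
        if any(k in message.lower() for k in bugfix_keywords):
            fixes[path] += 1
    # Heuristic: frequently changed files with past security fixes rank higher.
    return {path: churn[path] + 3 * fixes[path] for path in churn}

history = [
    ("auth/login.py", "fix CVE-2024-1234 session handling"),  # hypothetical history
    ("auth/login.py", "refactor token parsing"),
    ("ui/theme.py", "update colors"),
]
for path, score in sorted(risk_scores(history).items(), key=lambda item: -item[1]):
    print(path, score)
```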


What CIOs can do when AI boosts performance but kills motivation

“One of the clearest signs is copy-paste culture,” Anderson says. “When employees use AI output as-is, without questioning it or tailoring it to their audience, that’s a sign of disengagement. They’ve stopped thinking critically.” To prevent this, CIOs can take a closer look at how teams actually use AI. Honest feedback from employees can help, but there’s often a gap between what people say they use AI for and how they actually use it in practice, so trying to detect patterns of copy-paste usage can help improve workflows. CIOs should also pay attention to how AI affects roles, identities, and team dynamics. When experienced employees feel replaced, or when previously valued skills are bypassed, morale can quietly drop, even if productivity remains high on paper. “In one case, a senior knowledge expert, someone who used to be the go-to for tough questions, felt displaced when leadership started using AI to get direct answers,” Anderson says. “His motivation dropped because he felt his value was being replaced by a tool.” Over time, this expert started to use AI strategically, and saw it could reduce the ad-hoc noise and give him space for more strategic work. “That shift from threatened to empowered is something every leader needs to watch for and support,” he adds.


That ‘cheap’ open-source AI model is actually burning through your compute budget

The inefficiency is particularly pronounced for Large Reasoning Models (LRMs), which use extended “chains of thought” to solve complex problems. These models, designed to think through problems step-by-step, can consume thousands of tokens pondering simple questions that should require minimal computation. For basic knowledge questions like “What is the capital of Australia?” the study found that reasoning models spend “hundreds of tokens pondering simple knowledge questions” that could be answered in a single word. ... The research revealed stark differences between model providers. OpenAI’s models, particularly its o4-mini and newly released open-source gpt-oss variants, demonstrated exceptional token efficiency, especially for mathematical problems. The study found OpenAI models “stand out for extreme token efficiency in math problems,” using up to three times fewer tokens than other commercial models. ... The findings have immediate implications for enterprise AI adoption, where computing costs can scale rapidly with usage. Companies evaluating AI models often focus on accuracy benchmarks and per-token pricing, but may overlook the total computational requirements for real-world tasks. 
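
A quick back-of-the-envelope comparison makes the point concrete; the prices and token counts in this Python sketch are placeholders rather than the study's actual figures.

```python
# Per-task cost = tokens used x price, so a low per-token price
# can still lose to a more token-efficient model.
def cost_per_task(tokens_per_task: int, price_per_million_tokens: float) -> float:
    return tokens_per_task / 1_000_000 * price_per_million_tokens

models = {
    # name: (average completion tokens per task, $ per 1M output tokens)
    "token-efficient-model": (300, 8.00),
    "verbose-reasoning-model": (3_000, 2.00),
}
for name, (tokens, price) in models.items():
    print(f"{name}: ${cost_per_task(tokens, price):.4f} per task")
```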


AI Agents and the data governance wild west

Today, anyone from an HR director to a marketing intern can quickly build and deploy an AI agent simply using Copilot Studio. This tool is designed to be accessible and quick, making it easy for anyone to play around with and launch a sophisticated agent in no time at all. But when these agents are created outside of the IT department, most users aren’t thinking about data classification or access controls, and they become part of a growing shadow IT problem. ... The problem is that most users will not be thinking like a developer with governance in mind when creating their own agents. Therefore, policies must be imposed to ensure that key security steps aren’t skipped in the rush to deploy a solution. A new layer of data governance must be considered with steps that include configuring data boundaries, restricting who can access what data according to job role and sensitivity level, and clearly specifying which data resources the agent can pull from. AI agents should be built for purpose, using principles of least privilege. This will help avoid a marketing intern having access to the entire company’s HR file. Just like any other business-critical application, an agent needs to be adequately tested and ‘red-teamed’. Perform penetration testing to identify what data the agent can surface, to whom, and how accurate the data is.
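
As a rough illustration of "data boundaries plus least privilege" for agents, here is a hypothetical Python sketch; the roles, source names, and deny-by-default rule are assumptions, not Copilot Studio configuration.

```python
# Deny-by-default mapping of agent roles to the data sources they may read.
ALLOWED_SOURCES = {
    "marketing_agent": {"campaign_metrics", "public_product_catalog"},
    "hr_agent": {"hr_policies", "benefits_faq"},
}

def can_access(agent_role: str, data_source: str) -> bool:
    """An agent may only pull from sources explicitly granted to its role."""
    return data_source in ALLOWED_SOURCES.get(agent_role, set())

assert can_access("hr_agent", "benefits_faq")
assert not can_access("marketing_agent", "hr_employee_files")  # least privilege in action
```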


Monitoring microservices: Best practices for robust systems

Collecting extensive amounts of telemetry data is most beneficial if you can combine, visualize and examine it successfully. A unified observability stack is paramount. By integrating tools like middleware that work together seamlessly, you create a holistic view of your microservices ecosystem. These unified tools ensure that all your telemetry information — logs, traces and metrics — is correlated and accessible from a single pane of glass, dramatically decreasing the mean time to detect (MTTD) and mean time to resolve (MTTR) problems. The power lies in seeing the whole picture, not just isolated data points. ... Collecting information is good, but acting on it is better. Define meaningful service level objectives (SLOs) that reflect the expected performance and reliability of your services. ... Monitoring microservices effectively is an ongoing journey that requires a commitment to data standardization, the right tools and a proactive mindset. By adopting standardized observability practices and a unified observability stack, continuously monitoring key metrics, setting meaningful SLOs and enabling deeper root cause analysis, you can build a robust and resilient microservices architecture that truly serves your business needs and delights your customers.
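
To make the SLO advice concrete, here is a minimal Python sketch of an error-budget check; the target and request counts are illustrative.

```python
def error_budget_remaining(slo_target: float, total_requests: int, failed_requests: int) -> float:
    """E.g. slo_target=0.999 means 0.1% of requests may fail before the budget is spent."""
    allowed_failures = (1 - slo_target) * total_requests
    return allowed_failures - failed_requests

remaining = error_budget_remaining(slo_target=0.999, total_requests=1_000_000, failed_requests=700)
print(f"Error budget remaining: {remaining:.0f} requests")
if remaining < 0:
    print("SLO at risk: prioritize reliability work over new releases")
```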


How military leadership prepares veterans for cybersecurity success

After dealing with extremely high-pressure environments, in which making the wrong decision can cost lives, veterans and reservists have little trouble dealing with the kinds of risks found in the world of business, such as threats to revenue, brand value and jobs. What’s more, the time-critical mission mindset so essential within the military is highly relevant within cybersecurity, where attacks and breaches must be dealt with confidently, rapidly and calmly. In the armed forces, people often find themselves in situations so intense that Maslow’s hierarchy of needs is flipped on its head. You’re not aiming for self-actualization or more advanced goals, but simply trying to keep the team alive and maintain essential operations. ... Military experience, on the other hand, fosters unparalleled trust, honesty and integrity within teams. Armed forces personnel must communicate really difficult messages. Telling people that many of them may die within hours demands a harsh honesty, but it builds trust. Combine this with an ability to achieve shared goals, and military leaders inspire others to follow them regardless of the obstacles. So veterans bring blunt honesty, communication, and a mission focus to do what is needed to succeed. These are all characteristics that are essential in cybersecurity, where you have to call out critical risks that others might avoid discussing.


Reclaiming the Architect’s Role in the SDLC

Without an active architect guiding the design and implementation, systems can experience architectural drift, a term that describes the gradual divergence from the intended system design, leading to a fragmented and harder-to-manage system. In the absence of architectural oversight, development teams may optimize for individual tasks at the expense of the system’s overall performance, scalability, and maintainability. ... The architect is primarily accountable for the overall design and ensuring the system’s quality, performance, scalability, and adaptability to meet changing demands. However, relying on outdated practices, like manually written and updated design documents, is no longer effective. The modern software landscape, with multiple services, external resources, and off-the-shelf integrations, makes such documents stale almost as soon as they’re written. Consequently, architects must use automated tools to document and monitor live system architectures. These tools can help architects identify potential issues almost in real time, which allows them to proactively address problems and ensure design integrity throughout the development process. These tools are especially useful in the design stage, allowing architects to reclaim the role they once possessed and the responsibilities that come with it.
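
A toy example of the kind of automated check such tools perform: comparing an intended dependency map against dependencies observed at runtime (for example, derived from traces). The service names and intended map in this Python sketch are hypothetical.

```python
# Intended service-to-service dependencies from the architect's design.
INTENDED = {
    "checkout": {"payments", "inventory"},
    "payments": {"ledger"},
}

def drift_report(observed: dict) -> list:
    """Flag observed calls that were never part of the intended design."""
    findings = []
    for service, deps in observed.items():
        for dep in sorted(deps - INTENDED.get(service, set())):
            findings.append(f"{service} -> {dep}: not in the intended architecture")
    return findings

observed = {"checkout": {"payments", "inventory", "ledger"}}  # e.g. derived from tracing data
print("\n".join(drift_report(observed)) or "No architectural drift detected")
```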


Is Vibe Coding Ready for Prime Time?

As the vibe coding ecosystem matures, AI coding platforms are rolling out safeguards like dev/prod separation, backups/rollback, single sign-on, and SOC 2 reporting, yet audit logging is still not uniform across tools. But until these enterprise-grade controls become standard, organizations must proactively build their own guardrails to ensure AI-generated code remains secure, scalable and trustworthy. This calls for a risk-based approach, one that adjusts oversight based on the likelihood and impact of potential risks. Not all use cases carry the same weight. Some are low-stakes and well-suited for experimentation, while others may introduce serious security, regulatory or operational risks. By focusing controls where they’re most needed, a risk-based approach helps protect critical systems while still enabling speed and innovation in safer contexts. ... To effectively manage the risks of vibe coding, teams need to ask targeted questions that reflect the unique challenges of AI-generated code. These questions help determine how much oversight is needed and whether vibe coding is appropriate for the task at hand. ... Vibe coding unlocks new ways of thinking for software development. However, it also shifts risk upstream. The speed of code generation doesn’t eliminate the need for review, control and accountability. In fact, it makes those even more important.
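
One way to operationalize that risk-based approach is a simple scoring gate over such questions; the questions and thresholds in this Python sketch are assumptions for illustration, not a published standard.

```python
# Each "yes" answer adds weight; higher totals demand heavier oversight.
QUESTIONS = {
    "touches_customer_or_regulated_data": 3,
    "changes_auth_payments_or_infra": 3,
    "exposed_to_the_internet": 2,
    "throwaway_prototype": -2,
}

def oversight_level(answers: dict) -> str:
    score = sum(weight for key, weight in QUESTIONS.items() if answers.get(key))
    if score >= 5:
        return "mandatory human review plus security testing"
    if score >= 2:
        return "peer review before merge"
    return "lightweight review; fine for experimentation"

print(oversight_level({"touches_customer_or_regulated_data": True, "exposed_to_the_internet": True}))
```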


7 reasons the SOC is in crisis — and 5 steps to fix it

The problem is that our systems verify accounts, not actual people. Once an attacker assumes a user’s identity through social engineering, they can often operate within normal parameters for extended periods. Most detection systems aren’t sophisticated enough to recognise that John Doe’s account is being used by someone who isn’t actually John Doe. ... In large enterprises with organic system growth, different system owners, legacy environments, and shadow SaaS integrations, misconfigurations are inevitable. No vulnerability scanner will flag identity systems configured inconsistently across domains, cloud services with overly permissive access policies, or network segments that bypass security controls. These misconfigurations often provide attackers with the lateral movement opportunities they need once they’ve gained initial access through compromised credentials. Yet most organizations have no systematic approach to identifying and remediating these architectural weaknesses. ... External SOC providers offer round-the-clock monitoring and specialised expertise, but they lack the organizational context that makes detection effective. They don’t understand your business processes, can’t easily distinguish between legitimate and suspicious activities, and often lack the authority to take decisive action.
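
The gap described here, verifying accounts rather than people, is the kind of thing a behavioral baseline can partially close. The sketch below is a toy illustration with made-up fields, not a production detection rule.

```python
# Per-user baseline of "normal" behavior; deviations raise flags even when credentials are valid.
BASELINE = {
    "jdoe": {"countries": {"GB"}, "devices": {"laptop-1234"}, "hours": range(7, 20)},
}

def anomaly_flags(user: str, session: dict) -> list:
    profile = BASELINE.get(user, {})
    flags = []
    if session["country"] not in profile.get("countries", set()):
        flags.append("unusual country")
    if session["device_id"] not in profile.get("devices", set()):
        flags.append("unrecognized device")
    if session["hour"] not in profile.get("hours", range(24)):
        flags.append("out-of-hours activity")
    return flags

print(anomaly_flags("jdoe", {"country": "US", "device_id": "vm-9", "hour": 3}))
# ['unusual country', 'unrecognized device', 'out-of-hours activity']
```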


One Network: Cloud-Agnostic Service and Policy-Oriented Network Architecture

The goal of One Network is to enable uniform policies across services. To do so, we are looking to overcome the complexities of heterogeneous networking, different language runtimes, and the coexistence of monolith services and microservices. These complexities span multiple environments, including public, private, and multi-cloud setups. The idea behind One Network is to simplify the current state of affairs by asking, "Why do I need so many networks? Can I have one network?" ... One Network enables you to manage such a service by applying governance, orchestrating policy, and managing the small independent services. Each of these microservices is imagined as a service endpoint. This enables orchestrating and grouping these service endpoints without application developers needing to modify service implementation, so everything is done on a network. There are three ways to manage these service endpoints. The first is the classic model: you add a load balancer before a workload, such as a shopping cart service running in multiple regions, and that becomes your service endpoint. ... If you start with a flat network but want to create boundaries, you can segment by exposing only certain services and keeping others hidden. 
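
As a loose illustration of the endpoint idea (not the actual One Network implementation), the Python sketch below registers service endpoints and derives routes only for those explicitly exposed, which is how segmentation on a flat network can be expressed as policy.

```python
from dataclasses import dataclass

@dataclass
class ServiceEndpoint:
    name: str
    regions: list
    exposed: bool = False  # hidden by default; segmentation = explicit exposure

REGISTRY = [
    ServiceEndpoint("shopping-cart", ["us-east", "eu-west"], exposed=True),
    ServiceEndpoint("inventory-sync", ["us-east"]),
]

def public_routes(registry):
    """Only exposed endpoints are published behind the load balancer."""
    return {e.name: e.regions for e in registry if e.exposed}

print(public_routes(REGISTRY))  # {'shopping-cart': ['us-east', 'eu-west']}
```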

Daily Tech Digest - August 13, 2025


Quote for the day:

“You don’t lead by pointing and telling people some place to go. You lead by going to that place and making a case.” -- Ken Kesey


9 things CISOs need to know about the dark web

There’s a growing emphasis on scalability and professionalization, with aggressive promotion and recruitment for ransomware-as-a-service (RaaS) operations. This includes lucrative affiliate programs to attract technically skilled partners and tiered access enabling affiliates to pay for premium tools, zero-day exploits or access to pre-compromised networks. It’s fragmenting into specialized communities that include credential marketplaces, exploit exchanges for zero-days, malware kits, and access to compromised systems, and forums for fraud tools. Initial access brokers (IABs) are thriving, selling entry points into corporate environments, which are then monetized by ransomware affiliates or data extortion groups. Ransomware leak sites showcase attackers’ successes, publishing sample files, threats of full data dumps as well as names and stolen data of victim organizations that refuse to pay. ... While DDoS-for-hire services have existed for years, their scale and popularity are growing. “Many offer free trial tiers, with some offering full-scale attacks with no daily limits, dozens of attack types, and even significant 1 Tbps-level output for a few thousand dollars,” Richard Hummel, cybersecurity researcher and threat intelligence director at Netscout, says. The operations are becoming more professional and many platforms mimic legitimate e-commerce sites displaying user reviews, seller ratings, and dispute resolution systems to build trust among illicit actors.


CMMC Compliance: Far More Than Just an IT Issue

For many years, companies working with the US Department of Defense (DoD) treated regulatory mandates including the Cybersecurity Maturity Model Certification (CMMC) as a matter best left to the IT department. The prevailing belief was that installing the right software and patching vulnerabilities would suffice. Yet, reality tells a different story. Increasingly, audits and assessments reveal that when compliance is seen narrowly as an IT responsibility, significant gaps emerge. In today’s business environment, managing controlled unclassified information (CUI) and federal contract information (FCI) is a shared responsibility across various departments – from human resources and manufacturing to legal and finance. ... For CMMC compliance, there needs to be continuous assurance involving regularly monitoring systems, testing controls and adapting security protocols whenever necessary. ... Businesses are having to rethink much of their approach to security because of CMMC requirements. Rather than treating it as something to be handed off to the IT department, organizations must now commit to a comprehensive, company-wide strategy. Integrating thorough physical security, ongoing training, updated internal policies and steps for continuous assurance mean companies can build a resilient framework that meets today’s regulatory demands and prepares them to rise to challenges on the horizon.


Beyond Burnout: Three Ways to Reduce Frustration in the SOC

For years, we’ve heard how cybersecurity leaders need to get “business smart” and better understand business operations. That is mostly happening, but it’s backwards. What we need is for business leaders to learn cybersecurity, and even further, recognize it as essential to their survival. Security cannot be viewed as some cost center tucked away in a corner; it’s the backbone of your entire operation. It’s also part of an organization’s cyber insurance – the internal insurance. Simply put, cybersecurity is the business, and you absolutely cannot sell without it. ... SOCs face a deluge of alerts, threats, and data that no human team can feasibly process without burning out. While many security professionals remain wary of artificial intelligence, thoughtfully embracing AI offers a path toward sustainable security operations. This isn’t about replacing analysts with technology. It’s about empowering them to do the job they actually signed up for. AI can dramatically reduce toil by automating repetitive tasks, provide rapid insights from vast amounts of data, and help educate junior staff. Instead of spending hours manually reviewing documents, analysts can leverage AI to extract key insights in minutes, allowing them to apply their expertise where it matters most. This shift from mundane processing to meaningful analysis can dramatically improve job satisfaction.


7 legal considerations for mitigating risk in AI implementation

AI systems often rely on large volumes of data, including sensitive personal, financial and business information. Compliance with data privacy laws is critical, as regulations such as the European Union’s General Data Protection Regulation, the California Consumer Privacy Act and other emerging state laws impose strict requirements on the collection, processing, storage and sharing of personal data. ... AI systems can inadvertently perpetuate or amplify biases present in training data, leading to unfair or discriminatory outcomes. This risk is present in any sector, from hiring and promotions to customer engagement and product recommendations. ... The legal framework surrounding AI is evolving rapidly. In the U.S., multiple federal agencies, including the Federal Trade Commission and Equal Employment Opportunity Commission, have signaled they will apply existing laws to AI use cases. AI-specific state laws, including in California and Utah, have taken effect in the last year. ... AI projects involve unique intellectual property questions related to data ownership and IP rights in AI-generated works. ... AI systems can introduce new cybersecurity vulnerabilities, including risks related to data integrity, model manipulation and adversarial attacks. Organizations must prioritize cybersecurity to protect AI assets and maintain trust.


Forrester’s Keys To Taming ‘Jekyll and Hyde’ Disruptive Tech

“Disruptive technologies are a double-edged sword for environmental sustainability, offering both crucial enablers and significant challenges,” explained the 15-page report written by Abhijit Sunil, Paul Miller, Craig Le Clair, Renee Taylor-Huot, Michele Pelino, with Amy DeMartine, Danielle Chittem, and Peter Harrison. “On the positive side,” it continued, “technology innovations accelerate energy and resource efficiency, aid in climate adaptation and risk mitigation, monitor crucial sustainability metrics, and even help in environmental conservation.” “However,” it added, “the necessary compute power, volume of waste, types of materials needed, and scale of implementing these technologies can offset their benefits.” ... “To meet sustainability goals with automation and AI,” he told TechNewsWorld, “one of our recommendations is to develop proofs of concept for ‘stewardship agents’ and explore emerging robotics focused on sustainability.” When planning AI operations, Franklin Manchester, a principal global industry advisor at SAS, an analytics and artificial intelligence software company in Cary, N.C., cautioned, “Not every nut needs to be cracked with a sledgehammer.” “Start with good processes — think lean process mapping, for example — and deploy AI where it makes sense to do so,” he told TechNewsWorld.


5 Key Benefits of Data Governance

Data governance processes establish data ethics, a code of behavior providing a trustworthy business climate and compliance with regulatory requirements. The IAPP calculates that 79% of the world’s population is now protected under privacy regulations such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). This statistic highlights the importance of governance frameworks for risk management and customer trust. ... Data governance frameworks recognize data governance roles and responsibilities and streamline processes so that corporate-wide communications can improve. This systematic approach sets up businesses to be more agile, increasing the “freedom to innovate, invest, or hunker down and focus internally,” says O’Neal. For example, Freddie Mac developed a solid data strategy that streamlined data governance communications and later had the level of buy-in for the next iteration. ... With a complete picture of business activities, challenges, and opportunities, data governance creates the flexibility to respond quickly to changing needs. This allows for better self-service business intelligence, where business users can gather multi-structured data from various sources and convert it into actionable intelligence.


Architecture Lessons from Two Digital Transformations

The prevailing mindset was that of “Don’t touch what isn’t broken”. This approach, though seemingly practical, reflected a deeper inertia, rooted in a cash-strapped culture and leadership priorities that often leaned towards prestige over progress. Over the years, the organization had acquired others in an attempt to grow its customer base. These mergers and acquisitions led to the inheritance of a much larger legacy estate. The mess burgeoned to the extent that they needed a transformation, not now, but yesterday! That is exactly where the Enterprise Architecture practice comes into the picture. Strategically, a greenfield approach was suggested: a brand-new system built from scratch, with modern data centers for the infrastructure, cloud platforms for the applications, plug-and-play architecture (or composable architecture, as it is better known) for technology, unified yet diversified multi-branding under one umbrella, and the whole works. Things slowly started taking a downhill turn when they decided to “outsource” the entire development of this new and shiny platform to a vendor. The reasoning was that the organization did not want to diversify from being a banking institution and turn into an IT-heavy organization. They sought experienced engineering teams who could hit the ground running and deliver in two years flat.


Cloud security in multi-tenant environments

The most useful security strategy in a multi-tenant cloud environment comes from cultivating a security-first culture. It is important to educate the team on the intricacies of cloud security, implement stringent password and authentication policies, and promote secure practices for development. Security teams and company executives can reduce the potential impact of breaches and remain ready for changing threats with the support of event simulations, tabletop exercises, and regular training. ... As we navigate the evolving landscape of enterprise cloud computing, multi-tenant environments will undoubtedly remain a cornerstone of modern IT infrastructure. However, the path forward demands more than just technological adaptation – it requires a fundamental shift in how we approach security in shared spaces. Organizations must embrace a comprehensive defense-in-depth strategy that transcends traditional boundaries, encompassing everything from robust infrastructure hardening to sophisticated application security and meticulous user governance. The future of cloud computing need not present a binary choice between efficiency and security. ... By placing security at the heart of multi-tenant operations, organizations can fully harness the transformative power of cloud technology while protecting their most critical assets.


This Big Data Lesson Applies to AI

Bill Schmarzo was one of the most vocal supporters of the idea that there were no silver bullets, and that successful business transformation was the result of careful planning and a lot of hard work. A decade ago, the “Dean of Big Data” let this publication in on the secret recipe he would use to guide his clients. He called it the SAM test, and it allowed business leaders to gauge the viability of new IT projects through three lenses. First, is the new project strategic? That is, will it make a big difference for the company? If it won’t, why are you investing lots of money? Second, is the proposed project actionable? You might be able to get some insight with the new tech, but can your business actually do anything with it? Third, is the project material? The new project might technically be feasible, but if the costs outweigh the benefits, then it’s a failure. Schmarzo, who is currently working as Dell’s Customer AI and Data Innovation Strategist, was also a big proponent of the importance of data governance and data management. The same data governance and data management bugaboos that doomed so many big data projects are, not surprisingly, raising their ugly little heads in the age of AI. Which brings us to the current AI hype wave. We’re told that trillions of dollars are on the line with large language models, that we’re on the cusp of a technological transformation the likes of which we have never seen.


Sovereign cloud and digital public infrastructure: Building India’s AI backbone

India’s Digital Public Infrastructure (DPI) is an open, interoperable platform that powers essential services like identity and payments. It comprises foundational systems that are accessible, secure, and support seamless integration. In practice, this has taken shape as the famous “India Stack.” ... India’s digital economy is on an exciting trajectory. A large slice of that will be AI-driven services like smart agriculture, precision health, financial inclusion, and more. But to fully capitalize on this opportunity, we need both rich data and trusted compute. DPI provides vast amounts of structured data (financial records, IDs, health info) and access channels. Combining that with a sovereign cloud means we can turn data into insight on Indian soil. Indian regulators now view data itself as a strategic asset and fuel for AI. AI pilots (e.g., local-language advisory bots) are already being built on top of DPI platforms (UPI, ONDC, etc.) to deliver inclusive services. And the government has even subsidized thousands of GPUs for researchers. But all this computing and data must be hosted securely. If our AI models and sensitive datasets live on foreign soil, we remain vulnerable to geopolitical shifts and export controls. ... Now, policy is catching up with sovereignty. In 2023, the new Digital Personal Data Protection (DPDP) Act formally mandated local storage for sensitive personal data. 

Daily Tech Digest - July 03, 2025


Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti


The Goldilocks Theory – preparing for Q-Day ‘just right’

When it comes to quantum readiness, businesses currently have two options: Quantum key distribution (QKD) and post quantum cryptography (PQC). Of these, PQC reigns supreme. Here’s why. On the one hand, you have QKD which leverages principles of quantum physics, such as superposition, to securely distribute encryption keys. Although great in theory, it needs extensive new infrastructure, including bespoke networks and highly specialised hardware. More importantly, it also lacks authentication capabilities, severely limiting its practical utility. PQC, on the other hand, comprises classical cryptographic algorithms specifically designed to withstand quantum attacks. It can be integrated into existing digital infrastructures with minimal disruption. ... Imagine installing new quantum-safe algorithms prematurely, only to discover later they’re vulnerable, incompatible with emerging standards, or impractical at scale. This could have the opposite effect and could inadvertently increase attack surface and bring severe operational headaches, ironically becoming less secure. But delaying migration for too long also poses serious risks. Malicious actors could be already harvesting encrypted data, planning to decrypt it when quantum technology matures – so businesses protecting sensitive data such as financial records, personal details, intellectual property cannot afford indefinite delays.
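
A practical first step either way is a cryptographic inventory, so migration can be planned deliberately rather than prematurely. The Python sketch below is a toy classifier; the inventory list and categories are assumptions (the PQC names reflect the NIST-standardized ML-KEM, ML-DSA, and SLH-DSA families).

```python
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DSA", "DH"}   # public-key schemes at risk
PQC_FAMILIES = {"ML-KEM", "ML-DSA", "SLH-DSA"}               # NIST-standardized replacements

def classify(algorithm: str) -> str:
    name = algorithm.upper()
    if any(name.startswith(v) for v in QUANTUM_VULNERABLE):
        return "plan migration: breakable by a future quantum adversary"
    if any(name.startswith(p) for p in PQC_FAMILIES):
        return "ok: post-quantum algorithm"
    return "review: symmetric or unknown (check key/hash sizes)"

inventory = ["RSA-2048", "ECDSA-P256", "AES-256-GCM", "ML-KEM-768"]  # e.g. from a cert/config scan
for alg in inventory:
    print(f"{alg}: {classify(alg)}")
```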


Sovereign by Design: Data Control in a Borderless World

The regulatory framework for digital sovereignty is a national priority. The EU has set the pace with GDPR and GAIA-X. It prioritizes data residency and local infrastructure. China's cybersecurity law and personal information protection law enforce strict data localization. India's DPDP Act mandates local storage for sensitive data, aligning with its digital self-reliance vision through platforms such as Aadhaar. Russia's federal law No. 242-FZ requires citizen data to stay within the country for the sake of national security. Australia's privacy act focuses on data privacy, especially for health records, and Canada's PIPEDA encourages local storage for government data. Saudi Arabia's personal data protection law enforces localization for sensitive sectors, and Indonesia's personal data protection law covers all citizen-centric data. Singapore's PDPA balances privacy with global data flows, and Brazil's LGPD, mirroring the EU's GDPR, mandates the protection of privacy and fundamental rights of its citizens. ... Tech companies have little option but to comply with the growing demands of digital sovereignty. For example, Amazon Web Services has a digital sovereignty pledge, committing to "a comprehensive set of sovereignty controls and features in the cloud" without compromising performance.


Agentic AI Governance and Data Quality Management in Modern Solutions

Agentic AI governance is a framework that ensures artificial intelligence systems operate within defined ethical, legal, and technical boundaries. This governance is crucial for maintaining trust, compliance, and operational efficiency, especially in industries such as Banking, Financial Services, Insurance, and Capital Markets. In tandem with robust data quality management, Agentic AI governance can substantially enhance the reliability and effectiveness of AI-driven solutions. ... In industries such as Banking, Financial Services, Insurance, and Capital Markets, the importance of Agentic AI governance cannot be overstated. These sectors deal with vast amounts of sensitive data and require high levels of accuracy, security, and compliance. Here’s why Agentic AI governance is essential: Enhanced Trust: Proper governance fosters trust among stakeholders by ensuring AI systems are transparent, fair, and reliable. Regulatory Compliance: Adherence to legal and regulatory requirements helps avoid penalties and safeguard against legal risks. Operational Efficiency: By mitigating risks and ensuring accuracy, AI governance enhances overall operational efficiency and decision-making. Protection of Sensitive Data: Robust governance frameworks protect sensitive financial data from breaches and misuse, ensuring privacy and security. 


Fundamentals of Dimensional Data Modeling

Keeping the dimensions separate from facts makes it easier for analysts to slice-and-dice and filter data to align with the relevant context underlying a business problem. Data modelers organize these facts and descriptive dimensions into separate tables within the data warehouse, aligning them with the different subject areas and business processes. ... Dimensional modeling provides a basis for meaningful analytics gathered from a data warehouse for many reasons. Its processes lead to standardizing dimensions through presenting the data blueprint intuitively. Additionally, dimensional data modeling proves to be flexible as business needs evolve. The data warehouse updates technology according to the concept of slowly changing dimensions (SCD) as business contexts emerge. ... Alignment in the design requires these processes, and data governance plays an integral role in getting there. Once the organization is on the same page about the dimensional model’s design, it chooses the best kind of implementation. Implementation choices include the star or snowflake schema around a fact. When organizations have multiple facts and dimensions, they use a cube. A dimensional model defines how technology needs to build a data warehouse architecture or one of its components using good design and implementation.
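
A minimal star-schema illustration in Python (using pandas) shows the fact/dimension split in practice; the tables and columns are invented for the example.

```python
import pandas as pd

# Dimension tables: descriptive context for slicing and filtering.
dim_product = pd.DataFrame({"product_id": [1, 2], "category": ["Books", "Toys"]})
dim_date = pd.DataFrame({"date_id": [20250101, 20250102], "month": ["Jan", "Jan"]})

# Fact table: measurable events keyed to the dimensions.
fact_sales = pd.DataFrame({
    "product_id": [1, 2, 1],
    "date_id": [20250101, 20250101, 20250102],
    "amount": [12.0, 30.0, 8.0],
})

# Slice-and-dice: revenue by category and month via joins on the dimension keys.
report = (fact_sales
          .merge(dim_product, on="product_id")
          .merge(dim_date, on="date_id")
          .groupby(["category", "month"])["amount"].sum())
print(report)
```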


IDE Extensions Pose Hidden Risks to Software Supply Chain

The latest research, published this week by application security vendor OX Security, reveals the hidden dangers of verified IDE extensions. While IDEs provide an array of development tools and features, there are a variety of third-party extensions that offer additional capabilities and are available in both official marketplaces and external websites. ... But OX researchers realized they could add functionality to verified extensions after the fact and still maintain the checkmark icon. After analyzing traffic for Visual Studio Code, the researchers found a server request to the marketplace that determines whether the extension is verified; they discovered they could modify the values featured in the server request and maintain the verification status even after creating malicious versions of the approved extensions. ... Using this attack technique, a threat actor could inject malicious code into verified and seemingly safe extensions that would maintain their verified status. "This can result in arbitrary code execution on developers' workstations without their knowledge, as the extension appears trusted," Siman-Tov Bustan and Zadok wrote. "Therefore, relying solely on the verified symbol of extensions is inadvisable." ... "It only takes one developer to download one of these extensions," he says. "And we're not talking about lateral movement. ..."


Business Case for Agentic AI SOC Analysts

A key driver behind the business case for agentic AI in the SOC is the acute shortage of skilled security analysts. The global cybersecurity workforce gap is now estimated at 4 million professionals, but the real bottleneck for most organizations is the scarcity of experienced analysts with the expertise to triage, investigate, and respond to modern threats. One ISC2 survey report from 2024 shows that 60% of organizations worldwide reported staff shortages significantly impacting their ability to secure the organizations, with another report from the World Economic Forum showing that just 15% of organizations believe they have the right people with the right skills to properly respond to a cybersecurity incident. Existing teams are stretched thin, often forced to prioritize which alerts to investigate and which to leave unaddressed. As previously mentioned, the flood of false positives in most SOCs means that even the most experienced analysts are too distracted by noise, increasing exposure to business-impacting incidents. Given these realities, simply adding more headcount is neither feasible nor sustainable. Instead, organizations must focus on maximizing the impact of their existing skilled staff. The AI SOC Analyst addresses this by automating routine Tier 1 tasks, filtering out noise, and surfacing the alerts that truly require human judgment. 


Microservice Madness: Debunking Myths and Exposing Pitfalls

Microservices will reduce dependencies, because they force you to serialize your types into generic graph objects (read: JSON, XML or something similar). This implies that you can just transform your classes into a generic graph object at their interface edges, and accomplish the exact same thing. ... There are valid arguments for using message brokers, and there are valid arguments for decoupling dependencies. There are even valid points of scaling out horizontally by segregating functionality onto different servers. But if your argument in favor of using microservices is "because it eliminates dependencies," you're either crazy, corrupt through to the bone, or you have absolutely no idea what you're talking about (make your pick!) Because you can easily achieve the same amount of decoupling using Active Events and Slots, combined with a generic graph object, in-process, and it will execute 2 billion times faster in production than your "microservice solution" ... "Microservice Architecture" and "Service Oriented Architecture" (SOA) have probably caused more harm to our industry than the financial crisis in 2008 caused to our economy. And the funny thing is, the damage is ongoing because of people repeating mindless superstitious belief systems as if they were the truth.
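
The "slots" idea referenced above can be approximated in-process in a few lines; this Python sketch is a generic analogue (not the author's actual Active Events implementation): callers know only a slot name and a plain dict, so modules stay decoupled without a network hop.

```python
SLOTS = {}

def slot(name):
    """Register a function under a slot name."""
    def register(fn):
        SLOTS[name] = fn
        return fn
    return register

def signal(name, payload: dict) -> dict:
    """Invoke a slot by name with a generic payload (our 'graph object')."""
    return SLOTS[name](payload)

@slot("orders.create")
def create_order(payload: dict) -> dict:
    return {"status": "created", "items": payload.get("items", [])}

# The caller depends on a name and a dict, not on the implementing class or a remote service.
print(signal("orders.create", {"items": ["book"]}))
```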


Sustainability and social responsibility

Direct-to-chip liquid cooling delivers impressive efficiency but doesn’t manage the entire thermal load. That’s why hybrid systems that combine liquid and traditional air cooling are increasingly popular. These systems offer the ability to fine-tune energy use, reduce reliance on mechanical cooling, and optimize server performance. HiRef offers advanced cooling distribution units (CDUs) that integrate liquid-cooled servers with heat exchangers and support infrastructure like dry coolers and dedicated high-temperature chillers. This integration ensures seamless heat management regardless of local climate or load fluctuations. ... With liquid cooling systems capable of operating at higher temperatures, facilities can increasingly rely on external conditions for passive cooling. This shift not only reduces electricity usage, but also allows for significant operational cost savings over time. But this sustainable future also depends on regulatory compliance, particularly in light of the recently updated F-Gas Regulation, which took effect in March 2024. The EU regulation aims to reduce emissions of fluorinated greenhouse gases to net-zero by 2050 by phasing out harmful high-GWP refrigerants like HFCs. “The F-Gas regulation isn’t directly tailored to the data center sector,” explains Poletto.


Infrastructure Operators Leaving Control Systems Exposed

Threat intelligence firm Censys has scanned the internet twice a month for the last six months, looking for a representative sample composed of four widely used types of ICS devices publicly exposed to the internet. Overall exposure slightly increased from January through June, the firm said Monday. One of the device types Censys scanned for is programmable logic controllers made by Israel-based Unitronics. The firm's Vision-series devices get used in numerous industries, including the water and wastewater sector. Researchers also counted publicly exposed devices built by Israel-based Orpak - a subsidiary of Gilbarco Veeder-Root - that run SiteOmat fuel station automation software. It also looked for devices made by Red Lion that are widely deployed for factory and process automation, as well as in oil and gas environments. It additionally probed for instances of a facilities automation software framework known as Niagara, made by Tridium. ... Report author Emily Austin, principal security researcher at Censys, said some fluctuation over time isn't unusual, given how "services on the internet are often ephemeral by nature." The greatest number of publicly exposed systems were in the United States, except for Unitronics, which are also widely used in Australia.


Healthcare CISOs must secure more than what’s regulated

Security must be embedded early and consistently throughout the development lifecycle, and that requires cross-functional alignment and leadership support. Without an understanding of how regulations translate into practical, actionable security controls, CISOs can struggle to achieve traction within fast-paced development environments. ... Security objectives should be mapped to these respective cycles—addressing tactical issues like vulnerability remediation during sprints, while using PI planning cycles to address larger technical and security debt. It’s also critical to position security as an enabler of business continuity and trust, rather than a blocker. Embedding security into existing workflows rather than bolting it on later builds goodwill and ensures more sustainable adoption. ... The key is intentional consolidation. We prioritize tools that serve multiple use cases and are extensible across both DevOps and security functions. For example, choosing solutions that can support infrastructure-as-code security scanning, cloud posture management, and application vulnerability detection within the same ecosystem. Standardizing tools across development and operations not only reduces overhead but also makes it easier to train teams, integrate workflows, and gain unified visibility into risk.

Daily Tech Digest - January 29, 2025


Quote for the day:

"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer


Evil Models and Exploits: When AI Becomes the Attacker

A more structured threat emerges with technologies like the Model Context Protocol (MCP). Originally introduced by Anthropic, MCP allows large language models (LLMs) to interact with host machines via JavaScript APIs. This enables LLMs to perform sophisticated operations by controlling local resources and services. While MCP is being embraced by developers for legitimate use cases, such as automation and integration, its darker implications are clear. An MCP-enabled system could orchestrate a range of malicious activities with ease. Think of it as an AI-powered operator capable of executing everything from reconnaissance to exploitation. ... The proliferation of AI models is both a blessing and a curse. Platforms like Hugging Face host over a million models, ranging from state-of-the-art neural networks to poorly designed or maliciously altered versions. Amid this abundance lies a growing concern: model provenance. Imagine a widely used model, fine-tuned by a seemingly reputable maintainer, turning out to be a tool of a state actor. Subtle modifications in the training data set or architecture could embed biases, vulnerabilities or backdoors. These “evil models” could then be distributed as trusted resources, only to be weaponized later. This risk underscores the need for robust mechanisms to verify the origins and integrity of AI models.


The tipping point for Generative AI in banking

Advancements in AI are allowing banks and other fintechs to embed the technology across their entire value chain. For example, TBC is leveraging AI to make 42% of all payment reminder calls to customers with loans that are up to 30 days or less overdue and is getting ready to launch other AI-enabled solutions. Customers normally cannot differentiate the AI calls powered by our tech from calls by humans, even as the AI calls are ten times more efficient for TBC’s bottom line, compared with human operator calls. Klarna rolled out an AI assistant, which handled 2.3 million conversations in its first month of operation, which accounts for two-thirds of Klarna’s customer service chats or the workload of 700 full-time agents, the company estimated. Deutsche Bank leverages generative AI for software creation and managing adverse media, while the European neobank Bunq applies it to detect fraud. Even smaller regional players, provided they have the right tech talent in place, will soon be able to deploy Gen AI at scale and incorporate the latest innovations into their operations. Next year is set to be a watershed year when this step change will create a clear division in the banking sector between AI-enabled champions and other players that will soon start lagging behind. 


Want to be an effective cybersecurity leader? Learn to excel at change management

Security should never be an afterthought; the change management process shouldn’t be, either, says Michael Monday, a managing director in the security and privacy practice at global consulting firm Protiviti. “The change management process should start early, before changing out the technology or process,” he says. “There should be some messages going out to those who are going to be impacted letting them know, [otherwise] users will be surprised, they won’t know what’s going on, business will push back and there will be confusion.” ... “It’s often the CISO who now has to push these new things,” says Moyle, a former CISO, founding partner of the firm SecurityCurve, and a member of the Emerging Trends Working Group with the professional association ISACA. In his experience, Moyle says he has seen some workers more willing to change than others and learned to enlist those workers as allies to help him achieve his goals. ... When it comes to the people portion, she tells CISOs to “feed supporters and manage detractors.” As for process, “identify the key players for the security program and understand their perspective. There are influencers, budget holders, visionaries, and other stakeholders — each of which needs to be heard, and persuaded, especially if they’re a detractor.”


Preparing financial institutions for the next generation of cyber threats

Collaboration between financial institutions, government agencies, and other sectors is crucial in combating next-generation threats. This cooperative approach enhances the ability to detect, respond to, and mitigate sophisticated threats more effectively. Visa regularly works with international agencies of all sizes to bring cybercriminals to justice. In fact, Visa regularly works alongside law enforcement, including the US Department of Justice, FBI, Secret Service and Europol, to help identify and apprehend fraudsters and other criminals. Visa uses its AI and ML capabilities to identify patterns of fraud and cybercrime and works with law enforcement to find these bad actors and bring them to justice. ... Financial institutions face distinct vulnerabilities compared to other industries, particularly due to their role in critical infrastructure and financial ecosystems. As high-value targets, they manage large sums of money and sensitive information, making them prime targets for cybercriminals. Their operations involve complex and interconnected systems, often including legacy technologies and numerous third-party vendors, which can create security gaps. Regulatory and compliance challenges add another layer of complexity, requiring stringent data protection measures to avoid hefty fines and maintain customer trust.


Looking back to look ahead: from Deepfakes to DeepSeek what lies ahead in 2025

Enterprises increasingly turned to AI-native security solutions, employing continuous multi-factor authentication and identity verification tools. These technologies monitor behavioral patterns or other physical world signals to prove identity —innovations that can now help prevent incidents like the North Korean hiring scheme. However, hackers may now gain another inside route to enterprise security. The new breed of unregulated and offshore LLMs like DeepSeek creates new opportunities for attackers. In particular, using DeepSeek’s AI model gives attackers a powerful tool to better discover and take advantage of the cyber vulnerabilities of any organization. ... Deepfake technology continues to blur the lines between reality and fiction. ... Organizations must combat the increasing complexity of identity fraud, hackers, cyber security thieves, and data center poachers each year. In addition to all of the threats mentioned above, 2025 will bring an increasing need to address IoT and OT security issues, data protection in the third-party cloud and AI infrastructure, and the use of AI agents in the SOC. To help thwart this year’s cyber threats, CISOs and CTOs must work together, communicate often, and identify areas to minimize risks for deepfake fraud across identity, brand protection, and employee verification.


The Product Model and Agile

First, the product model is not new; it’s been out there for more than 20 years. So I have never argued that the product model is “the next new thing,” as I think that’s not true. Strong product companies have been following the product model for decades, but most companies around the world have only recently been exposed to this model, which is why so many people think of it as new. Second, while I know this irritates many people, today there are very different definitions of what it even means to be “Agile.” Some people consider SAFe as Agile. If that’s what you consider Agile, then I would say that Agile plays no part in the product model, as SAFe is pretty much the antithesis of the product model. This difference is often characterized today as “fake Agile” versus “real Agile.” And to be clear, if you’re running XP, or Kanban, or Scrum, or even none of the Agile ceremonies, yet you are consistently doing continuous deployment, then at least as far as I’m concerned, you’re running “real Agile.” Third, we should separate the principles of Agile from the various, mostly project management, processes that have been set up around those principles. ... Finally, it’s also important to point out that there is one Agile principle that might be good enough for custom or contract software work, but is not sufficient for commercial product work. This is the principle that “working software is the primary measure of progress.”


Next Generation Observability: An Architectural Introduction

It's always a challenge when creating architectural content, trying to capture real-world stories into a generic enough format to be useful without revealing any organization's confidential implementation details. We are basing these architectures on common customer adoption patterns. That's very different from most of the traditional marketing activities usually associated with generating content for the sole purpose of positioning products for solutions. When you're basing the content on actual execution in solution delivery, you're cutting out the marketing chuff. This observability architecture provides us with a way to map a solution using open-source technologies focusing on the integrations, structures, and interactions that have proven to work at scale. Where those might fail us at scale, we will provide other options. What's not included are vendor stories, which are normal in most marketing content. Those stories that, when it gets down to implementation crunch time, might not fully deliver on their promises. Let's look at the next-generation observability architecture and explore its value in helping our solution designs. The first step is always to clearly define what we are focusing on when we talk about the next-generation observability architecture.


AI SOC Analysts: Propelling SecOps into the future

Traditional, manual SOC processes already struggling to keep pace with existing threats are far outpaced by automated, AI-powered attacks. Adversaries are using AI to launch sophisticated and targeted attacks putting additional pressure on SOC teams. To defend effectively, organizations need AI solutions that can rapidly sort signals from noise and respond in real time. AI-generated phishing emails are now so realistic that users are more likely to engage with them, leaving analysts to untangle the aftermath—deciphering user actions and gauging exposure risk, often with incomplete context. ... The future of security operations lies in seamless collaboration between human expertise and AI efficiency. This synergy doesn't replace analysts but enhances their capabilities, enabling teams to operate more strategically. As threats grow in complexity and volume, this partnership ensures SOCs can stay agile, proactive, and effective. ... Triaging and investigating alerts has long been a manual, time-consuming process that strains SOC teams and increases risk. Prophet Security changes that. By leveraging cutting-edge AI, large language models, and advanced agent-based architectures, Prophet AI SOC Analyst automatically triages and investigates every alert with unmatched speed and accuracy.


Apple researchers reveal the secret sauce behind DeepSeek AI

The ability to use only some of the total parameters of a large language model and shut off the rest is an example of sparsity. That sparsity can have a major impact on how big or small the computing budget is for an AI model. AI researchers at Apple, in a report out last week, explain nicely how DeepSeek and similar approaches use sparsity to get better results for a given amount of computing power. Apple has no connection to DeepSeek, but Apple does its own AI research on a regular basis, and so the developments of outside companies such as DeepSeek are part of Apple's continued involvement in the AI research field, broadly speaking. In the paper, titled "Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models," posted on the arXiv pre-print server, lead author Samir Abnar of Apple and other Apple researchers, along with collaborator Harshay Shah of MIT, studied how performance varied as they exploited sparsity by turning off parts of the neural net. ... Abnar and team ask whether there's an "optimal" level for sparsity in DeepSeek and similar models, meaning, for a given amount of computing power, is there an optimal number of those neural weights to turn on or off? It turns out you can fully quantify sparsity as the percentage of all the neural weights you can shut down, with that percentage approaching but never equaling 100% of the neural net being "inactive."
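
The sparsity idea can be shown with a toy mixture-of-experts layer in Python; the shapes and routing here are simplified assumptions, not the DeepSeek or Apple setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_model, top_k = 8, 16, 2
experts = rng.standard_normal((n_experts, d_model, d_model))  # one weight matrix per expert
gate = rng.standard_normal((d_model, n_experts))              # routing weights

def moe_forward(x):
    scores = x @ gate                     # score each expert for this token
    active = np.argsort(scores)[-top_k:]  # keep only the top-k experts
    w = np.exp(scores[active]) / np.exp(scores[active]).sum()
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, active))

y = moe_forward(rng.standard_normal(d_model))
print(f"inactive expert parameters per token: {1 - top_k / n_experts:.0%}; output shape {y.shape}")
```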


What Data Literacy Looks Like in 2025

“The foundation of data literacy lies in having a basic understanding of data. Non-technical people need to master the basic concepts, terms, and types of data, and understand how data is collected and processed,” says Li. “Meanwhile, data literacy should also include familiarity with data analysis tools. ... “Organizations should also avoid the misconception that fostering GenAI literacy alone will help developing GenAI solutions. For this, companies need even greater investments in expert AI talent -- data scientists, machine learning engineers, data engineers, developers and AI engineers,” says Carlsson. “While GenAI literacy empowers individuals across the workforce, building transformative AI capabilities requires skilled teams to design, fine-tune and operationalize these solutions. Companies must address both.” ... “Data literacy in 2025 can’t just be about enabling employees to work with data. It needs to be about empowering them to drive real business value,” says Jain. “That’s how organizations will turn data into dollars and ensure their investments in technology and training actually pay off.” ... “Organizations can embed data literacy into daily operations and culture by making data-driven thinking a core part of every role,” says Choudhary.

Daily Tech Digest - December 29, 2024

AI agents may lead the next wave of cyberattacks

“Many organizations run a pen test on maybe an annual basis, but a lot of things change within an application or website in a year,” he said. “Traditional cybersecurity organizations within companies have not been built for constant self-penetration testing.” Stytch is attempting to improve upon what McGinley-Stempel said are weaknesses in popular authentication schemes such as the Completely Automated Public Turing test to tell Computers and Humans Apart, or captcha, a type of challenge-response test used to determine whether a user interacting with a system is a human or a bot. Captcha codes may require users to decipher scrambled letters or count the number of traffic lights in an image. ... “If you’re just going to fight machine learning models on the attacking side with ML models on the defensive side, you’re going to get into some bad probabilistic situations that are not going to necessarily be effective,” he said. Probabilistic security provides protections based on probabilities but assumes that absolute security can’t be guaranteed. Stytch is working on deterministic approaches such as fingerprinting, which gathers detailed information about a device or software based on known characteristics and can provide a higher level of certainty that the user is who they say they are.
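
As a rough illustration of the deterministic idea (not Stytch's actual implementation), a device fingerprint can be derived by hashing a stable, canonical set of device characteristics; the attribute names below are assumptions for the sketch.

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Hash a canonical, sorted view of device attributes into a stable ID."""
    canonical = json.dumps(attributes, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical attributes collected client-side (names are illustrative).
attrs = {
    "user_agent": "Mozilla/5.0 ...",
    "screen": "2560x1440",
    "timezone": "America/Los_Angeles",
    "webgl_renderer": "ANGLE (Apple M1)",
}
print(device_fingerprint(attrs))  # identical inputs always yield the same ID
```

The appeal over a probabilistic model is that the same inputs always produce the same answer, though real systems must also handle attributes that legitimately change over time.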


How businesses can ensure cloud uptime over the holidays

To ensure uptime during the holidays, best practice includes conducting pre-holiday stress tests to identify system vulnerabilities and configuring autoscaling to handle demand surges. Experts also recommend simulating failures through chaos engineering to expose weaknesses. Redundancy across regions or availability zones is essential, as is a well-documented incident response plan – with clear escalation paths – “as this allows a team to address problems quickly even with reduced staffing,” says VimalRaj Sampathkumar, technical head – UKI at software company ManageEngine. It’s all about understanding the business requirements and what your demand is going to look like, says Luan Hughes, chief information officer (CIO) at tech provider Telent, as this will vary from industry to industry. “When we talk about preparedness, we talk a lot about critical incident management and what happens when big things occur, but I think you need to have an appreciation of what your triggers are,” she says. ... It’s also important to focus on your people as much as your systems, she adds, noting that it’s imperative to understand your management processes, out-of-hours and on-call rotas, and how you action support if problems do arise.
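
A minimal sketch of a pre-holiday stress test might look like the following; the endpoint URL, request counts, and concurrency level are placeholders, not a recommended load profile.

```python
import concurrent.futures
import statistics
import time
import urllib.request

TARGET = "https://example.com/health"   # placeholder endpoint to exercise
REQUESTS = 200                          # illustrative volume
CONCURRENCY = 20

def timed_request(url: str) -> float:
    """Return the latency of a single GET request in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def run_load_test() -> None:
    with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = list(pool.map(timed_request, [TARGET] * REQUESTS))
    p95 = statistics.quantiles(latencies, n=20)[-1]  # rough 95th percentile
    print(f"median={statistics.median(latencies):.3f}s  p95={p95:.3f}s")

if __name__ == "__main__":
    run_load_test()
```

Running something like this before the holiday change freeze gives a latency baseline to compare against when autoscaling kicks in under real demand.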


Tech worker movements grow as threats of RTO, AI loom

While layoffs likely remain the most extreme threat to tech workers broadly, a return-to-office (RTO) mandate can be just as jarring for remote tech workers who are either unable to comply or else unwilling to give up the better work-life balance that comes with no commute. Advocates told Ars that RTO policies have pushed workers to join movements, while limited research suggests that companies risk losing top talents by implementing RTO policies. ... Other companies mandating RTO faced similar backlash from workers, who continued to question the logic driving the decision. One February study showed that RTO mandates don't make companies any more valuable but do make workers more miserable. And last month, Brian Elliott, an executive advisor who wrote a book about the benefits of flexible teams, noted that only one in three executives thinks RTO had "even a slight positive impact on productivity." But not every company drew a hard line the way that Amazon did. For example, Dell gave workers a choice to remain remote and accept they can never be eligible for promotions, or mark themselves as hybrid. Workers who refused the RTO said they valued their free time and admitted to looking for other job opportunities.


Navigating the cloud and AI landscape with a practical approach

When it comes to AI or genAI, just like everyone else, we started with use cases that we can control. These include content generation, sentiment analysis and related areas. As we explored these use cases and gained understanding, we started to dabble in other areas. For example, we have an exciting use case for cleaning up our data that leverages genAI as well as non-generative machine learning to help us identify inaccurate product descriptions or incorrect classifications and then clean them up and regenerate accurate, standardized descriptions. ... While this might be driving internal productivity, you also must think of it this way: As a distributor, at any one time, we deal with millions of parts. Our supplier partners keep sending us their price books, spec sheets and product information every quarter. So, having a group of people trying to go through all that data to find inaccuracies is a daunting, almost impossible, task. But with AI and genAI capabilities, we can clean up any inaccuracies far more quickly than humans could. Sometimes within as little as 24 hours. That helps us improve our ability to convert and drive business through an improved experience for our customers.
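
A simplified sketch of this kind of cleanup flow is shown below; the `llm_generate` callable is a placeholder for whichever model API an organization actually uses, and the heuristics are illustrative, not the distributor's real pipeline.

```python
import re

def looks_inaccurate(description: str) -> bool:
    """Cheap heuristic pre-filter before sending a record to the model."""
    too_short = len(description.split()) < 5
    has_placeholder = bool(re.search(r"\b(tbd|n/a|unknown)\b", description, re.I))
    return too_short or has_placeholder

def clean_description(part_number: str, raw: str, llm_generate) -> str:
    """Rewrite a suspect record via an LLM; llm_generate is a placeholder callable."""
    if not looks_inaccurate(raw):
        return raw
    prompt = (
        f"Rewrite this product description for part {part_number} in a "
        f"standardized format, correcting obvious errors:\n{raw}"
    )
    return llm_generate(prompt)

# Example with a stubbed model so the sketch runs without any external API.
fixed = clean_description("AB-123", "tbd widget",
                          lambda p: "Standardized widget description.")
print(fixed)
```

The point of the pre-filter is cost control: only records that look wrong are sent to the model, which matters when the catalog runs to millions of parts.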


When the System Fights Back: A Journey into Chaos Engineering

Enter chaos engineering — the art of deliberately creating disaster to build stronger systems. I’d read about Netflix’s Chaos Monkey, a tool designed to randomly kill servers in production, and I couldn’t help but admire the audacity. What if we could turn our system into a fighter — one that could take a punch and still come out swinging? ... Chaos engineering taught me more than I expected. It’s not just a technical exercise; it’s a mindset. It’s about questioning assumptions, confronting fears, and embracing failure as a teacher. We integrated chaos experiments into our CI/CD pipeline, turning them into regular tests. Post-mortems became celebrations of what we’d learned, rather than finger-pointing sessions. And our systems? Stronger than ever. But chaos engineering isn’t just about the tech. It’s about the culture you build around it. It’s about teaching your team to think like detectives, to dig into logs and metrics with curiosity instead of dread. It’s about laughing at the absurdity of breaking things on purpose and marveling at how much you learn when you do. So here’s my challenge to you: embrace the chaos. Whether you’re running a small app or a massive platform, the principles hold true. 
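
A minimal chaos experiment, in the spirit of (but far simpler than) Chaos Monkey, can be as small as a wrapper that injects random latency and failures into a dependency call; the failure rate and delay values below are illustrative.

```python
import random
import time

def with_chaos(func, failure_rate=0.2, max_delay_s=2.0):
    """Wrap a call with random latency and random failures for an experiment."""
    def wrapper(*args, **kwargs):
        time.sleep(random.uniform(0, max_delay_s))   # injected latency
        if random.random() < failure_rate:           # injected failure
            raise RuntimeError("chaos: simulated dependency outage")
        return func(*args, **kwargs)
    return wrapper

@with_chaos
def fetch_orders():
    return ["order-1", "order-2"]

# Run the experiment and observe how the caller copes with injected failures.
for attempt in range(5):
    try:
        print(attempt, fetch_orders())
    except RuntimeError as err:
        print(attempt, f"handled: {err}")
```

Wiring a wrapper like this into a CI/CD stage turns chaos from a one-off stunt into a regular test, which is exactly the shift described above.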


Enhancing Your Company’s DevEx With CI/CD Strategies

CI/CD pipelines are key to an engineering organization’s efficiency, used by up to 75% of software companies, with developers interacting with them daily. However, these CI/CD pipelines are often far from being the ideal tool to work with. A recent survey found that only 14% of practitioners go from code to production in less than a day, whereas high-performing teams should be able to deploy multiple times a day. ... Merging, building, deploying and running are all classic steps of a CI/CD pipeline, often handled by multiple tools. Some organizations have SREs that handle these functions, but not all developers are that lucky! In that case, if a developer wants to push code where a pipeline isn’t set up — which is increasingly common with the rise of microservices — they must assemble those rarely-used tools. However, this will disturb the flow state you wish your developers to remain in. ... Troubleshooting issues within a CI/CD pipeline can be challenging for developers because they lack visibility and information. These processes often operate as black boxes, running on servers that developers may not have direct access to, using software that is foreign to them. Consequently, developers frequently rely on DevOps engineers — often understaffed — to diagnose problems, leading to slow feedback loops.
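
For illustration only, here is a minimal sketch of those classic stages chained together with fail-fast behavior; the stage commands are placeholders and assume a Python project that happens to use pytest.

```python
import subprocess
import sys

# Placeholder commands for the classic pipeline stages (adapt per project).
STAGES = [
    ("build",  ["python", "-m", "compileall", "."]),
    ("test",   ["python", "-m", "pytest", "-q"]),
    ("deploy", ["echo", "deploying artifact..."]),
]

def run_pipeline() -> None:
    for name, cmd in STAGES:
        print(f"--- {name} ---")
        result = subprocess.run(cmd)
        if result.returncode != 0:   # fail fast and surface the failing stage
            print(f"stage '{name}' failed with exit code {result.returncode}")
            sys.exit(result.returncode)
    print("pipeline finished successfully")

if __name__ == "__main__":
    run_pipeline()
```

Even a toy runner like this makes the black-box problem visible: everything a developer needs to debug a failure should be printed at the failing stage, not buried on a server they cannot reach.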


How to Architect Software for a Greener Future

Code efficiency is something that the platforms and the languages should make easy for us. They should do the work, because that's their area of expertise, and we should just write code. Yes, of course, write efficient code, but it's not a silver bullet. What about data center efficiency, then? Surely, if we just made our data center hyper efficient, we wouldn't have to worry. We could just leave this problem to someone else. ... It requires you to do some thinking. It also requires you to orchestrate this in some type of way. One way to do this is autoscaling. Let's talk about autoscaling. We have the same chart as before, but with demand added. Autoscaling is the simple concept that when you have more demand, you use more resources, a bigger box or virtual machine, for example. The first direction is easy. We like to do this: "I think demand is going to go up, provision more, have more space. Yes, I feel safe. I feel secure now". Going the other way is a little scarier, yet it's just as important, especially for sustainability. Otherwise, we end up in the first scenario, where we are incorrectly sized for our resource use. Of course, this is a good tool to use if you have variability in demand.
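
A minimal autoscaling decision sketch, with illustrative thresholds, shows that the logic for scaling down is just as simple as scaling up; the hard part is committing to it.

```python
def desired_instances(current: int, cpu_utilisation: float,
                      scale_up_at: float = 0.75, scale_down_at: float = 0.30,
                      min_instances: int = 1, max_instances: int = 20) -> int:
    """Scale up under load, and just as importantly scale back down when idle."""
    if cpu_utilisation > scale_up_at:
        target = current + 1
    elif cpu_utilisation < scale_down_at:
        target = current - 1          # the scary-but-necessary direction
    else:
        target = current
    return max(min_instances, min(max_instances, target))

# Demand rises, then falls: the fleet should follow it in both directions.
fleet = 4
for utilisation in [0.85, 0.90, 0.60, 0.25, 0.20]:
    fleet = desired_instances(fleet, utilisation)
    print(f"utilisation={utilisation:.0%} -> instances={fleet}")
```

The thresholds and step size here are assumptions; the sustainability point is simply that the scale-down branch must exist and actually run.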


Tech Trends 2025 shines a light on the automation paradox – R&D World

The surge in AI workloads has prompted enterprises to invest in powerful GPUs and next-generation chips, reinventing data centers as strategic resources. ... As organizations race to tap progressively more sophisticated AI systems, hardware decisions once again become integral to resilience, efficiency and growth, while leading to more capable “edge” deployments closer to humans and not just machines. As Tech Trends 2025 noted, “personal computers embedded with AI chips are poised to supercharge knowledge workers by providing access to offline AI models while future-proofing technology infrastructure, reducing cloud computing costs, and enhancing data privacy.” ... Data is the bedrock of effective AI, which is why “bad inputs lead to worse outputs—in other words, garbage in, garbage squared,” as Deloitte’s 2024 State of Generative AI in the Enterprise Q3 report observes. Fully 75% of surveyed organizations have stepped up data-life-cycle investments because of AI. Layer a well-designed data framework beneath AI, and you might see near-magic; rely on half-baked or biased data, and you risk chaos. As a case in point, Vancouver-based LIFT Impact Partners fine-tuned its AI assistants on focused, domain-specific data to help Canadian immigrants process paperwork—a far cry from scraping the open internet and hoping for the best.


What Happens to Relicensed Open Source Projects and Their Forks?

Several companies have relicensed their open source projects in the past few years, so the CHAOSS project decided to look at how an open source project’s organizational dynamics evolve after relicensing, both within the original project and its fork. Our research compares and contrasts data from three case studies of projects that were forked after relicensing: Elasticsearch with fork OpenSearch, Redis with fork Valkey, and Terraform with fork OpenTofu. These relicensed projects and their forks represent three scenarios that shed light on this topic in slightly different ways. ... OpenSearch was forked from Elasticsearch on April 12, 2021, under the Apache 2.0 license, by the Amazon Web Services (AWS) team so that it could continue to offer this service to its customers. OpenSearch was owned by Amazon until September 16, 2024, when it transferred the project to the Linux Foundation. ... OpenTofu was forked from Terraform on Aug. 25, 2023, by a group of users as a Linux Foundation project under the MPL 2.0. These users were starting from scratch with the codebase since no contributors to the OpenTofu repository had previously contributed to Terraform.


Setting up a Security Operations Center (SOC) for Small Businesses

In today's digital age, security is not optional for any business, irrespective of its size. Small businesses face the same increasing cyber threats, making it essential to have robust security measures in place. A SOC is a dedicated team responsible for monitoring, detecting, and responding to cybersecurity incidents in real time. It acts as the frontline defense against cyber threats, helping to safeguard your business's data, reputation, and operations. By establishing a SOC, you can proactively address security risks and enhance your overall cybersecurity posture. The cost of setting up a SOC for a small business may be prohibitive, in which case the business may engage Managed Service Providers for all or part of the services. ... Establishing clear, well-defined processes is vital for the smooth functioning of your SOC. The NIST Cybersecurity Framework can be a good fit for businesses of any size; define the processes that are essential and relevant given the size, threat landscape, and risk tolerance of the business. ... Continuous training and development are essential for keeping your SOC team prepared to handle evolving threats. Offer regular training sessions, certifications, and workshops to enhance their skills and knowledge.



Quote for the day:

"Hardships often prepare ordinary people for an extraordinary destiny." -- C.S. Lewis

Daily Tech Digest - December 28, 2024

Forcing the SOC to change its approach to detection

Make no mistake, we are not talking about the application of AI in the usual sense when it comes to threat detection. Up until now, AI has seen Large Language Models (LLMs) used to do little more than summarise findings for reporting purposes in incident response. Instead, we are referring to the application of AI in its truer and broader sense, i.e. via machine learning, agents, graphs, hypergraphs and other approaches – and these promise to make detection both more precise and intelligible. Hypergraphs give us the power to connect hundreds of observations together to form likely chains of events. ... The end result is that the security analyst is no longer perpetually caught in firefighting mode. Rather than having to respond to hundreds of alerts a day, the analyst can use the hypergraphs and AI to detect and string together long chains of alerts that share commonalities and, in so doing, gain a complete picture of the threat. Realistically, adopting such an approach is expected to see alert volumes decline by up to 90 per cent. But it doesn’t end there. By applying machine learning to the chains of events it will be possible to prioritise response, identifying which threats require immediate triage.
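
A heavily simplified sketch of the chaining idea (a stand-in for the hypergraph approach, not any vendor's actual algorithm) groups alerts that share common entities and triages the longest chains first; the alert fields are assumptions.

```python
from collections import defaultdict

# Toy alerts; in practice these would come from the SIEM/EDR pipeline.
alerts = [
    {"id": 1, "host": "srv-01", "user": "alice", "type": "phishing_click"},
    {"id": 2, "host": "srv-01", "user": "alice", "type": "credential_use"},
    {"id": 3, "host": "srv-02", "user": "bob",   "type": "port_scan"},
    {"id": 4, "host": "srv-01", "user": "alice", "type": "lateral_movement"},
]

def chain_alerts(alerts, keys=("host", "user")):
    """Group alerts that share the same entity tuple into candidate chains."""
    chains = defaultdict(list)
    for alert in alerts:
        chains[tuple(alert[k] for k in keys)].append(alert["id"])
    return dict(chains)

# Longer chains touching the same entities get triaged first (crude priority).
for entities, ids in sorted(chain_alerts(alerts).items(), key=lambda kv: -len(kv[1])):
    print(entities, "->", ids)
```

Real hypergraph approaches connect far richer relationships than a shared host and user, but even this toy grouping shows how hundreds of raw alerts can collapse into a handful of chains worth investigating.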


Sole Source vs. Single Source Vendor Management

A sole source is a vendor that is the only provider of a specific product or service to your company. This vendor makes a specific widget or service that is custom-tailored to your company’s needs. If there is an event at this sole source provider, your company can only wait until the event has been resolved; no other vendor can produce your product or service quickly. They are the sole source, on a critical path to your operations. From an oversight and assessment perspective, this relationship makes it difficult to mitigate risks to your company. With sole source companies, we as practitioners must do a deeper dive from a risk assessment perspective. From a vendor audit perspective, we need to examine in more detail how robust their business continuity, disaster recovery, and crisis management programs are. ... A single source provider is a vendor you have chosen to provide a product or service, even though other providers could supply the same product or service. An example of a single source provider is a payment processing company: there are many to choose from, but you chose one specific company to do business with. Moving to a new single source provider can be a daunting task that involves a new RFP process, process integration, assessments of their business continuity program, and so on.


Central Africa needs traction on financial inclusion to advance economic growth

Beyond the infrastructure, financial inclusion would see a leap forward in CEMAC if the right policies and platforms exist. “The number two thing is that you have to have the right policies in place which are going to establish what would constitute acceptable identity authentication for identity transactions. So, be it for onboarding or identity transactions, you have to have a policy. Saying that we’re going to do biometric authentication for every transaction, no matter what value it is and what context it is, doesn’t make any sense,” Atick holds. “You have to have a policy that is basically a risk-based policy. And we have lots of experience in that. Some countries started with their own policies, and over time, they started to understand it. Luckily, there is a lot of knowledge now that we can share on this point. This is why we’re doing the Financial Inclusion Symposium at the ID4Africa Annual General Meeting next year [in Addis Ababa], because these countries are going to share their knowledge and experiences.” “The symposium at the AGM will basically be on digital identity and finance. It’s going to focus on the stages of financial inclusion, and what are the risk-based policies countries must put in place to achieve the desired outcome, which is a low-cost, high-robustness and trustworthy ecosystem that enables anybody to enter the system and to conduct transactions securely.”


2025 Data Outlook: Strategic Insights for the Road Ahead

By embracing localised data processing, companies can turn compliance into an advantage, driving innovations such as data barter markets and sovereignty-specific data products. Data sovereignty isn’t merely a regulatory checkbox—it’s about Citizen Data Rights. With most consumer data being unstructured and often ignored, organisations can no longer afford complacency. Prioritising unstructured data management will be crucial as personal information needs to be identified, cataloged, and protected at a granular level from inception through intelligent, policy-based automation. ... Individuals are gaining more control over their personal information and expect transparency, control, and digital trust from organisations. As a result, businesses will shift to self-service data management, enabling data stewards across departments to actively participate in privacy practices. This evolution moves privacy management out of IT silos, embedding it into daily operations across the organisation. Organisations that embrace this change will implement a “Data Democracy by Design” approach, incorporating self-service privacy dashboards, personalised data management workflows, and Role-Based Access Control (RBAC) for data stewards. 
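
As a minimal illustration of RBAC for data stewards (the roles and permissions below are assumptions, not a reference design), access checks reduce to a role-to-permission lookup.

```python
# Illustrative role-to-permission mapping for self-service data management.
ROLE_PERMISSIONS = {
    "data_steward": {"view_pii", "approve_deletion", "export_catalog"},
    "analyst":      {"view_anonymized"},
    "admin":        {"view_pii", "approve_deletion", "export_catalog", "manage_roles"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Check whether the given role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("data_steward", "view_pii"))      # True
print(is_allowed("analyst", "approve_deletion"))   # False
```

The value of keeping the mapping explicit is that privacy controls stop being an IT-only concern: stewards in any department can see, and request changes to, what their role allows.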


Defining & Defying Cybersecurity Staff Burnout

According to the van Dam article, burnout happens when an employee buries their experience of chronic stress for years. The people who burn out are often formerly great performers, perfectionists who exhibit perseverance. But if the person perseveres in a situation where they don't have control, they can experience the kind of morale-killing stress that, left unaddressed for months and years, leads to burnout. In such cases, "perseverance is not adaptive anymore and individuals should shift to other coping strategies like asking for social support and reflecting on one's situation and feelings," the article read. ... Employees sometimes scoff at the wellness programs companies put out as an attempt to keep people healthy. "Most 'corporate' solutions — use this app! attend this webinar! — felt juvenile and unhelpful," Eden says. And it does seem like many solutions fall into the same quick-fix category as home improvement hacks or dump dinner recipes. Christina Maslach's scholarly work attributed work stress to six main sources: workload, values, reward, control, fairness, and community. An even quicker assessment is promised by the Matches Measure from Cindy Muir Zapata. 


Revolutionizing Cloud Security for Future Threats

Is it possible that embracing Non-Human Identities can help us bridge the resource gap in cybersecurity? The answer is a definite yes. The cybersecurity field is chronically understaffed and for firms to successfully safeguard their digital assets, they must be equipped to handle an infinite number of parallel tasks. This demands a new breed of solutions such as NHIs and Secrets Security Management that offer automation at a scale hitherto unseen. NHIs have the potential to take over tedious tasks like secret rotation, identity lifecycle management, and security compliance management. By automating these tasks, NHIs free up the cybersecurity workforce to concentrate on more strategic initiatives, thereby improving the overall efficiency of your security operations. Moreover, through AI-enhanced NHI Management platforms, we can provide better insights into system vulnerabilities and usage patterns, considerably improving context-aware security. Can the concept of Non-Human Identities extend its relevance beyond the IT sector? ... From healthcare institutions safeguarding sensitive patient data, financial services firms securing transactional data, travel companies protecting customer data, to DevOps teams looking to maintain the integrity of their codebases, the strategic relevance of NHIs is widespread.
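
A minimal sketch of automated secret rotation for non-human identities might look like the following; the 30-day policy, the toy inventory, and the idea of writing the new secret back to a vault are all assumptions for illustration.

```python
import secrets
from datetime import datetime, timedelta, timezone

MAX_SECRET_AGE = timedelta(days=30)   # illustrative rotation policy

# Toy inventory of non-human identities and when each secret was last rotated.
nhi_inventory = {
    "ci-deploy-bot":   datetime.now(timezone.utc) - timedelta(days=45),
    "billing-service": datetime.now(timezone.utc) - timedelta(days=5),
}

def rotate_if_stale(last_rotated):
    """Generate a replacement secret when the current one is past its max age."""
    if datetime.now(timezone.utc) - last_rotated > MAX_SECRET_AGE:
        return secrets.token_urlsafe(32)   # in practice, written back to a vault
    return None

for name, rotated_at in nhi_inventory.items():
    print(name, "rotated" if rotate_if_stale(rotated_at) else "still fresh")
```

Automating even this small loop removes one recurring manual task per machine identity, which is exactly the kind of scale argument the passage makes.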


Digital Transformation: Making Information Work for You

Digital transformation is changing the organization from one state to another through the use of electronic devices that leverage information. Oftentimes, this entails process improvement and process reengineering to convert business interactions from human-to-human to human-to-computer-to-human. By introducing the element of the computer into human-to-human transactions, there is a digital breadcrumb left behind. This digital record of the transaction is important in making digital transformations successful and is the key to how analytics can enable more successful digital transformations. In a human-to-human interaction, information is transferred from one party to another, but it generally stops there. With the introduction of the digital element in the middle, the data is captured, stored, and available for analysis, dissemination, and amplification. This is where data analytics shines. If an organization stops with data storage, they are missing the lion’s share of the potential value of a digital transformation initiative. Organizations that focus only on collecting data from all their transactions and sinking this into a data lake often find that their efforts are in vain. They end up with a data swamp where data goes to die and never fully realize its potential value. 


Secure and Simplify SD-Branch Networks

The traditional WAN relies on expensive MPLS connectivity and a hub-and-spoke architecture that backhauls all traffic through the corporate data centre for centralized security checks. This approach creates bottlenecks that interfere with network performance and reliability. In addition to users demanding fast and reliable access to resources, IoT applications need reliable WAN connections to leverage cloud-based management and big data repositories. ... To reduce complexity and appliance sprawl, SD-Branch consolidates networking and security capabilities into a single solution that provides seamless protection of distributed environments. It covers all critical branch edges, from the WAN edge to the branch access layer to a full spectrum of endpoint devices. 


Breaking up is hard to do: Chunking in RAG applications

The most basic is to chunk text into fixed sizes. This works for fairly homogeneous datasets that use content of similar formats and sizes, like news articles or blog posts. It’s the cheapest method in terms of the amount of compute you’ll need, but it doesn’t take into account the context of the content that you’re chunking. That might not matter for your use case, but it might end up mattering a lot. You could also use random chunk sizes if your dataset is a non-homogeneous collection of multiple document types. This approach can potentially capture a wider variety of semantic contexts and topics without relying on the conventions of any given document type. Random chunks are a gamble, though, as you might end up breaking content across sentences and paragraphs, leading to meaningless chunks of text. For both of these types, you can apply the chunking method over sliding windows; that is, instead of starting new chunks at the end of the previous chunk, new chunks overlap the content of the previous one and contain part of it. This can better capture the context around the edges of each chunk and increase the semantic relevance of your overall system. The tradeoff is greater storage requirements and the possibility of storing redundant information.
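
A minimal word-based sketch of fixed-size chunking with a sliding-window overlap (chunk and overlap sizes are illustrative) looks like this:

```python
def chunk_with_overlap(text: str, chunk_size: int = 200, overlap: int = 50):
    """Split text into fixed-size word chunks where consecutive chunks overlap."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunk = words[start:start + chunk_size]
        if chunk:
            chunks.append(" ".join(chunk))
        if start + chunk_size >= len(words):
            break
    return chunks

doc = "word " * 1000
pieces = chunk_with_overlap(doc, chunk_size=200, overlap=50)
print(len(pieces), "chunks; each shares roughly 50 words with its neighbour")
```

Tuning `chunk_size` and `overlap` is the storage-versus-context tradeoff described above: more overlap preserves context at chunk edges but stores the same words more than once.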


What is quantum supremacy?

A definitive achievement of quantum supremacy will require either a significant reduction in quantum hardware's error rates or a better theoretical understanding of what kind of noise classical approaches can exploit to help simulate the behavior of error-prone quantum computers, Fefferman said. But this back-and-forth between quantum and classical approaches is helping push the field forwards, he added, creating a virtuous cycle that is helping quantum hardware developers understand where they need to improve. "Because of this cycle, the experiments have improved dramatically," Fefferman said. "And as a theorist coming up with these classical algorithms, I hope that eventually, I'm not able to do it anymore." While it's uncertain whether quantum supremacy has already been reached, it's clear that we are on the cusp of it, Benjamin said. But it's important to remember that reaching this milestone would be a largely academic and symbolic achievement, as the problems being tackled are of no practical use. "We're at that threshold, roughly speaking, but it isn't an interesting threshold, because on the other side of it, nothing magic happens," Benjamin said. ... That's why many in the field are refocusing their efforts on a new goal: demonstrating "quantum utility," or the ability to show a significant speedup over classical computers on a practically useful problem.


Shift left security — Good intentions, poor execution, and ways to fix it

One of the first steps is changing the way security is integrated into development. Instead of focusing on a “gotcha”, after-the-fact approach, we need security to assist us as early as possible in the process: as we write the code. By guiding us as we’re still in ‘work-in-progress’ mode with our code, security can adopt a positive coaching and helping stance, nudging us to correct issues before they become problems and go clutter our backlog. ... The security tools we use need to catch vulnerabilities early enough so that nobody circles back to fix boomerang issues later. Very much in line with my previous point, detecting and fixing vulnerabilities as we code saves time and preserves focus. This also reduces the back-and-forth in peer reviews, making the entire process smoother and more efficient. By embedding security more deeply into the development workflow, we can address security issues without disrupting productivity. ... When it comes to security training, we need a more focused approach. Developers don’t need to become experts in every aspect of code security, but we do need to be equipped with the knowledge that’s directly relevant to the work we’re doing, when we’re doing it — as we code. Instead of broad, one-size-fits-all training programs, let’s focus on addressing specific knowledge gaps we personally have. 
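
One way to put this into practice, assuming a Python codebase and the open source Bandit scanner (an example choice, not a prescription), is a small pre-commit hook that scans only the files staged for the current commit:

```python
#!/usr/bin/env python3
"""Minimal pre-commit hook: block the commit if the security scan finds issues."""
import subprocess
import sys

def main() -> int:
    # Only scan files staged for this commit to keep feedback fast.
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    py_files = [f for f in staged if f.endswith(".py")]
    if not py_files:
        return 0
    # Bandit is one example scanner; swap in whatever your team standardizes on.
    result = subprocess.run(["bandit", "-q", *py_files])
    if result.returncode != 0:
        print("Security findings detected; fix them before committing.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```

Catching the issue at commit time keeps it out of the backlog entirely, which is the "fix as you code" stance the author argues for.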



Quote for the day:

“Whenever you see a successful person, you only see the public glories, never the private sacrifices to reach them.” -- Vaibhav Shah