
Daily Tech Digest - December 20, 2025


Quote for the day:

"The bad news is time flies, The good news is you're the pilot." -- Elizabeth McCormick



Europe’s AI Challenge Runs Deeper Than Regulation

European firms may welcome a lessening of their regulatory burden. But Europe's problem isn’t merely regulatory drag. There is also a structural gulf between what modern AI development requires and what Europe currently has the capacity to deliver. The Omnibus, helpful as it may be for legal alignment, cannot close those gaps. ... Europe has only a handful of companies, such as Aleph Alpha and Mistral, developing large-scale generative AI models domestically. Even these firms face steep structural disadvantages. A European Commission analysis has warned that such companies "require massive investment to avoid losing the race to U.S. competitors," while acknowledging that European capital markets "do not meet this need, forcing European firms to seek funding abroad." The result is a persistent leakage of ownership, control and strategic direction at precisely the moment scale matters most. ... This capital asymmetry produces powerful second-order effects. It determines who can absorb the high costs of large-scale model training, sustain loss-leading platform expansion and iterate continuously at the frontier of AI development. Over time, these dynamics create self-reinforcing structural advantages for capital-rich ecosystems. These advantages compound and remain largely beyond the corrective reach of regulation. These gaps are not regulatory problems.


How to Pivot When Digital Ambitions Crash into Operational Realities

Transformation usually begins with ambition. Leaders imagine a future where the bank operates more efficiently and interacts with customers the way modern platforms do. But the more I speak with people running these programs, the more I see that banks are trying to build the future without fully understanding the present. They push forward with new digital products, new interfaces, new journeys, while the actual work happening across branches, operations centers and back offices remains something of a mystery, even to the teams responsible for changing it. ... what’s less widely discussed is that banks do not fail because change is impossible; they fail because too much of the real work remains invisible. Many institutions still rely on assumptions about how processes run, assumptions based on documentation that no longer reflects reality. And when a transformation is built on assumptions, the project begins to drift. What banks need is an honest picture of their operational baseline. Once leaders see how their organization works today (not how it was designed years ago and not how it is described in flowcharts) the conversation changes. Priorities become clearer. Bottlenecks reveal themselves. Entire categories of work turn out to be more manual than anyone expected. And what looked like a technology problem often turns out to be a process problem that has been accumulating for years.


Six Lessons Learned Building RAG Systems in Production

Something ships quickly, the demo looks fine, leadership is satisfied. Then real users start asking real questions. The answers are vague. Sometimes wrong. Occasionally confident and completely nonsensical. That’s usually the end of it. Trust disappears fast, and once users decide a system can’t be trusted, they don’t keep checking back to see if it has improved or give it a second chance. They simply stop using it. In this case, the real failure is not technical but human. People will tolerate slow tools and clunky interfaces. What they won’t tolerate is being misled. When a system gives you the wrong answer with confidence, it feels deceptive. Recovering from that, even after months of work, is extremely hard. ... Many teams rush their RAG development, and to be honest, a simple MVP can be achieved very quickly if we aren’t focused on performance. But RAG is not a quick prototype; it’s a huge infrastructure project. The moment you start stressing your system with real evolving data in production, the weaknesses in your pipeline will begin to surface. ... When we talk about data preparation, we’re not just talking about clean data; we’re talking about meaningful context. That brings us to chunking. Chunking refers to breaking down a source document, perhaps a PDF or internal document, into smaller chunks before encoding it into vector form and storing it within a database.
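To make the chunking step concrete, here is a minimal sketch of fixed-size chunking with overlap. The character-based sizes are illustrative assumptions; production pipelines typically chunk by tokens, sentences, or document structure instead.

```python
def chunk_text(text, chunk_size=500, overlap=100):
    """Split text into overlapping character-based chunks.

    Overlap keeps context that straddles a chunk boundary retrievable
    from at least one chunk. chunk_size and overlap are illustrative
    defaults, not tuned values.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Each chunk would then be embedded and stored in the vector database; the chunking strategy, far more than the embedding model, often decides whether retrieval returns meaningful context.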


Enterprise reactions to cloud and internet outages

Those in the c-suite, not surprisingly, “examined” or “explored” or “assessed” their companies’ vulnerability to cloud and internet problems after the news. So what did they find? Are enterprises fleeing the cloud they now see as risky instead of protective? ... All the enterprises thought the dire comments they’d read about cloud abandonment were exaggerations, or reflected an incomplete understanding of the cloud and alternatives to cloud dependence. And the internet? “What’s our alternative there?” one executive asked me. ... The enterprise experts pointed out that the network piece of this cake had special challenges. It’s critical to keep the two other layers separated, at least to ensure that nothing from the user-facing layer could see the resource layer, which of course would be supporting other applications and, in the case of the cloud, other companies. It’s also critical in exposing the features of the cloud to customers. The network layer, of course, includes the Domain Name System (DNS) that converts our familiar URLs to actual IP addresses for traffic routing; it’s the system that played a key role in the AWS problem, and as I’ve noted, it’s run by a different team. ... Enterprises don’t see the notion of a combined team or an overlay, every-layer team, as the solution. None of the enterprises had a view of what would be needed to fix the internet, and only a quarter of even the virtualization experts expressed an opinion on what the answer is for the cloud.
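The name-to-address step that sat at the center of the AWS incident is easy to see from a few lines of code. A minimal sketch using Python's standard-library resolver (the hostname in the usage note is illustrative):

```python
import socket

def resolve(hostname):
    """Resolve a hostname to its IPv4 addresses via the system resolver.

    This is the same name-to-address lookup that sits on the critical
    path of every cloud request; if it fails, a service is unreachable
    even when its servers are healthy.
    """
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, (address, port)).
    return sorted({info[4][0] for info in infos})
```

For example, `resolve("example.com")` would return the site's public addresses; the point is that every layer above depends on this one lookup succeeding.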


Offering more AI tools can't guarantee better adoption -- so what can?

After multiple years of relentless hype around AI and its promises, it's no surprise that companies have high expectations for their AI investments. But the measurable results have left a lot to be desired, with studies repeatedly showing most organizations aren't seeing the ROI they'd hoped for; in a Deloitte research report from October, only 10% of 1,854 respondents using agentic AI said they were realizing significant ROI on that investment, despite 85% increasing their spend on AI over the last 12 months. ... At face value, it seems obvious that the IT leadership team should be responsible for all things AI, since it is a technical product deployed at scale. In practice, this approach creates unnecessary hurdles to effective adoption, isolating technical decision-making from daily department workflows. And since many AI deployments are focused on equipping the workforce with new capabilities, excluding the human resources department is likely to constrain the effort. ... "If you focus on the tool, it's going to become procedural," Weed-Schertzer warned. "'Here's how to log in. This is your account.'" While technically useful, she added that she sees the biggest rewards coming from training employees on specific applications and having managers demonstrate the utility of an AI program for their teams, so that workers have a clear model from which to work. Seeing the utility is what will prompt long-term adoption, as opposed to a demo of basic tool functionality.


Why Cybersecurity Awareness Month Should Include Personal Privacy

Cybersecurity awareness campaigns tend to focus on email hygiene, secure logins, and network defense. These are key, but the boundary between internal threats and external exposure isn’t clear. An executive’s phone number leaked on a data broker’s site can become the first step in a targeted spear-phishing attack. A social media post about a trip can tip off a burglar. Forward-thinking entities know this. They tie personal privacy to enterprise risk. They integrate privacy checks into executive protection, threat monitoring, and insider-risk programs. Employees’ digital identities are treated as part of the attack surface. ... Removing data from your social profiles is only half the fight. The real struggle lives in data broker databases. These brokers compile, package, and resell personal data (addresses, phone numbers, demographics), feeding dozens of downstream systems. Together, they extend your exposure into places you never directly visited. Most individuals never see their names there, never ask for removal, and never know about the pathways. Because every broker has its own rules, opt-outs require patience and effort. One broker demands forms, another wants ID, and a third ignores requests entirely. ... Awareness without action fades. However, when employees internalize privacy practices, they extend protection during their off hours and weekends. That’s when bad actors strike, during perceived downtime.


How CIOs can break free from reactive IT

Invisible IT is emerging as a practical way for CIOs to minimize disruption and improve the performance of the digital workplace. At its simplest, it’s an approach that prevents many issues from becoming problems in the first place, reducing the need for users to raise tickets or wait for help. As ecosystems scale, the gap between what organizations expect and what legacy workflows can deliver continues to widen. Lenovo’s latest research highlights invisible IT as a strategic shift toward proactive, personalized support that strengthens the performance of the digital workplace. ... In a workplace where devices, applications and services operate across different locations and conditions, this approach leaves CIOs without the early signals needed to prevent interruption. Faults often emerge gradually through performance drift or configuration inconsistencies, but traditional workflows only respond once the impact is visible to users. ... Invisible IT draws on AI to interpret device health, behavioral patterns and performance signals across the organization, giving CIOs earlier awareness of degradation and emerging risks. ... Invisible IT gives CIOs a clearer path to shaping a digital workplace that strengthens productivity and resilience by design. By shifting from user-reported issues to signal-driven insight, CIOs gain earlier visibility into risks and greater control over how disruptions are managed.
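The "performance drift" the article describes can be caught with a simple rolling-baseline comparison. A toy sketch, assuming a single numeric device metric (say, boot time in seconds); the window sizes and 25% threshold are illustrative assumptions, not tuned values:

```python
from collections import deque

class DriftDetector:
    """Flag gradual degradation in a device metric before users complain.

    Compares the average of recent samples against a longer-term
    baseline; real telemetry platforms use far richer models, but the
    shape of the check is the same.
    """
    def __init__(self, baseline_window=20, recent_window=5, threshold=1.25):
        self.baseline = deque(maxlen=baseline_window)
        self.recent = deque(maxlen=recent_window)
        self.threshold = threshold

    def observe(self, value):
        self.baseline.append(value)
        self.recent.append(value)

    def drifting(self):
        if len(self.baseline) < self.baseline.maxlen:
            return False  # not enough history to judge yet
        baseline_avg = sum(self.baseline) / len(self.baseline)
        recent_avg = sum(self.recent) / len(self.recent)
        return recent_avg > baseline_avg * self.threshold
```

The point of the sketch is the shift in posture: the signal fires from telemetry, before any user files a ticket.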


AI isn’t one system, and your threat model shouldn’t be either

The right way to partition a modern AI stack for threat modeling is not to treat “AI systems” as a monolithic risk category. Instead, we should return to security fundamentals and segment the stack by what the system does, how it is used, the sensitivity of the data it touches, and the impact its failure or breach could have. This distinguishes low-risk internal productivity tools from models embedded in mission-critical workflows or those representing core intellectual property, and ensures AI is evaluated in context rather than by label. ... Threat modeling is a driver of higher quality that extends beyond security, and the best way to convey this to business leaders is through analogies rooted in their own domain. For example, in a car dealership, no one would allow a new salesperson to sign off on an 80 percent discount. The general manager instantly understands why that safeguard exists because it protects revenue, reputation, and operational stability. ... Tool calling patterns are one key area to incorporate into threat modeling. Most modern LLM implementations rely on external tool calls, such as web search or internal MCPs (some server-side, some client-side). Unless these are tightly defined and constrained, they can drive the model to behave in unexpected or partially malicious ways. Changes in the frequency, sequence, or parameters of tool calls can indicate misuse, model confusion, or an attempted escalation path.
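A minimal sketch of the "tightly defined and constrained" idea: audit each model turn's tool calls against an allowlist and a per-tool budget. The tool names and limits here are hypothetical, chosen only to illustrate the pattern:

```python
from collections import Counter

ALLOWED_TOOLS = {"web_search", "doc_lookup"}  # hypothetical tool names
MAX_CALLS_PER_TURN = {"web_search": 3, "doc_lookup": 10}  # illustrative limits

def audit_tool_calls(calls):
    """Return a list of violations for one model turn.

    `calls` is the sequence of tool names the model requested; anything
    outside the allowlist, or over its per-turn budget, is flagged for
    review rather than executed.
    """
    violations = []
    for tool, n in Counter(calls).items():
        if tool not in ALLOWED_TOOLS:
            violations.append(f"unknown tool: {tool}")
        elif n > MAX_CALLS_PER_TURN[tool]:
            violations.append(f"rate exceeded: {tool} called {n}x")
    return violations
```

In practice the same audit log also feeds anomaly detection, since shifts in call frequency or sequence are themselves the signal the author describes.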


The Convergence Challenge: Architecture, Risk, and the Urgency for Assurance

If there was a single topic that drew the sharpest concern, it was the way organizations are adopting AI. Hayes described AI as a new threat vector that many companies have rushed into without architectural planning or governance. In his view, the industry is creating a new category of debt that may exceed what already exists in legacy systems. “AI is being adopted haphazardly in many organizations,” Hayes said. Marketing teams connect tools to mail systems. Staff paste corporate content into public models. Guardrails are light or nonexistent. In many cases no one has defined how to test models, how to check for poisoning, or how to verify that outputs remain reliable over time. Hayes argued that the field has done a poor job securing software in general, and is now repeating the same mistakes with AI, only faster. The difference is that AI systems can act and adapt at a pace human attackers cannot match. Swanson added that boards and senior leaders still struggle with their role in major technology shifts. They do not want to manage details, but they are responsible for strategy and oversight. With AI, as with earlier changes, many boards have not yet decided how to oversee investments that fundamentally reshape business operations. Ominski put a fine point on it. “We are moving into risks we have not fully imagined,” he said. “The pace alone forces us to rethink how we govern technology.”


AI Coding Agents and Domain-Specific Languages: Challenges and Practical Mitigation Strategies

DSLs are deliberately narrow, domain-targeted languages with unique syntax rules, semantics, and execution models. They often have little representation in public datasets, evolve quickly, and include concepts that resemble no mainstream programming language. For these reasons, DSLs expose the fundamental weaknesses of large language models when used as code generators. ... Many DSLs, especially new ones, lack mature Language Server Protocol (LSP) support, which provides syntax and error highlighting in the code editor. Without structured domain data for Copilot to query, the model cannot check its guesses against a canonical schema. ... Because the problem stems from missing knowledge and structure, the solution is to supply knowledge and impose structure. Copilot’s extensibility features (particularly Custom Agents, project-level instruction files, and the Model Context Protocol, or MCP) make this possible. ... Structure matters: AI systems chunk documentation for retrieval. Keep related information proximate – constraints mentioned three paragraphs after a concept may never appear in the same retrieval context. Each section should be self-contained with necessary context included. ... AI coding agents are powerful, but they are pattern-driven tools. DSLs, by definition, lack the broad pattern exposure that enables LLMs to behave reliably.

Daily Tech Digest - June 12, 2025


Quote for the day:

"It takes a lot of courage to show your dreams to someone else." -- Erma Bombeck


Tech Burnout: CIOs Might Be Making It Worse

“CIOs often unintentionally worsen burnout by underestimating the human toll of constant context switching, unclear priorities, and always-on availability. In the rush to stay competitive with AI-driven initiatives, teams are pushed to deliver faster without enough buffer for testing, reflection, or recovery,” Marceles adds. In the end, it’s the panic surrounding AI adoption, and not the technology itself, that’s accelerating burnout. The panic is running hot and high, surpassing anything CIOs and IT members think of as normal. “The pressure to adopt AI everywhere is real, and CIOs are feeling it from every angle -- executives, investors, competitors. But when that pressure gets passed down as back-to-back initiatives with no breathing room, it fractures the team. Engineers get pulled into AI pilots without proper training. IT staff are asked to maintain legacy systems while onboarding new automation tools. And all of it happens under the expectation that this is just ‘the new normal,’” says Cahyo Subroto, founder of MrScraper, a data scraping tool. ... “What gets lost is the human capacity behind the tech. We don’t talk enough about how context-switching and unclear priorities drain cognitive energy. When everything is labeled critical, people lose the ability to focus. Productivity drops. Morale sinks. And burnout sets in quietly, until key people start leaving,” Subroto says.


Asset sprawl, siloed data and CloudQuery’s search for unified cloud governance

“The biggest challenge with existing tools is that they’re siloed — one for security, one for cost, one for asset inventory — making it hard to get a unified view across domains,” CQ founder Yevgeny Pats told VentureBeat. “Even simple questions like ‘What EBS volume is attached to an EC2 that is turned off?’ are hard to answer without stitching together multiple tools.” ... Taking a developer-first approach is critical, said Pats, because developers are ultimately the ones building, operating and securing today’s cloud infrastructure. Still, many cloud visibility tools were built for top-down governance, not for the people actually in the trenches. “When you put developers first, with accessible data, flexible APIs and native language like SQL, you empower them to move faster, catch issues earlier and build more securely,” he said. Customers are finding ways to use CloudQuery beyond asset inventory. ... “Having a fully serverless solution was an important requirement,” Hexagon cloud governance and FinOps expert Peter Figueiredo and CloudQuery director of engineering Herman Schaaf wrote in a blog post. “This decision brought lots of benefits since there is no need for time-consuming updates and virtually zero maintenance.”
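Pats's example question becomes a single join once assets are synced into one database. A sketch using in-memory SQLite as a stand-in for the backing store; the table and column names mimic a CloudQuery-style sync but are assumptions for illustration, not the tool's actual schema:

```python
import sqlite3

# Hypothetical schema standing in for synced cloud-asset tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE aws_ec2_instances (instance_id TEXT, state TEXT);
CREATE TABLE aws_ebs_volumes (volume_id TEXT, attached_instance_id TEXT);
INSERT INTO aws_ec2_instances VALUES ('i-1', 'stopped'), ('i-2', 'running');
INSERT INTO aws_ebs_volumes VALUES ('vol-a', 'i-1'), ('vol-b', 'i-2');
""")

# "What EBS volume is attached to an EC2 that is turned off?"
rows = conn.execute("""
    SELECT v.volume_id
    FROM aws_ebs_volumes v
    JOIN aws_ec2_instances i ON i.instance_id = v.attached_instance_id
    WHERE i.state = 'stopped'
""").fetchall()
```

The answer falls out of one query instead of stitching together a security tool, a cost tool, and an inventory tool, which is the unification the article describes.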


Digital twins combine with AI to help manage complex systems

And it’s not just AI making digital twins better. The digital twins can also make for better AI. “We’re using digital twins to actually generate information for large language models,” says PwC’s Likens, adding that the synthetic data is of better quality when it comes from a digital twin. “We see opportunity to have the digital twins generate the missing pieces of data we need, and it’s more in line with the environment because it’s based on actual data.” A digital twin is a working model of a system, says Gareth Smith, GM of software test automation at Keysight Technologies, an electronics company. “It’ll respond in a way that mimics the expected response of the physical system.” ... Another potential use case for digital twins that might become more relevant this year is to help with understanding and scaling agentic AI systems. Agentic AI allows companies to automate complex business processes, such as solving customer problems, creating proposals, or designing, building, and testing software. The agentic AI system can be composed of multiple data sources, tools, and AI agents, all interacting in non-deterministic ways. That can be extremely powerful, but extremely dangerous. So a digital twin can monitor the behavior of an agentic system to ensure it doesn’t go off the rails, and test and simulate how the system will react to novel situations.
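Smith's definition, a working model that "mimics the expected response of the physical system," can be reduced to a toy. The sketch below twins a heater: the linear heating model, the names, and the 2-degree tolerance are all illustrative assumptions, not a real plant model:

```python
class ThermostatTwin:
    """Toy digital twin of a heater: predicts the next temperature and
    flags sensor readings that diverge from the model's expectation."""

    def __init__(self, temp=20.0, heating_rate=1.5):
        self.temp = temp
        self.heating_rate = heating_rate

    def step(self):
        """Advance the twin one tick and return the expected temperature."""
        self.temp += self.heating_rate
        return self.temp

    def check(self, measured, tolerance=2.0):
        """Compare a real sensor reading against the twin's expectation.

        A divergence beyond tolerance means either the model is wrong
        or the physical system is misbehaving; both are worth an alert.
        """
        expected = self.step()
        return abs(measured - expected) <= tolerance
```

Monitoring an agentic AI system with a twin follows the same shape: run the model alongside the real system and alert when behavior leaves the envelope the twin predicts.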


Will Quantum Computing Kill Bitcoin?

If a technological advance were to render these assets insecure, the consequences could be severe. Cryptocurrencies function by ensuring that only authorized parties can modify the blockchain ledger. In Bitcoin’s case, this means that only someone with the correct private key can spend a given amount of Bitcoin. ... Quantum computers, however, operate on different principles. Thanks to phenomena like superposition and entanglement, they can perform many calculations in parallel. In 1994, mathematician Peter Shor developed a quantum algorithm capable of factoring large numbers exponentially faster than classical methods. ... Could quantum computing kill Bitcoin? In theory, yes: if Bitcoin failed to adapt while quantum computers became powerful enough to break its encryption, its value would plummet. But this scenario assumes crypto stands still while quantum computing advances, which is highly unlikely. The cryptographic community is already preparing, and the financial incentives to preserve the integrity of Bitcoin are enormous. Moreover, if quantum computers become capable of breaking current encryption methods, the consequences would extend far beyond Bitcoin. Secure communications, financial transactions, digital identities, and national security all depend on encryption. In such a world, the collapse of Bitcoin would be just one of many crises.
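To make the hardness claim concrete, here is the classical approach Shor's algorithm sidesteps. Trial division takes on the order of sqrt(n) steps, which grows exponentially in the bit-length of n; Shor's algorithm factors in polynomial time, and the related quantum speedup for discrete logarithms is what threatens the ECDSA signatures Bitcoin actually uses:

```python
def trial_factor(n):
    """Factor n by trial division.

    The loop runs up to sqrt(n) times, so doubling the bit-length of n
    roughly squares the work: fine for toy numbers, hopeless for the
    2048-bit (and larger) numbers modern cryptography relies on.
    """
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime
```

For example, `trial_factor(3233)` finds 53 × 61 instantly, but the same loop on a cryptographic-size semiprime would outlast the universe, which is exactly the asymmetry a large quantum computer would erase.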


Smaller organizations nearing cybersecurity breaking point

Small and medium enterprises (SMEs) that do have budget to hire specialists often struggle to attract and retain skilled professionals due to the lack of variation in the role. Burnout is also a growing issue for the understaffed, underqualified IT teams common in small businesses. “With limited resource in the business, employees are often wearing multiple hats and the pressure to manage cybersecurity on top of their regular duties can lead to fatigue, missed threats, and higher turnover,” Exelby says. ... SMEs often mistakenly believe that cyber attackers only target larger organizations, but that’s often not the case — particularly because small business partners of larger companies are often deliberately targeted as part of supply chain attacks. “Threats are becoming more advanced but their resources aren’t keeping pace,” says Kristian Torode, director and co-founder of Crystaline, a specialist in SME cybersecurity. “Many SMEs are still relying on outdated systems or don’t have dedicated security teams in place, making them an easy target.” Torode adds: “They’re also seen by cybercriminals as an exploitable link in the supply chain, since they often work with larger enterprises.” “SMEs have traditionally been low-hanging fruit — with limited resources for cybersecurity training, advanced tools, or dedicated security teams,” Adam Casey, director of cybersecurity and CISO at cloud security firm Qodea, tells CSO.


Want fewer security fires to fight? Start with threat modeling

Some CISOs begin with one critical system or pilot project. From there, they build templates, training materials, and internal champions who help scale the practice across teams. Incorporating threat modeling into an organization’s development lifecycle doesn’t have to be daunting. In fact, it shouldn’t be, according to David Kellerman, Field CTO of Cymulate. “The key is to start small and make threat modeling approachable,” Kellerman says. Rather than rolling out a heavyweight process full of complex methodologies, CISOs should look for ways to embed threat modeling into workflows that teams already use. “I advise CISOs to embed threat modeling into existing workflows, such as architecture reviews, design discussions, or sprint planning, rather than creating separate, burdensome exercises.” This lightweight, integrated approach not only reduces resistance but helps normalize secure thinking within engineering culture. “Use simple frameworks like STRIDE or basic attacker storyboarding that non-security engineers can easily grasp,” Kellerman explains. “Make it collaborative and educational, not punitive.” As teams gain familiarity and confidence, organizations can gradually evolve their threat modeling capabilities. “The goal isn’t to build a perfect threat model on day one,” Kellerman says. “It’s to establish a security mindset that grows naturally within engineering culture.”
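Kellerman's "simple frameworks like STRIDE" can start as nothing more than a checklist crossed with a component list. A minimal sketch; the six categories are the standard STRIDE set, the question phrasings are condensed paraphrases, and the component names in the usage note are hypothetical:

```python
# The six STRIDE threat categories with prompt questions for a review.
STRIDE = {
    "Spoofing": "Can someone pretend to be another user or service?",
    "Tampering": "Can data or code be modified in transit or at rest?",
    "Repudiation": "Can an actor deny an action for lack of logging?",
    "Information disclosure": "Can data leak to the wrong party?",
    "Denial of service": "Can the component be made unavailable?",
    "Elevation of privilege": "Can a user gain rights they shouldn't have?",
}

def stride_worksheet(components):
    """Pair every component with every STRIDE question, producing the
    starting checklist for a design-review discussion."""
    return [(c, cat, q) for c in components for cat, q in STRIDE.items()]
```

Running `stride_worksheet(["login API", "payments DB"])` yields twelve discussion prompts: lightweight enough for a sprint-planning slot, which is exactly the integration into existing workflows the article advocates.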


Rethinking Success in Security: Why Climbing the Corporate Ladder Isn’t Always the Goal

In the security field, like in many other fields, there seems to be constant pressure to advance. For whatever reason, the choice to climb the corporate ladder seems to garner far more reverence and respect than the choice to develop expertise and skills in one particular area of specialization. In other words, the decision to go higher and broader seems to be lauded more than the decision to go deeper and more focused. Yet, both are important in their own right. There are certain times in a security professional’s career when they find themselves at a crossroads – confronted by this issue. One career path is not more “correct” than another one. Which direction is the right one is an individual choice where many factors are relevant. ... It is the sad reality of the security field that we don’t show our respect and appreciation for our colleagues enough. That being said, the respect is there. See, one important thing to keep in mind is that respect is earned – not ordained or otherwise granted. If you are a great security professional, people take notice. You shouldn’t feel compelled to attain a specific title, paygrade, or otherwise just to get some respect. The dirty secret in the industry is that just because someone is in a higher-level role, it doesn’t mean that people respect them. 


The AI data center boom: Strategies for sustainable growth and risk management

Data center developers are experiencing extended lead times for critical equipment such as generators, switchgear, power distribution units (PDUs) and cooling systems. Global shortages in semiconductors and electrical components are still impacting timelines. Additionally, uncertainty regarding tariffs is further complicating procurement and planning processes, as potential changes in trade policies could affect the cost and availability of these essential components. ... Data center owners are increasingly trying to use low-carbon materials to decarbonize both the centers and construction operations. This approach includes concrete that permanently traps carbon dioxide and steel produced using renewable energy. Microsoft is now building its first data centers made with structural mass timber to slash the use of steel and concrete, which are among the most significant sources of carbon emissions. ... Fires in data centers are typically caused by a breakdown of machinery, plant or equipment. A fire that spreads quickly can result in significant financial losses and business interruption. While the structures for data centers often have concrete frames that are not significantly impacted by fires, it’s the high-value equipment that drives losses – from cooling technology to high-end computer servers or graphics card components.


Managing software projects is a double-edged sword

Doing two platform shifts in six months was beyond challenging—it was absurd. We couldn’t have hacked together a half-baked version for even one platform in that time. It was flat-out impossible. Let’s just say I was quite unhappy with this request. It was completely unreasonable. My team of developers was being asked to work evenings and weekends on a task that was guaranteed to fail. The subtle implication that we were being rebellious and dishonest was difficult to swallow. So I set about making my position clear. I tried to stay level-headed, but I’m sure that my irritation showed through. I fought hard to protect my team from a pointless death march—my time in the Navy had taught me that taking care of the team was my top job. My protestations were met with little sympathy. My boss, who like me came from the software development tool company, certainly knew that the request was unreasonable, but he told me that while it was a challenge, we just needed to “try.” This, of course, was the seed of my demise. I knew it was an impossible task, and that “trying” would fail. How do you ask your team to embark on a task that you know will fail miserably and that they know will fail miserably? Well, I answered that question very poorly.


The CIO Has Evolved. It's Time the Board Catches Up

Across industries, CIOs have risen to meet the moment. They are at the helm of transformation strategies with business peers and drive digital revenue models. They even partner with CFOs to measure value, CMOs to reimagine customer experience and COOs to build data-driven models. ... CIOs have evolved. But if boards continue to treat them as back-room managers instead of strategic partners, they are underutilizing one of the strategic roles in the enterprise. ... In today's times, every company is a technology company. AI, automation, cloud and digital platforms aren't just enablers. They form the foundation for competitive advantage and new revenue models. Similarly, cybersecurity is no longer just an IT challenge; it's a board-level fiduciary responsibility. Boards, however, predominantly engage with CIOs in a transactional manner. Issues such as budget approvals, risk reviews and project updates are common conversations. CIOs are rarely invited into conversations related to growth strategy, market reinvention or long-term capital allocation. This disconnect is proving to be a strategic liability. ... In industries where technology is the differentiator, CIOs should not merely sit in the boardroom; they should be shaping its agenda. Because if CIOs are empowered to lead, organizations don't just avoid risk; they build resilience, relevance and reinvention.

Daily Tech Digest - September 18, 2024

Putting Threat Modeling Into Practice: A Guide for Business Leaders

One of the primary benefits of threat modeling is its ability to reduce the number of defects that make it to production. By identifying potential threats and vulnerabilities during the design phase, companies can implement security measures that prevent these issues from ever reaching the production environment. This proactive approach not only improves the quality of products but also reduces the costs associated with post-production fixes and patches. ... Along similar lines, threat modeling can help meet obligations defined in contracts if those contracts include terms related to risk identification and management. ... Beyond obligations linked to compliance and contracts, many businesses also establish internal IT security goals. For example, they might seek to configure access controls based on the principle of least privilege or enforce zero-trust policies on their networks. Threat modeling can help to put these policies into practice by allowing organizations to identify where their risks actually lie. From this perspective, threat modeling is a practice that the IT organization can embrace because it helps achieve larger goals – namely, those related to internal governance and security strategy.


How Cloud Custodian conquered cloud resource management

Everybody knows the cloud bill is basically rate multiplied by usage. But while most enterprises have a handle on rate, usage is the hard part. You have different application teams provisioning infrastructure. You go through code reviews. Then when you get to five to 10 applications, you get past the point where anyone can possibly know all the components. Now you have containerized workloads on top of more complex microservices architectures. And you want to be able to allow a combination of cathedral (control) and bazaar (freedom of technology choice) governance, especially today with AI and all of the new frameworks and LLMs [large language models]. At a certain point you lose the script to be able to follow all of this in your head. There are a lot of tools to enable that understanding — architectural views, network service maps, monitoring tools — all feeling out different parts of the elephant versus giving an organization a holistic view. They need to know not only what’s in their cloud environment, but what’s being used, what’s conforming to policy, and what needs to be fixed, and how. That’s what Cloud Custodian is for — so you can define the organizational requirements of your applications and map those up against cloud resources as policy.
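The "define requirements and map them against resources" step can be sketched as a declarative policy matched against an asset inventory. The policy shape below loosely mimics Cloud Custodian's resource-plus-filters structure but is a simplification for illustration, not the tool's actual schema (real c7n policies are YAML with a much richer filter language):

```python
def evaluate_policy(policy, resources):
    """Return the resources that match every filter in the policy.

    `policy` names a resource type and a set of key/value filters;
    matching resources are the ones that need attention (or an
    automated remediation action in a full implementation).
    """
    matched = []
    for r in resources:
        if r.get("type") != policy["resource"]:
            continue
        if all(r.get(k) == v for k, v in policy["filters"].items()):
            matched.append(r)
    return matched

# A typical conformance check: flag unencrypted volumes.
policy = {"resource": "ebs-volume", "filters": {"encrypted": False}}
resources = [
    {"type": "ebs-volume", "id": "vol-a", "encrypted": False},
    {"type": "ebs-volume", "id": "vol-b", "encrypted": True},
]
violations = evaluate_policy(policy, resources)
```

Scaling this pattern to every resource type, account, and team is the holistic, policy-driven view the article argues enterprises lose once nobody can hold the environment in their head.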


5 Steps to Identify and Address Incident Response Gaps

To compress the time it takes to address an incident, it’s not enough to stick to the eyes-on-glass model that network operations centers (NOCs) traditionally privilege. That model is too human-intensive and error-prone to effectively triage an increasingly overwhelming volume of data. To go from event to resolution with minimal toil and increased speed, teams can leverage AI and automation to deflect noise, surface only the most critical alerts and automate diagnostics and remediations. Generative AI can amplify that effect: for teams collaborating in ChatOps tools, common diagnostic questions can be used as prompts to get context and accelerate action. ... When an incident hits, teams spend too much time gathering information and looping in numerous people to tackle it. Generative AI can be used to quickly summarize key data about the incident and provide actionable insights at every step of the incident life cycle. It can also supercharge the ability to develop and deploy automation jobs faster, even by non-technical teams: operators can translate conversational prompts into proposed runbook automation or leverage pre-engineered prompts based on common categories.
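The noise-deflection and prompt-generation steps can be sketched in a few lines (the alert fields and prompt wording are illustrative assumptions, not from the article):

```python
from collections import defaultdict

def deflect_noise(alerts, threshold=3):
    """Group raw alerts by (service, symptom) and surface only groups
    that cross a repetition threshold, suppressing one-off noise."""
    groups = defaultdict(list)
    for a in alerts:
        groups[(a["service"], a["symptom"])].append(a)
    return [
        {"service": svc, "symptom": sym, "count": len(items)}
        for (svc, sym), items in groups.items()
        if len(items) >= threshold
    ]

def summarize_prompt(incident):
    """Turn a surfaced incident into a diagnostic prompt for a chat model."""
    return (
        f"Service {incident['service']} reported '{incident['symptom']}' "
        f"{incident['count']} times. Summarize likely causes and suggest "
        f"the first diagnostic step."
    )
```

In practice the surfaced incidents would feed a ChatOps integration; the point is that only pre-filtered, critical signals ever reach the model or the humans.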


DevOps with OpenShift Pipelines and OpenShift GitOps

Unlike some other CI solutions, such as the legacy tool Jenkins, Pipelines is built on native Kubernetes technologies and is thus resource efficient, since pipelines and tasks are only actively running when needed. Once the pipeline has completed, no resources are consumed by the pipeline itself. Pipelines and tasks are constructed using a declarative approach following standard Kubernetes practices. However, OpenShift Pipelines includes a user-friendly interface built into the OpenShift console that enables users to easily monitor the execution of the pipelines and view task logs as needed. The user interface also shows metrics for individual task execution, enabling users to better optimize pipeline performance. In addition, the user interface enables users to quickly create and modify pipelines visually. While users are encouraged to store tasks and Pipeline resources in Git, the ability to visually create and modify pipelines greatly reduces the learning curve and makes the technology approachable for new users. You can leverage pipelines-as-code to provide an experience that is tightly integrated with your backend Git provider, such as GitHub or GitLab.
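Because OpenShift Pipelines is built on Tekton, a pipeline is just a declarative Kubernetes resource. A minimal sketch (the pipeline and task names are illustrative, and the `git-clone` task is assumed to be installed from the cluster's task catalog):

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-test          # illustrative name
spec:
  params:
    - name: repo-url
      type: string
  tasks:
    - name: clone
      taskRef:
        name: git-clone         # assumed catalog task
      params:
        - name: url
          value: $(params.repo-url)
    - name: test
      runAfter: ["clone"]
      taskRef:
        name: run-tests         # illustrative task
```

Each task runs in its own pod only while executing, which is why no resources are consumed once the pipeline finishes.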


Rethinking enterprise architects’ roles for agile transformation

Mounting technical debt and extending the life of legacy systems are key risks CIOs should be paranoid about. The question is, how should CIOs assign ownership to this problem, require that technical debt’s risks are categorized, and ensure there’s a roadmap for implementing remediations? One solution is to assign the responsibility to enterprise architects in a product management capacity. Product managers must define a vision statement that aligns with strategic and end-user needs, propose prioritized roadmaps, and oversee an agile backlog for agile delivery teams. ... Enterprise architects who have a software development background are ideal candidates to assume the delivery leader role and can steer teams toward developing platforms with baked-in security, performance, usability, and other best practices. ... Enterprise architects assuming a sponsorship role in these initiatives can help steer them toward force-multiplying transformations that reduce risks and provide additional benefits in improved experiences and better decision-making. CIOs who want enterprise architects to act as sponsors should provide them with a budget and oversee the development of a charter for managing investment priorities.


The best way to regulate AI might be not to specifically regulate AI. This is why

Most of the potential uses of AI are already covered by existing rules and regulations designed to do things such as protect consumers, protect privacy and outlaw discrimination. These laws are far from perfect, but where they are not perfect the best approach is to fix or extend them rather than introduce special extra rules for AI. AI can certainly raise challenges for the laws we have – for example, by making it easier to mislead consumers or to apply algorithms that help businesses to collude on prices. ... Finally, there’s a lot to be said for becoming an international “regulation taker”. Other jurisdictions such as the European Union are leading the way in designing AI-specific regulations. Product developers worldwide, including those in Australia, will need to meet those new rules if they want to access the EU and those other big markets. If Australia developed its own idiosyncratic AI-specific rules, developers might ignore our relatively small market and go elsewhere. This means that, in those limited situations where AI-specific regulation is needed, the starting point ought to be the overseas rules that already exist. There’s an advantage in being a late or last mover. 


How LLMs on the Edge Could Help Solve the AI Data Center Problem

Anyone interacting with an LLM in the cloud is potentially exposing the organization to privacy questions and the potential for a cybersecurity breach. As more queries and prompts are being done outside the enterprise, there are going to be questions about who has access to that data. After all, users are asking AI systems all sorts of questions about their health, finances, and businesses. To do so, these users often enter personally identifiable information (PII), sensitive healthcare data, customer information, or even corporate secrets. The move toward smaller LLMs that can either be contained within the enterprise data center – and thus not running in the cloud – or that can run on local devices is a way to bypass many of the ongoing security and privacy concerns posed by broad usage of LLMs such as ChatGPT. ... Pruning the models to reach a more manageable number of parameters is one obvious way to make them more feasible on the edge. Further, developers are shifting the GenAI model from the GPU to the CPU, reducing the processing footprint, and building standards for compiling. As well as the smartphone applications noted above, the use cases that lead the way will be those that are achievable despite limited connectivity and bandwidth, according to Goetz.
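Pruning, mentioned above as one way to shrink models for the edge, can be illustrated with a toy magnitude-pruning function (a simplification of what real toolchains do; ties at the threshold may prune slightly more than the target sparsity):

```python
def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights, a toy sketch of the
    pruning used to shrink models for edge deployment."""
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)
    if k == 0:
        return list(weights)
    threshold = flat[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

# Half the weights (the smallest ones) are zeroed out.
pruned = magnitude_prune([0.1, -2.0, 0.05, 3.0], sparsity=0.5)
```

Real frameworks prune whole tensors or structured blocks and then fine-tune, but the principle, trading a little accuracy for a much smaller processing footprint, is the same.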


'Good complexity' can make hospital networks more cybersecure

Because complicated systems have structures, Tanriverdi says, it's difficult but feasible to predict and control what they'll do. That's not feasible for complex systems, with their unstructured connections. Tanriverdi found that as health care systems got more complex, they became more vulnerable. ... The problem, he says, is that such systems offer more data transfer points for hackers to attack, and more opportunities for human users to make security errors. He found similar vulnerabilities with other forms of complexity, including many different types of medical services handling health data, and decentralizing strategic decisions to member hospitals instead of making them at the corporate center. The researchers also proposed a solution: building enterprise-wide data governance platforms, such as centralized data warehouses, to manage data sharing among diverse systems. Such platforms would convert dissimilar data types into common ones, structure data flows, and standardize security configurations. "They would transform a complex system into a complicated system," he says. By simplifying the system, such platforms would further lower its level of complication.
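The "convert dissimilar data types into common ones" step of such a governance platform amounts to mapping each source system's fields onto a shared schema. A minimal sketch (the field names and required-field list are hypothetical):

```python
def normalize_record(record, mapping, required=("patient_id", "timestamp")):
    """Map one source system's field names onto the warehouse's common
    schema, rejecting records that lack required common fields."""
    common = {target: record[source]
              for source, target in mapping.items() if source in record}
    missing = [f for f in required if f not in common]
    if missing:
        raise ValueError(f"record missing required fields: {missing}")
    return common

# One member hospital's export, mapped to the common schema.
row = normalize_record(
    {"pid": "p1", "ts": "2024-01-01", "dx": "J10"},
    {"pid": "patient_id", "ts": "timestamp", "dx": "diagnosis"},
)
```

Centralizing mappings like these is what turns ad-hoc, unstructured data flows into the structured, auditable ones the researchers describe.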


Threats by Remote Execution and Activating Sleeper Devices in the Context of IoT and Connected Devices

As the Internet of Things proliferates, the number of connected devices in both civilian and military contexts is increasing exponentially. From smart homes to military-grade equipment, the IoT ecosystem connects billions of devices, all of which can potentially be exploited by adversaries. The pagers in the Hezbollah case, though low-tech compared to modern IoT devices, represent the vulnerability of a system where devices are remotely controllable. In the IoT realm, the stakes are even higher, as everyday devices like smart thermostats, security cameras, and industrial equipment are interconnected and potentially exploitable. In a modern context, this vulnerability could be magnified when applied to smart cities, critical infrastructure, and defense systems. If devices such as power grids, water systems, or transportation networks are connected to the internet, they could be subjected to remote control by malicious actors. ... One of the most alarming aspects of this situation is the suspected infiltration of the supply chain. The pagers used by Hezbollah were reportedly tampered with before being delivered to the group, likely with explosives embedded within the devices.


Detecting vulnerable code in software dependencies is more complex than it seems

A “phantom dependency” refers to a package used in your code that isn’t declared in the manifest. This concept is not unique to any one language (it’s common in JavaScript, NodeJS, and Python). This is problematic because you can’t secure what you can’t see. Traditional SCA solutions focus on manifest files to identify all application dependencies, but those can both be under- or over-representative of the dependencies actually used by the application. They can be under-representative if the analysis starts from a manifest file that only contains a subset of dependencies, e.g., when additional dependencies are installed in a manual, scripted, or dynamic fashion. This can happen in Python ML/AI applications, for example, where the choice of packages and versions often depends on operating systems or hardware architectures, which cannot be fully expressed by dependency constraints in manifest files. And they are over-representative if they contain dependencies not actually used. This happens, for example, if you dump the names of all the components contained in a bloated runtime environment into a manifest file.
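For Python, a crude phantom-dependency check can be built by parsing a file's imports and diffing them against the manifest (this sketch ignores stdlib modules and dynamic imports, which a real SCA tool must handle):

```python
import ast

def imported_top_level(source):
    """Collect top-level package names imported by a Python source file."""
    tree = ast.parse(source)
    pkgs = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            pkgs.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            pkgs.add(node.module.split(".")[0])
    return pkgs

def phantom_dependencies(source, declared):
    """Packages imported in code but missing from the manifest."""
    return imported_top_level(source) - set(declared)
```

Running this over a codebase surfaces the under-representation problem directly: anything the code imports that the manifest never mentions.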



Quote for the day:

"An accountant makes you aware but a leader makes you accountable." -- Henry Cloud

Daily Tech Digest - August 14, 2024

MIT releases comprehensive database of AI risks

While numerous organizations and researchers have recognized the importance of addressing AI risks, efforts to document and classify these risks have been largely uncoordinated, leading to a fragmented landscape of conflicting classification systems. ... The AI Risk Repository is designed to be a practical resource for organizations in different sectors. For organizations developing or deploying AI systems, the repository serves as a valuable checklist for risk assessment and mitigation. “Organizations using AI may benefit from employing the AI Risk Database and taxonomies as a helpful foundation for comprehensively assessing their risk exposure and management,” the researchers write. “The taxonomies may also prove helpful for identifying specific behaviors which need to be performed to mitigate specific risks.” ... The research team acknowledges that while the repository offers a comprehensive foundation, organizations will need to tailor their risk assessment and mitigation strategies to their specific contexts. However, having a centralized and well-structured repository like this reduces the likelihood of overlooking critical risks.


Why Agile Alone Might Not Be So Agile: A Witty Look at Methodology Madness

Agile’s problems often start with a fundamental misunderstanding of what it truly means to be agile. When the Agile Manifesto was penned back in 2001, its authors intended it to be a flexible, adaptable approach to software development, free from the rigid structures and bureaucratic procedures of traditional methodologies. But fast forward to today, and Agile has become its own kind of bureaucratic monster in many organizations — a tyrant disguised as a liberator. Why does this happen? Let’s dissect the two main problems: the roles defined within Agile and the one-size-fits-all mentality that organizations apply to Agile methodology. One of the biggest hurdles to successful Agile adoption is the disconnect between the executive suite and the teams on the ground. Executives often see Agile as a magic bullet for faster delivery and higher productivity, without fully understanding the nuances of the methodology. This disconnect can lead to unrealistic demands and pressure on teams to deliver more with each Sprint, which in turn leads to burnout and decreased quality. Moreover, the Agile Manifesto’s disdain for comprehensive documentation can be problematic in complex projects. 


Feature Flags Wouldn’t Have Prevented the CrowdStrike Outage

Feature flagging is a valuable technique for decoupling the release of new features from code deployment, and advanced feature flagging tools usually support percentage-based rollouts. For example, you can enable a feature on X% of targets to ensure it works before reaching 100%. While it’s true that feature flags can help to prevent outages, given the scale and complexity of the CrowdStrike incident, they would not have been sufficient for three reasons. First, a comprehensive staged rollout requires more than just “gradually enable this flag over the next few days”: there has to be an integration with the monitoring stack to perform health checks and stop the rollout if there are problems, and there has to be a way to integrate with the CD pipeline to reuse the list of targets to roll out to and the list of health checks to track. Available feature flagging solutions require much work and expertise to support staged rollouts at any reasonable scale. Second, CrowdStrike’s config had a complex structure requiring a “configuration system” and a “content interpreter.” Such configs would benefit from first-class schema support and end-to-end type safety. 


Putting Threat Modeling Into Practice: A Guide for Business Leaders

One of the primary benefits of threat modeling is its ability to reduce the number of defects that make it to production. By identifying potential threats and vulnerabilities during the design phase, companies can implement security measures that prevent these issues from ever reaching the production environment. This proactive approach not only improves the quality of products but also reduces the costs associated with post-production fixes and patches. ... Threat modeling helps us create reusable artifacts and reference patterns as code, which serve as blueprints for future projects. These patterns encapsulate best practices and lessons learned, ensuring that security considerations are consistently applied across all projects. By embedding these reference patterns into development processes, organizations reduce the need to reinvent the wheel for each new product, saving time and resources. ... The existence of well-defined reference patterns reduces the likelihood of errors during development. Developers can rely on these patterns as a guide, ensuring that they follow proven security practices without having to start from scratch. 
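A reference pattern captured as code might look like the sketch below: each rule records a security decision made during threat modeling, so later designs can be checked against it automatically (the rule names are hypothetical, not from the article):

```python
# Security decisions from threat modeling, encoded once, reused everywhere.
REFERENCE_PATTERN = {
    "public_bucket": False,        # storage must not be world-readable
    "encryption_at_rest": True,    # data stores must encrypt at rest
    "tls_only": True,              # endpoints must refuse plaintext
}

def check_design(design, pattern=REFERENCE_PATTERN):
    """Return the settings where a proposed design deviates from the
    reference pattern, so developers see exactly what to fix."""
    return {
        key: design.get(key)
        for key, required in pattern.items()
        if design.get(key) != required
    }
```

Wiring a check like this into CI is one way the "blueprints for future projects" keep teams from reinventing security decisions per product.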


The magic of RAG is in the retrieval

The role of the LLM in a RAG system is to simply summarize the data from the retrieval model’s search results, with prompt engineering and fine-tuning to ensure the tone and style are appropriate for the specific workflow. All the leading LLMs on the market support these capabilities, and the differences between them are marginal when it comes to RAG. Choose an LLM quickly and focus on data and retrieval. RAG failures primarily stem from insufficient attention to data access, quality, and retrieval processes. For instance, merely inputting large volumes of data into an LLM with an expansive context window is inadequate if the data is excessively noisy or irrelevant to the specific task. Poor outcomes can result from various factors: a lack of pertinent information in the source corpus, excessive noise, ineffective data processing, or the retrieval system’s inability to filter out irrelevant information. These issues lead to low-quality data being fed to the LLM for summarization, resulting in vague or junk responses. It’s important to note that this isn’t a failure of the RAG concept itself. Rather, it’s a failure in constructing an appropriate “R” — the retrieval model.
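A toy version of the "R" makes the division of labor concrete: the retriever scores and filters passages, and the LLM only sees what survives (the scoring here is naive keyword overlap, standing in for a real retrieval model):

```python
def retrieve(query, corpus, k=3, min_score=0.2):
    """Toy keyword-overlap retriever: score documents, drop low-scoring
    noise, and return only the top-k passages for the LLM to summarize."""
    q_terms = set(query.lower().split())
    scored = []
    for doc in corpus:
        terms = set(doc.lower().split())
        score = len(q_terms & terms) / max(len(q_terms), 1)
        if score >= min_score:
            scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:k]]

def build_prompt(query, passages):
    """The LLM's only job here is to summarize what retrieval returned."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

If `retrieve` lets noise through, or filters out the relevant passage, no choice of LLM will rescue the answer, which is the author's point.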


What enterprises say the CrowdStrike outage really teaches

CrowdStrike made two errors, enterprises say. First, CrowdStrike didn’t account for the sensitivity of its Falcon client software for endpoints to the tabular data that described how to look for security issues. As a result, an update to that data crashed the client by introducing a condition that had existed before but hadn’t been properly tested. Second, rather than doing a limited release of the new data file that would almost certainly have caught the problem and limited its impact, CrowdStrike pushed it out to its entire user base. ... The 37 who didn’t hold Microsoft accountable pointed out that security software necessarily has a unique ability to interact with the Windows kernel software, and this means it can create a major problem if there’s an error. But while enterprises aren’t convinced that Microsoft contributed to the problem, over three-quarters think Microsoft could contribute to reducing the risk of a recurrence. Nearly as many said that they believed Windows was more prone to the kind of problem CrowdStrike’s bug created, and that view was held by 80 of the 89 development managers, many of whom said that Apple’s MacOS or Linux didn’t pose the same risk and that neither was impacted by the problem.


MIT researchers use large language models to flag problems in complex systems

The researchers developed a framework, called SigLLM, which includes a component that converts time-series data into text-based inputs an LLM can process. A user can feed these prepared data to the model and ask it to start identifying anomalies. The LLM can also be used to forecast future time-series data points as part of an anomaly detection pipeline. While LLMs could not beat state-of-the-art deep learning models at anomaly detection, they did perform as well as some other AI approaches. If researchers can improve the performance of LLMs, this framework could help technicians flag potential problems in equipment like heavy machinery or satellites before they occur, without the need to train an expensive deep-learning model. “Since this is just the first iteration, we didn’t expect to get there from the first go, but these results show that there’s an opportunity here to leverage LLMs for complex anomaly detection tasks,” says Sarah Alnegheimish, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on SigLLM.
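The conversion step, turning numeric time-series data into text an LLM can consume, can be sketched simply, with a naive moving-average detector standing in for the LLM forecaster (both functions are illustrative, not SigLLM's actual code):

```python
def series_to_text(values, decimals=0):
    """Render a numeric series as a comma-separated string, the kind of
    text-based input an LLM can process."""
    return ",".join(
        str(round(v, decimals)) if decimals else str(int(round(v)))
        for v in values
    )

def flag_anomalies(values, window=3, tolerance=2.0):
    """Naive stand-in for the forecasting step: flag points deviating
    from the trailing-window mean by more than `tolerance` times it."""
    flags = []
    for i, v in enumerate(values):
        if i < window:
            flags.append(False)
            continue
        mean = sum(values[i - window:i]) / window
        flags.append(abs(v - mean) > tolerance * max(abs(mean), 1e-9))
    return flags
```

In the real pipeline the LLM forecasts the next points from the text encoding, and large forecast errors are flagged as anomalies; the moving average here just makes the shape of that pipeline visible.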


Cybersecurity should return to reality and ditch the hype

This shift from educational content to marketing blurs the line between genuine security insights and commercial interests, leading organizations to invest in solutions that may not address their unique challenges. Additionally, buzzword-driven content has become rampant, where terms like “zero-trust architecture” or “blockchain for security” are frequently mentioned in passing without delving into the practicalities and limitations of these technologies. ... we must first recognize the critical distinction between genuine cybersecurity work and the broader tech-centric content that often overshadows it. Real cybersecurity practice is anchored in a relentless pursuit to understand and mitigate the ever-evolving threats to our systems. It is a discipline that demands deep, continuously updated knowledge of systems, networks, and human behavior, alongside a steadfast commitment to the principles of confidentiality, integrity, and availability. True cybersecurity practitioners are those who engage in the laborious tasks of vulnerability assessment, threat modeling, incident response, and the continuous enhancement of security postures, often without the allure of viral recognition or simplistic solutions.


Harnessing AI for 6G: Six Key Approaches for Technology Leaders

Leaders must understand the enabling technologies behind 6G, such as terahertz and quantum communication, and the transformative potential of AI in network deployment and management. ... Engaging with international bodies like the ITU to contribute to the standardization process is crucial. This will ensure AI technologies are integrated into network designs from the beginning. Early involvement in these discussions will also help technology leaders to anticipate future developments and prepare strategies accordingly. ... Advocating for an AI-native 6G network involves embedding large language models and other AI technology into network equipment. This strategy allows autonomous operations and optimizes network management through machine learning algorithms. Such a proactive approach will streamline operations and enhance the reliability and efficiency of the network infrastructure. ... Emphasize the convergence of computing and communication and develop user-centric services that leverage 6G and AI to improve user experiences across various industries. Leaders should focus on creating solutions that are not only technologically advanced but also address the practical needs and preferences of end-users.


GenAI compliance is an oxymoron. Ways to make the best of it

Confoundingly, genAI software sometimes does things that neither the enterprise nor the AI vendor told it to do. Whether that’s making things up (a.k.a. hallucinating), observing patterns no one asked it to look for, or digging up nuggets of highly sensitive data, it spells nightmares for CIOs. This is especially true when it comes to regulations around data collection and protection. How can CIOs accurately and completely tell customers what data is being collected about them and how it is being used when the CIO often doesn’t know exactly what a genAI tool is doing? What if the licensed genAI algorithm chooses to share some of that ultra-sensitive data with its AI vendor parent? “With genAI, the CIO is consciously taking an enormous risk, whether that is legal risk or privacy policy risks. It could result in a variety of outcomes that are unpredictable,” said Tony Fernandes, founder and CEO of user experience agency UEGroup. “If a person chooses not to disclose race, for example, but an AI is able to infer it and the company starts marketing on that basis, have they violated the privacy policy? That’s a big question that will probably need to be settled in court,” he said.



Quote for the day:

"Before you are a leader, success is all about growing yourself. When you become a leader, success is all about growing others" -- Jack Welch