Daily Tech Digest - September 27, 2023

CISOs are struggling to get cybersecurity budgets: Report

"Across industries, the decline in budget growth was most prominent in tech firms, which dropped from 30% to 5% growth YoY," IANS said in a report on the study. "More than a third of organizations froze or cut their cybersecurity budgets." Budget growth was the lowest in sectors that are relatively mature in cybersecurity, such as retail, tech, finance, and healthcare, added the report. ... Of the CISOs whose companies did increase cybersecurity budgets, 80% indicated extreme circumstances, such as a security incident or a major industry disruption, drove the budget increase. While companies impacted by a cybersecurity breach added 18% to their budget on average, other industry disruptions contributed to a 27% budget boost. "I think there has always been a component of security spending that is forced to be reactive: be it incidents, updated regulatory or vendor controls or shifting business priorities," Steffen said. "To some degree, technology spending in general has always been like this, and will always likely be this way."

Lifelong Machine Learning: Machines Teaching Other Machines

Lifelong learning (LL) is a relatively new field in machine learning in which AI agents learn continually as they encounter new tasks. The goal of LL is for agents to acquire knowledge of novel tasks without forgetting how to perform previous tasks. This approach differs from typical “train-then-deploy” machine learning, where agents cannot learn progressively without “catastrophic interference” (also called catastrophic forgetting): the AI abruptly and drastically forgets previously learned information upon learning new information. According to the team, their work represents a potentially new direction in the field of lifelong machine learning, as current work in LL involves getting a single AI agent to learn tasks one at a time, sequentially. In contrast, SKILL involves a multitude of AI agents all learning at the same time in parallel, significantly accelerating the learning process. The team’s findings demonstrate that when SKILL is used, the time required to learn all 102 tasks is reduced by a factor of 101.5.
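The parallel-versus-sequential contrast above can be illustrated with a toy sketch. The code below is a hypothetical simplification, not the SKILL system itself: each "agent" learns one task independently, and pooling that knowledge adds new tasks without overwriting old ones, which is the forgetting-free property the article describes.

```python
# Toy sketch (NOT the SKILL system): parallel agents each learn one task,
# then pool their knowledge instead of one agent learning sequentially.

def learn_task(task_name, examples):
    """Each 'agent' learns a simple input->label mapping for its task."""
    return {task_name: dict(examples)}

def share_knowledge(agent_models):
    """Merge per-task knowledge into one shared model; earlier tasks are
    never overwritten, so nothing is 'forgotten'."""
    shared = {}
    for model in agent_models:
        shared.update(model)
    return shared

tasks = {
    "flowers": [("rose", "flower"), ("tulip", "flower")],
    "birds": [("sparrow", "bird"), ("eagle", "bird")],
}

# In a real system these would run concurrently (e.g. one process per agent).
models = [learn_task(name, examples) for name, examples in tasks.items()]
shared = share_knowledge(models)
```

In a sequential learner, training on "birds" could degrade the "flowers" knowledge; here each task's knowledge is kept disjoint and merged, so both survive.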

Is Your Organization Vulnerable to Shadow AI?

Perhaps the biggest danger associated with unaddressed shadow AI is that sensitive enterprise data could fall into the wrong hands. This poses a significant risk to privacy and confidentiality, cautions Larry Kinkaid, a consulting manager at BARR Advisory, a cybersecurity and compliance solutions provider. “The data could be used to train AI models that are commingled, or worse, public, giving bad actors access to sensitive information that could be used to compromise your company’s network or services.” There could also be serious financial repercussions if the data is subject to legal, statutory, or regulatory protections, he adds. Organizations dedicated to responsible AI deployment and use follow strong, explainable, ethical, and auditable practices, Zoldi says. “Together, such practices form the basis for a responsible AI governance framework.” Shadow AI occurs out of sight and beyond AI governance guardrails. When used to make decisions or impact business processes, it usually doesn’t meet even basic governance standards. “Such AI is ungoverned, which could make its use unethical, unstable, and unsafe, creating unknown risks,” he warns.

Been there, doing that: How corporate and investment banks are tackling gen AI

In new product development, banks are using gen AI to accelerate software delivery using so-called code assistants. These tools can help with code translation (for example, .NET to Java), and bug detection and repair. They can also improve legacy code, rewriting it to make it more readable and testable; they can also document the results. Plenty of financial institutions could benefit. Exchanges and information providers, payments companies, and hedge funds regularly release code; in our experience, these heavy users could cut time to market in half for many code releases. For many banks that have long been pondering an overhaul of their technology stack, the new speed and productivity afforded by gen AI means the economics have changed. Consider securities services, where low margins have meant that legacy technology has been more neglected than loved; now, tech stack upgrades could be in the cards. Even in critical domains such as clearing systems, gen AI could yield significant reductions in time and rework efforts.
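The legacy-code rewrites described above can be pictured with a small before-and-after sketch. This is a hypothetical illustration (the function names and trade format are invented), showing the kind of transformation a code assistant can propose: the legacy routine mixes calculation with printing, while the rewrite is a pure function that is easy to unit-test and document.

```python
# Hypothetical "before": hard-coded I/O and mixed concerns, hard to test.
def legacy_settle(trades):
    total = 0
    for t in trades:
        if t["side"] == "B":
            total = total + t["qty"] * t["px"]
        else:
            total = total - t["qty"] * t["px"]
    print("net:", total)

# Hypothetical "after": a pure, documented function a test suite can call.
def net_exposure(trades):
    """Return net cash exposure: buys add, sells subtract."""
    sign = {"B": 1, "S": -1}
    return sum(sign[t["side"]] * t["qty"] * t["px"] for t in trades)
```

Separating the calculation from the output is what makes the rewritten version testable, which is precisely the property the article says gen AI assistants can deliver at scale across a legacy stack.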

Microsoft’s data centers are going nuclear

The software giant is already working with at least one third-party nuclear energy provider in an effort to reduce its carbon footprint. The ad, though, signals an effort to make nuclear energy an important part of its energy strategy. The posting said that the new nuclear expert “will maintain a clear and adaptable roadmap for the technology’s integration,” and have “experience in the energy industry and a deep understanding of nuclear technologies and regulatory affairs.” Microsoft has made no public statement on the specific goals of its nuclear energy program, but the obvious possibility — particularly in the wake of its third-party nuclear energy deal — is a focus on environmental concerns. Although nuclear power has long been plagued by serious concerns about its safety and its role in nuclear weapons proliferation, the rapidly worsening climate situation makes it a comparatively attractive alternative to fossil fuels, given the relatively large amount of energy that can be generated without producing atmospheric emissions.

The pitfalls of neglecting security ownership at the design stage

Without clear ownership of security during the design stage, many problems can quickly arise. Security should never be an afterthought or a ‘bolted-on’ mechanism after a product is created. Development teams primarily focus on creating functional and efficient software and hardware, whereas security teams specialize in identifying and mitigating potential risks. Without collaboration, or better still integration, between the two, security may be overlooked or inadequately addressed, leaving a heightened risk of cyber vulnerabilities. A good example is a privacy shutter for cameras in laptop computers. Ever see a sticky note on someone’s PC covering the camera? A design team may focus on the quality and placement of the camera as the primary factors for the user experience. However, security professionals know that many users want a physical solution to guarantee cameras cannot capture images when they don’t want them to, and on/off indicator lights are not good enough.

Enterprise Architecture Must Adapt for Continuous Business Change

Continuous business change is an agile enterprise mindset that begins with the realization that change is constant and that business needs to be organized to support this continual change. This change is delivered as a constant flow of activity directed by distributed teams and democratized processes. It is orchestrated by the transparency of information and includes automated monitoring and workflows. This continuous business change requires EA, as a discipline, to evolve to match the new mindset. Change processes need to be adapted and updated to deliver faster time to value and quicker iteration of business ideas. These adaptations require the democratization of design, away from a traditional centralized approach, to allow for a quicker and more efficient change process. These change processes recognize autonomous business areas that deliver their own change. One example of this is moving away from being project-focused to being product-focused. Product-based companies organize their teams around autonomous products which may also be known as value streams or bounded domains.

A history of online payment security

Google was among the first major sites to use two-factor authentication, requiring those requesting access to have not only a password but also access to the phone number used when creating the account. Since then, many companies have taken this system to the next level, providing their users with a multitude of ways to ensure the security of their online payments. They have implemented multiple ways to ensure the safety of their clients’ transactions, including password security, a six-digit PIN, account security tokens and SMS validation. Other than a DNA match, you can’t get much more verified than this. Privacy and confidentiality of information, especially when it concerns financial data, is critical to customer satisfaction. Millions of financial transactions are done online daily, involving payments to online shopping websites or merchant stores, bill payments or bank transactions. Security of cashless transactions done on a virtual platform requires an element of bankability and trust that can only be generated from the best and most reputable brands and leaders in the industry.
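The six-digit codes mentioned above are commonly generated with HMAC-based one-time passwords (HOTP, RFC 4226) and their time-based variant (TOTP, RFC 6238) rather than any vendor-specific scheme. A minimal sketch, not any particular provider's implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(secret_b32, counter, digits=6):
    """HMAC-based one-time password (RFC 4226)."""
    key = base64.b32decode(secret_b32, casefold=True)
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32, step=30):
    """Time-based variant (RFC 6238): the counter is the current 30s window."""
    return hotp(secret_b32, int(time.time()) // step)

# RFC 4226 test secret: the ASCII string "12345678901234567890" in base32.
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
```

With the RFC 4226 test secret, `hotp(SECRET, 0)` yields "755224" and `hotp(SECRET, 1)` yields "287082", matching the published test vectors; an authenticator app and the server each compute `totp` independently and compare.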

Rediscovering the value of information

In the corporate sector, the value destroyed by poor information management practices is often measured in fines and lawsuit payouts. But before such catastrophes come to light, what metrics do we use — or should we use — to determine whether a publicly traded company has their information management house in order? Who manages information more effectively — P&G or Unilever; Coke or Pepsi; GM or Ford; McDonald’s or Chipotle; Marriott or Hilton? When interviewing a potential new hire, how should we ascertain whether they are a skilled and responsible information manager? Business historians tell us that it was about 10 years before the turn of the century that “information” — previously thought to be a universal “good thing” — started being perceived as a problem. About 20 years after the invention of the personal computer, the general population started to feel overwhelmed by the amount of information being generated. We thrive on information, we depend on information, and yet we can also choke on it. We have available to us more information than one person could ever hope to process.

Software Delivery Enablement, Not Developer Productivity

Software delivery enablement and 2023’s trend of platform engineering won’t succeed by focusing solely on people and technology. At most companies, processes need an overhaul too. A team has “either a domain that they’re working in or they have a piece of functionality that they have to deliver,” she said. “Are they working together to deliver that thing? And, if not, what do we have to do to improve that?” Developer enablement should be concentrated at the team outcome level, says Daugherty, and can be positively influenced by four key capabilities: continuous integration and continuous delivery (CI/CD); automation and Infrastructure as Code (IaC); integrated testing and security; and immediate feedback. “Accelerate,” the iconic, metrics-centric guide to DevOps and scaling high-performing teams, identifies decisions proven to help teams speed up delivery. One is that when teams are empowered to choose which tools they use, performance improves.

Quote for the day:

“Success is actually a short race - a sprint fueled by discipline just long enough for habit to kick in and take over.” -- Gary W. Keller