Daily Tech Digest - July 31, 2024

Rise of Smart Testing: How AI is Revolutionizing Software Testing

Development teams must adopt new techniques when creating and testing applications. Test automation can greatly increase productivity and accuracy, but traditional frameworks frequently demand a great deal of manual labor for script construction and maintenance, which restricts their efficacy and capacity to scale. ... Agile development is built on continuous improvement, yet rapid code changes strain conventional test automation: test cases become brittle and need ongoing maintenance, slowing the release cycle. This is where self-healing test automation comes in. AI-capable frameworks recognize these changes and adjust accordingly, which translates into shorter release cycles, less maintenance overhead, and self-healing test scripts. ... Extensive test coverage is difficult to achieve with traditional testing techniques. Artificial intelligence (AI) fills this gap by automatically generating a wide range of test cases from requirements, code, and previous tests, covering positive and negative scenarios as well as edge cases that human testers might overlook.
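As a rough illustration of the self-healing idea, the Python sketch below (hypothetical, not taken from any particular framework) tries a primary element locator, falls back to known alternates when the UI changes, and records which locator succeeded so the suite can be repaired instead of failing the run.

    from dataclasses import dataclass, field
    from typing import Callable, Optional

    @dataclass
    class SelfHealingLocator:
        """Tries the primary locator first, then known fallbacks."""
        primary: str
        fallbacks: list = field(default_factory=list)
        healed_to: Optional[str] = None  # which alternate locator last worked

        def resolve(self, find: Callable[[str], Optional[object]]):
            # `find` is whatever lookup the UI driver provides, e.g. a
            # function returning an element for a CSS selector, or None.
            for selector in [self.primary, *self.fallbacks]:
                element = find(selector)
                if element is not None:
                    if selector != self.primary:
                        # "Heal": remember the working locator for later repair.
                        self.healed_to = selector
                    return element
            raise LookupError(f"No locator matched: {self.primary}")

    # Usage with a dictionary standing in for a real DOM/driver:
    fake_dom = {"button.checkout-v2": "<button>"}
    locator = SelfHealingLocator("button.checkout", ["button.checkout-v2"])
    locator.resolve(fake_dom.get)
    print(locator.healed_to)  # button.checkout-v2

An AI-assisted framework would go further, ranking candidate locators by similarity to the original element rather than using a fixed fallback list.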


What CISOs need to keep CEOs (and themselves) out of jail

Considering the changes in the Cybersecurity Framework 2.0 (CSF 2.0), which emphasize governance and communication with the board of directors, Sullivan is right to assume that liability will not stop at the CISO and will likely move upwards. In his essay, Sullivan urges CEOs to give CISOs greater resources to do their jobs. But if he’s talking about funding to purchase more security controls, this might be a hard sell for CEOs. ... CEOs would benefit from showing that they care about cybersecurity and adding metrics to company reports to demonstrate it is a significant concern. For CISOs, agreeing to a set of metrics with the CEO would provide a visible North Star and a forcing function for aligning resources and headcount to ensure metrics continue to trend in the right direction. ... CEOs who are serious about cybersecurity must prioritize collaborating with their CISOs and putting them in the rotation for regular meetings. A healthy budget increase for tools may be necessary as AI injects many new risks, but it is neither sufficient nor the most important step. CISOs need better people and better processes to deliver on promises of keeping the enterprise safe.


Who should own cloud costs?

The exponential growth of AI and generative AI initiatives is often identified as the true culprit. Although packed with potential, these advanced technologies consume extensive cloud resources, driving up costs that organizations often struggle to manage effectively. The main issues usually stem from a lack of visibility and control over these expenses. The problem goes beyond just tossing around the term “finops” at meetings; it comes down to a fundamental understanding of who owns and controls cloud costs in the organization. Trying to identify cloud cost ownership and control often becomes a confusing free-for-all. ... Why does giving engineering control over cloud costs make such a difference? For one, engineers are typically closer to the actual usage and deployment of cloud resources. When they build something to run on the cloud, they are more aware of how applications and data storage systems consume cloud resources. Engineers can quickly identify and rectify inefficiencies, ensuring that cloud resources are used cost-effectively. Moreover, engineers with skin in the game are more likely to align their projects with broader business goals, translating technical decisions into tangible business outcomes.
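One concrete way to establish that ownership is to tag every resource with the team that runs it and roll spend up per team, so engineers see their own bill. The sketch below is a minimal illustration assuming hypothetical billing records with a team tag; real data would come from your provider's cost export or billing API.

    from collections import defaultdict

    # Hypothetical billing records; real ones would come from the cloud
    # provider's cost export, keyed by an ownership tag on each resource.
    billing_records = [
        {"service": "compute", "team": "search", "usd": 1240.50},
        {"service": "storage", "team": "search", "usd": 310.00},
        {"service": "compute", "team": "ml-platform", "usd": 9875.25},
        {"service": "gpu", "team": "ml-platform", "usd": 24100.00},
    ]

    def cost_by_owner(records):
        """Roll up spend per owning team, largest spender first."""
        totals = defaultdict(float)
        for rec in records:
            totals[rec["team"]] += rec["usd"]
        return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

    print(cost_by_owner(billing_records))
    # {'ml-platform': 33975.25, 'search': 1550.5}

The hard part in practice is not the arithmetic but enforcing consistent tagging, which is exactly the visibility gap the article describes.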


Generative AI and Observability – How to Know What Good Looks Like

In software development circles, observability is often defined as the combination of logs, traces, and metric data that shows how applications perform. In classic mechanical engineering and control theory, observability looks at a system's inputs and outputs to judge how changes affect the results. In practice, looking at the initial requests and what gets returned provides data for judging performance. Alongside this, there is the quality of the output to consider: Did the result answer the user’s question, and how accurate was the answer? Were there any hallucinations in the response that would affect the user? And where did those results come from? Tracking AI hallucination rates across different LLMs and services shows how those services perform, with levels of inaccuracy varying from around 2.5 percent to 22.4 percent. All the steps involved in managing your data and generative AI app can affect the quality and speed of response at runtime. For example, retrieval augmented generation (RAG) lets you find and deliver company data in the right format to the LLM so that this context can produce a more relevant response.
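In practice this means emitting one structured record per generation request that captures both performance signals (latency, tokens) and quality signals (grounding, user feedback). The sketch below is a minimal, hypothetical example; the field names and the grounding check are assumptions, not any specific product's schema.

    import time
    import uuid
    from dataclasses import dataclass, asdict
    from typing import Optional

    @dataclass
    class LLMRequestRecord:
        # One observability record per generation request.
        request_id: str
        model: str
        latency_ms: float
        prompt_tokens: int
        completion_tokens: int
        grounded: bool              # did the RAG context support the answer?
        user_rating: Optional[int]  # thumbs-up/down style feedback, if any

    def timed_generation(model, prompt, generate):
        start = time.perf_counter()
        answer = generate(prompt)  # the actual LLM call goes here
        latency_ms = (time.perf_counter() - start) * 1000
        return answer, LLMRequestRecord(
            request_id=str(uuid.uuid4()),
            model=model,
            latency_ms=latency_ms,
            prompt_tokens=len(prompt.split()),       # stand-in for a real tokenizer
            completion_tokens=len(answer.split()),
            grounded=True,  # set by a downstream grounding/hallucination check
            user_rating=None,
        )

    answer, record = timed_generation(
        "demo-model", "What is observability?", lambda p: "Logs, traces and metrics."
    )
    print(asdict(record))

Aggregating the `grounded` flag over time is one simple way to track the hallucination rates the article mentions.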


Security platforms offer help to tame product complexity, but skepticism remains

The biggest issue enterprises cited was what they saw as an inherent contradiction between the notion of a platform, which to them had the connotation of a framework on which things were built, and the specialization of most offerings. “You can’t have five foundations for one building,” one CSO said sourly, pointing out that there are platforms for network, cloud, data center, application, and probably even physical security. While enterprises hoped that platforms would somehow unify security, they actually seemed to divide it. ... It seems to me that divided security responsibility, arising from the lack of a single CSO in charge, is also a factor in the platform question. Vendors who sell into such an account not only have less incentive to promote a unifying security platform vision, but may have a direct motivation not to do so. Of 181 enterprises, 47 admit that their security portfolio was created, and is sustained, by two or more organizations, and every enterprise in this group is without a CSO. Who would a security platform provider call on in these situations? Would any of the organizations involved in security want to share their decision power with another group?


The cost of a data breach continues to escalate

The type of attack influenced the financial damage, the report noted. Destructive attacks, in which the bad actors delete data and destroy systems, cost the most: $5.68 million per breach, up from $5.23 million in 2023. Data exfiltration, in which data is stolen, and ransomware, in which data is encrypted and a ransom demanded, came second and third at $5.21 million and $4.91 million respectively. However, noted Fritz Jean-Louis, principal cybersecurity advisor at Info-Tech Research Group, sometimes attackers combine their tactics. “Double extortion ransomware attacks are a key factor that is influencing the cost of data breaches,” he said in an email. “Since 2023, we have observed that ransomware attacks now include double extortion attacks ...” “This risk of shadow data will become even more elevated in the AI era, with data serving as the foundation on which new AI-powered applications and use cases are being built,” added Jennifer Kady, vice president, security at IBM. “Gaining control and visibility over shadow data from a security perspective has emerged as a top priority as companies move quickly to adopt generative AI, while also ensuring security and privacy are at the forefront.”


If You Are Reachable, You Are Breachable, and Firewalls & VPNs Are the Front Door

It’s about understanding that the network is no longer a castle to be fortified but merely a conduit, with entity-to-entity access authorized discretely for every connection, based on business policies informed by the identity and context of the connecting entities. Gone are IP-based policies and ACLs, persistent tunnels, trusted and untrusted zones, and implicit trust. With a zero-trust architecture in place, the internet becomes the corporate network, and point-to-point networking fades in relevance over time. Firewalls become like the mainframe, serving a diminishing set of legacy functions and no longer hindering the agility of a mobile and cloud-driven enterprise. This shift is not just a technical necessity but also a regulatory and compliance imperative. With government bodies mandating zero-trust models and new SEC regulations requiring breach reporting, warning shots have been fired. Cybersecurity is no longer just an IT issue; it has been elevated to a boardroom priority, with far-reaching implications for business continuity and reputation. Many access control solutions have claimed to adopt zero trust by adding dynamic trust.
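To make the per-connection model concrete, here is a minimal sketch of a zero-trust style authorization check. The fields and policy table are illustrative assumptions, not any vendor's schema: every connection is evaluated against identity and context, with no network zones and no implicit trust.

    from dataclasses import dataclass

    @dataclass
    class AccessContext:
        """Identity and context evaluated per connection (illustrative)."""
        user: str
        role: str
        device_compliant: bool
        application: str

    # Business policy: which roles may reach which applications.
    POLICY = {
        "payroll-app": {"finance"},
        "build-server": {"engineering"},
    }

    def authorize(ctx: AccessContext) -> bool:
        # No IP ranges, zones, or persistent tunnels: each request is
        # judged on who is connecting, from what posture, to what app.
        allowed_roles = POLICY.get(ctx.application, set())
        return ctx.device_compliant and ctx.role in allowed_roles

    print(authorize(AccessContext("ana", "finance", True, "payroll-app")))   # True
    print(authorize(AccessContext("bob", "finance", False, "payroll-app")))  # False: fails posture

A real policy engine would also weigh signals like location, time, and risk score, and re-evaluate continuously rather than once per session.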


Indian construction industry leads digital transformation in Asia Pacific

“While challenges like the increasing prices of raw materials and growing competition persist in the Indian market, its current strong economic state and steady outlook for the forthcoming years, as reported by the IMF, have provided a congenial atmosphere for businesses to evaluate and adopt newer technologies, and consequently lead the Asia Pacific market in terms of investments in transformational technologies. Indian businesses have aptly recognised this phase as the ideal time to leverage digital technologies to identify newer growth pockets, usher in efficiencies throughout project lifecycles and gain a competitive edge,” said Sumit Oberoi, Senior Industry Strategist, Asia Pacific at Autodesk. “Priority areas for construction businesses to improve digital adoption include starting small, selecting a digital champion, tracking a range of success measures, and asking whether your business is AI ready,” he added. ... David Rumbens, Partner at Deloitte Access Economics, said, “The Indian construction sector, fuelled by a surge in demand for affordable housing as well as supportive government policies to boost urban infrastructure, is poised to make a strong contribution as India’s economy grows by 6.9% over the next year.”


Recovering from CrowdStrike, Prepping for the Next Incident

In the future, organizations could consider whether outside factors make a potential software acquisition riskier, Sayers said. A product widely used by Fortune 100 companies, for example, carries the added risk of being an attractive target to attackers hoping to hit many such victims in a single attack. “There is a soft underbelly in the global IT world, where you can have instances where a particular piece of software or a particular vendor is so heavily relied upon that they themselves could potentially become a target in the future,” Sayers said. Organizations also need to identify any single points of failure in their environments: instances where they rely on an IT solution whose failure, whether deliberate or accidental, could disrupt the whole organization. When one is identified, they need to begin planning around the risks and looking for backup processes. Sayers noted that some types of resiliency measures may be too expensive for most organizations to adopt; some entities are already priced out of just backing up all their data, and many would be unable to afford maintaining backup, alternate IT infrastructure to which they could fail over.
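Finding those single points of failure can start as a simple dependency inventory: list what each system relies on and flag anything that nearly everything depends on. The sketch below is a toy illustration with made-up service names, assuming you can enumerate dependencies; real environments would pull this from a CMDB or service catalog.

    from collections import defaultdict

    # Hypothetical service dependency map: service -> what it relies on.
    DEPENDS_ON = {
        "web-frontend": ["auth", "endpoint-agent"],
        "payments": ["auth", "endpoint-agent"],
        "reporting": ["warehouse", "endpoint-agent"],
        "warehouse": ["endpoint-agent"],
    }

    def single_points_of_failure(deps, threshold=0.8):
        """Flag dependencies so widely relied upon that their disruption
        would take down most of the environment."""
        counts = defaultdict(int)
        for requirements in deps.values():
            for dep in set(requirements):
                counts[dep] += 1
        total = len(deps)
        return [d for d, n in counts.items() if n / total >= threshold]

    print(single_points_of_failure(DEPENDS_ON))  # ['endpoint-agent']

A ubiquitous security agent, like the one at the center of the CrowdStrike incident, is exactly the kind of dependency this check surfaces.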


AI And Security: It Is Complicated But Doesn't Need To Be

While AI may present a potential risk for companies, it could also be part of the solution. Because AI processes information differently from humans, it can look at issues differently and come up with breakthrough solutions. For example, AI has produced better algorithms and solved mathematical problems that humans have struggled with for many years. When it comes to information security, algorithms are king, and AI, machine learning (ML), or a similar cognitive computing technology could come up with a way to secure data. This is a real benefit of AI: it can not only identify and sort massive amounts of information but also identify patterns, allowing organisations to see things that they never noticed before. This brings a whole new element to information security. ... As these solutions bring benefits to the workplace, companies may consider putting only non-sensitive data into such systems to limit exposure of internal data sets while driving efficiency across the organisation. However, organisations need to realise that they can’t have it both ways: data they put into such systems will not remain private.
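One practical expression of that caution is to scrub obviously sensitive values before data leaves the organisation, for example before it is sent to an external AI service. The sketch below is a deliberately simple, assumption-laden illustration with two regex patterns; a production classifier would be far more thorough, possibly ML-assisted as the article suggests.

    import re

    # Illustrative patterns only; real deployments need a much richer
    # catalogue (names, IDs, secrets) and proper data classification.
    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact(text: str) -> str:
        """Replace likely-sensitive values before text leaves the organisation."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} removed]", text)
        return text

    print(redact("Contact jane.doe@example.com, card 4111 1111 1111 1111."))
    # Contact [email removed], card [card removed].

The same "can't have it both ways" rule still applies: anything the filter misses should be assumed exposed once submitted.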



Quote for the day:

“When we give ourselves permission to fail, we, at the same time, give ourselves permission to excel.” -- Eloise Ristad
