Showing posts with label algorithm. Show all posts
Showing posts with label algorithm. Show all posts

Daily Tech Digest - September 08, 2025


Quote for the day:

"Let no feeling of discouragement prey upon you, and in the end you are sure to succeed." -- Abraham Lincoln


Coding With AI Assistants: Faster Performance, Bigger Flaws

One challenge comes in the form of how AI coding assistants tend to package their code. Rather than delivering bite-size pieces, they generally deliver larger code pull requests for porting into the main project repository. Apiiro saw AI code assistants deliver three to four times as many code commits - meaning changes to a code repository - than non-AI code assistants, but packaging fewer pull requests. The problem is that larger PRs are inherently riskier and more time-consuming to verify. "Bigger, multi-touch PRs slow review, dilute reviewer attention and raise the odds that a subtle break slips through," said Itay Nussbaum, a product manager at Apiiro. ... At the same time, the tools generated deeper problems, in the form of a 150% increase in architectural flaws and an 300% increase in privilege issues. "These are the kinds of issues scanners miss and reviewers struggle to spot - broken auth flows, insecure designs, systemic weaknesses," Nussbaum said. "In other words, AI is fixing the typos but creating the time bombs." The tools also have a greater tendency to leak cloud credentials. "Our analysis found that AI-assisted developers exposed Azure service principals and storage access keys nearly twice as often as their non-AI peers," Nussbaum said. "Unlike a bug that can be caught in testing, a leaked key is live access: an immediate path into the production cloud infrastructure."


IT Leadership Is More Change Management Than Technical Management

Planning is considered critical in business to keep an organization moving forward in a predictable way, but Mahon doesn’t believe in the traditional annual and long-term planning in which lots of time is invested in creating the perfect plan which is then executed. “Never get too engaged in planning. You have a plan, but it’s pretty broad and open-ended. The North Star is very fuzzy, and it never gets to be a pinpoint [because] you need to focus on all the stuff that's going on around you,” says Mahon. “You should know exactly what you're going to do in the next two to three months. From three to six months out, you have a really good idea what you're going to do but be prepared to change. And from six to nine months or a year, [I wait until] we get three months away before I focus on it because tech and business needs change rapidly.” ... “The good ideas are mostly common knowledge. To be honest, I don’t think there are any good self-help books. Instead, I have a leadership coach who is also my mental health coach,” says Mahon. “Books try to get you to change who you are, and it doesn’t work. Be yourself. I have a leadership coach who points out my flaws, 90% of which I’m already aware of. His philosophy is don’t try to fix the flaw, address the flaw so, for example, I’m mindful about my tendency to speak too directly.”


The Anatomy of SCREAM: A Perfect Storm in EA Cupboard

SCREAM- Situational Chaotic Realities of Enterprise Architecture Management- captures the current state of EA practice, where most organizations, from medium to large complexity, struggle to derive optimal value from investments in enterprise architecture capabilities. It’s the persistent legacy challenges across technology stacks and ecosystems that need to be solved to meet strategic business goals and those moments when sudden, ill-defined executive needs are met with a hasty, reactive sprint, leading to a fractured and ultimately paralyzing effect on the entire organization. ... The paradox is that the very technologies offering solutions to business challenges are also key sources of architectural chaos, further entrenching reactive SCREAM. As noted, the inevitable chaos and fragmentation that emerge from continuous technology additions lead to silos and escalating compatibility issues. ... The chaos of SCREAM is not just an external force; it’s a product of our own making. While we preach alignment to the business, we often get caught up in our own storm in an EA cupboard. How often do we play EA on EA? ... While pockets of recognizable EA wins may exist through effective engagement, a true, repeatable value-add requires a seat at the strategic table. This means “architecture-first” must evolve beyond being a mere buzzword or a token effort, becoming a reliable approach that promotes collaborative success rather than individual credit-grabbing.


How Does Network Security Handle AI?

Detecting when AI models begin to vary and yield unusual results is the province of AI specialists, users and possibly the IT applications staff. But the network group still has a role in uncovering unexpected behavior. That role includes: Properly securing all AI models and data repositories on the network. Continuously monitoring all access points to the data and the AI system. Regularly scanning for network viruses and any other cyber invaders that might be lurking. ... both application and network teams need to ensure strict QA principles across the entire project -- much like network vulnerability testing. Develop as many adversarial prompt tests coming from as many different directions and perspectives as you can. Then try to break the AI system in the same way a perpetrator would. Patch up any holes you find in the process. ... Apply least privilege access to any AI resource on the network and continually monitor network traffic. This philosophy should also apply to those on the AI application side. Constrict the AI model being used to the specific use cases for which it was intended. In this way, the AI resource rejects any prompts not directly related to its purpose. ... Red teaming is ethical hacking. In other words, deploy a team whose goal is to probe and exploit the network in any way it can. The aim is to uncover any network or AI vulnerability before a bad actor does the same.


Lack of board access: The No. 1 factor for CISO dissatisfaction

CISOs who don’t get access to the board are often buried within their organizations. “There are a lot of companies that will hire at a director level or even a senior manager level and call it a CISO. But they don’t have the authority and scope to actually be able to execute what a CISO does,” says Nick Kathmann, CISO at LogicGate. Instead of reporting directly to the board or CEO, these CISOs will report to a CIO, CTO or other executive, despite the problems that can arise in this type of reporting structure. CIOs and CTOs are often tasked with implementing new technology. The CISO’s job is to identity risks and ensure the organization is secure. “If the CIO doesn’t like those risks or doesn’t want to do anything to fix those risks, they’ll essentially suppress them [CISOs] as much as they can,” says Kathmann. ... Getting in front of the board is one thing. Effectively communicating cybersecurity needs and getting them met is another. It starts with forming relationships with C-suite peers. Whether CISOs are still reporting up to another executive or not, they need to understand their peers’ priorities and how cybersecurity can mesh with those. “The CISO job is an executive job. As an executive, you rely completely on your peer relationships. You can’t do anything as an executive in a vacuum,” says Barrack. Working in collaboration, rather than contention, with other executives can prepare CISOs to make the most of their time in front of the board.


From Vault Sprawl to Governance: How Modern DevOps Teams Can Solve the Multi-Cloud Secrets Management Nightmare

Every time an application is updated or a new service is deployed, one or multiple new identities are born. These NHIs include service accounts, CI/CD pipelines, containers, and other machine workloads, the running pieces of software that connect to other resources and systems to do work. Enterprises now commonly see 100 or more NHIs for every single human identity. And that number keeps growing. ... Fixing this problem is possible, but it requires an intentional strategy. The first step is creating a centralized inventory of all secrets. This includes secrets stored in vaults, embedded in code, or left exposed in CI/CD pipelines and environments. Orphaned and outdated secrets should be identified and removed. Next, organizations must shift left. Developers and DevOps teams require tools to detect secrets early, before they are committed to source control or merged into production. Educating teams and embedding detection into the development process significantly reduces accidental leaks. Governance must also include lifecycle mapping. Secrets should be enriched with metadata such as owner, creation date, usage frequency, and last rotation. Automated expiration and renewal policies help enforce consistency and reduce long-term risk. Contributions should be both product- and vendor-agnostic, focusing on market insights and thought leadership.


Digital Public Infrastructure: The backbone of rural financial inclusion

When combined, these infrastructures — UPI for payments, ONDC for commerce, AAs for credit, CSCs for handholding support and broadband for connectivity form a powerful ecosystem. Together, these enable a farmer to sell beyond the village, receive instant payment and leverage that income proof for a micro-loan, all within a seamless digital journey. Adding to this, e-KYC ensures that identity verification is quick, low-cost and paperless, while AePS provides last-mile access to cash and banking services, ensuring inclusion even for those outside the smartphone ecosystem. This integration reduces dependence on middlemen, enhances transparency and fosters entrepreneurship. ...  Of course, progress does not mean perfection. There are challenges that must be addressed with urgency and sensitivity. Many rural merchants hesitate to fully embrace digital commerce due to uncertainties around Goods and Services Tax (GST) compliance. Digital literacy, though improving, still varies widely, particularly among older populations and women. Infrastructure costs such as last-mile broadband and device affordability remain burdensome for small operators. These are not reasons to slow down but opportunities to fine-tune policy. Simplifying tax processes for micro-enterprises, investing in vernacular digital literacy programmes, subsidising rural connectivity and embedding financial education into community touchpoints such as CSCs will be essential to ensure no one is left behind.


Cybersecurity research is getting new ethics rules, here’s what you need to know

Ethics analysis should not be treated as a one-time checklist. Stakeholder concerns can shift as a project develops, and researchers may need to revisit their analysis as they move from design to execution to publication. ...“Stakeholder ethical concerns impact academia, industry, and government,” Kalu said. “Security teams should replace reflexive defensiveness with structured collaboration: recognize good-faith research, provide intake channels and SLAs, support coordinated disclosure and pre-publication briefings, and engage on mitigation timelines. A balanced, invitational posture, rather than an adversarial one, will reduce harm, speed remediation, and encourage researchers to keep working on that project.” ... While the new requirements target academic publishing, the ideas extend to industry practice. Security teams often face similar dilemmas when deciding whether to disclose vulnerabilities, release tools, or adopt new defensive methods. Thinking in terms of stakeholders provides a way to weigh the benefits and risks of those decisions. ... Peng said ethical standards should be understood as “scaffolds that empower thoughtful research,” providing clarity and consistency without blocking exploration of adversarial scenarios. “By building ethics into the process from the start and revisiting it as research develops, we can both protect stakeholders and ensure researchers can study the potential threats that adversaries, who face no such constraints, may exploit,” she said.


From KYC to KYAI: Why ‘Algorithmic Transparency’ is Now Critical in Banking

This growing push for transparency into AI models has introduced a new acronym to the risk and compliance vernacular: KYAI, or "know your AI." Just like finance institutions must know the important details about their customers, so too must they understand the essential components of their AI models. The imperative has evolved beyond simply knowing "who" to "how." Based on my work helping large banks and other financial institutions integrate AI into their KYC workflows over the last few years, I’ve seen what can happen when these teams spend the time vetting their AI models and applying rigorous transparency standards. And, I’ve seen what can happen when they become overly trusting of black-box algorithms that deliver decisions based on opaque methods with no ability to attribute accountability. The latter rarely ever ends up being the cheapest or fastest way to produce meaningful results. ... The evolution from KYC to KYAI is not merely driven by regulatory pressure; it reflects a fundamental shift in how businesses operate today. Financial institutions that invest in AI transparency will be equipped to build greater trust, reduce operational risks, and maintain auditability without missing a step in innovation. The transformation from black box AI to transparent, governable systems represents one of the most significant operational challenges facing financial institutions today.


Why compliance clouds are essential

From a technical perspective, compliance clouds offer something that traditional clouds can’t match, these are the battle-tested security architectures. By implementing them, the organizations can reduce their data breach risk by 30-40% compared to standard cloud deployments. This is because compliance clouds are constantly reviewed and monitored by third-party experts, ensuring that we are not just getting compliance, but getting an enterprise-grade security that’s been validated by some of the most security-conscious organizations in the world. ... What’s particularly interesting is that 58% of this market is software focused. As organizations prioritize automation and efficiency in managing complex regulatory requirements, this number is set to grow further. Over 75% of federal agencies have already shifted to cloud-based software to meet evolving compliance needs. Following this, we at our organizations have also achieved FedRAMP® High Ready compliance for Cloud. ... Cloud compliance solutions deliver far-reaching benefits that extend well beyond regulatory adherence, offering a powerful mix of cost efficiency, trust building, adaptability, and innovation enablement. ... In an era where trust is a competitive currency, compliance cloud certifications serve as strong differentiators, signaling an organization’s unwavering commitment to data protection and regulatory excellence.

Daily Tech Digest - June 10, 2025


Quote for the day:

"Life is not about finding yourself. Life is about creating yourself." -- Lolly Daskal


AI Is Making Cybercrime Quieter and Quicker

The rise of AI-enabled cybercrime is no longer theoretical. Nearly 72% of organisations In India said that they have encountered AI-powered cyber threats in the past year. These threats are scaling fast, with a 2X increase reported by 70% and a 3X increase by 12% of organisations. This new class of AI-powered threats are harder to detect and often exploit weaknesses in human behaviour, misconfigurations, and identity systems. In India, the top AI-driven threats reported include AI-assisted credential stuffing and brute force attacks, Deepfake impersonation in business email compromise (BEC), AI-powered malware (Polymorphic malware), Automated reconnaissance of attack surfaces, and AI-generated phishing emails. ... The most disruptive threats are no longer the most obvious. Topping the list are unpatched and zero-day exploits, followed closely by insider threats, cloud misconfigurations, software supply chain attacks, and human error. These threats are particularly damaging because they often go undetected by traditional defences, exploiting internal weaknesses and visibility gaps. As a result, these quieter, more complex risks are now viewed as more dangerous than well-known threats like ransomware or phishing. Traditional threats such as phishing and malware are still growing at a rate of ~10%, but this is comparatively modest —likely due to mature defences like endpoint protection and awareness training.


The Evolution and Future of the Relationship Between Business and IT

IT professionals increasingly serve as translators — converting executive goals into technical requirements, and turning technical realities into actionable business decisions. This fusion of roles has also led to the rise of cross-functional “fusion teams,” where IT and business units co-own projects from ideation through execution. ... Artificial Intelligence is already influencing how decisions are made and systems are managed. From intelligent automation to predictive analytics, AI is redefining productivity. According to a PwC report, AI is expected to contribute over $15 trillion to the global economy by 2030 — and IT organizations will play a pivotal role in enabling this transformation. At the same time, the lines between IT and the business will continue to blur. Platforms like low-code development tools, AI copilots, and intelligent data fabrics will empower business users to create solutions without traditional IT support — requiring IT teams to pivot further into governance, enablement, and strategy. Security, compliance, and data privacy will become even more important as businesses operate across fragmented and federated environments. ... The business-IT relationship has evolved from one rooted in infrastructure ownership to one centered on service integration, strategic alignment, and value delivery. IT is no longer just the department that runs servers or writes code — it’s the nervous system that connects capabilities, ensures reliability, and enables growth.


Can regulators trust black-box algorithms to enforce financial fairness?

Regulators, in their attempt to maintain oversight and comparability, often opt for rules-based regulation, said DiRollo. These are prescriptive, detailed requirements intended to eliminate ambiguity. However, this approach unintentionally creates a disproportionate burden on smaller institutions, he continued, DiRollo said, “Each bank must effectively build its own data architecture to interpret and implement regulatory requirements. For instance, calculating Risk-Weighted Assets (RWAs) demands banks to collate data across a myriad of systems, map this data into a bespoke regulatory model, apply overlays and assumptions to reflect the intent of the rule and interpret evolving guidance and submit reports accordingly.” ... Secondly around regulatory arbitrage. In this area, larger institutions with more sophisticated modelling capabilities can structure their portfolios or data in ways that reduce regulatory burdens without a corresponding reduction in actual risk. “The implication is stark: the fairness that regulators seek to enforce is undermined by the very framework designed to ensure it,” said DiRollo. While institutions pour effort into interpreting rules and submitting reports, the focus drifts from identifying and managing real risks. In practice, compliance becomes a proxy for safety – a dangerous assumption, in the words of DiRollo.


The legal questions to ask when your systems go dark

Legal should assume the worst and lean into their natural legal pessimism. There’s very little time to react, and it’s better to overreact than underreact (or not react at all). The legal context around cyber incidents is broad, but assume the worst-case scenario like a massive data breach. If that turns out to be wrong, even better! ... Even if your organization has a detailed incident response plan, chances are no one’s ever read it and that there will be people claiming “that’s not my job.” Don’t get caught up in that. Be the one who brings together management, IT, PR, and legal at the same table, and coordinate efforts from the legal perspective. ... If that means “my DPO will check the ROPA” – congrats! But if your processes are still a work in progress, you’re likely about to run a rapid, ad hoc data inventory: involving all departments, identifying data types, locations, and access controls. Yes, it will all be happening while systems are down and everyone’s panicking. But hey – serenity now, emotional damage later. You literally went to law school for this. ... You, as in-house or external legal support, really have to understand the organization and how its tech workflows actually function. I dream of a world where lawyers finally stop saying “we’ll just do the legal stuff,” because “legal stuff” remains abstract and therefore ineffective if you don’t put it in the context of a particular organization.


New Quantum Algorithm Factors Numbers With One Qubit

Ultimately, the new approach works because of how it encodes information. Classical computers use bits, which can take one of two values. Qubits, the quantum equivalent, can take on multiple values, because of the vagaries of quantum mechanics. But even qubits, once measured, can take on only one of two values, a 0 or a 1. But that’s not the only way to encode data in quantum devices, say Robert König and Lukas Brenner of the Technical University of Munich. Their work focuses on ways to encode information with continuous variables, meaning they can take on any values in a given range, instead of just certain ones. ... In the past, researchers have tried to improve on Shor’s algorithm for factoring by simulating a qubit using a continuous system, with its expanded set of possible values. But even if your system computes with continuous qubits, it will still need a lot of them to factor numbers, and it won’t necessarily go any faster. “We were wondering whether there’s a better way of using continuous variable systems,” König said. They decided to go back to basics. The secret to Shor’s algorithm is that it uses the number it’s factoring to generate what researchers call a periodic function, which has repeating values at regular intervals. Then it uses a mathematical tool called a quantum Fourier transform to identify the value of that period — how long it takes for the function to repeat.


What Are Large Action Models?

LAMs are LLMs trained on specific actions and enhanced with real connectivity to external data and systems. This makes the agents they power more robust than basic LLMs, which are limited to reasoning, retrieval and text generation. Whereas LLMs are more general-purpose, trained on a large data corpus, LAMs are more task-oriented. “LAMs fine-tune an LLM to specifically be good at recommending actions to complete a goal,” Jason Fournier, vice president of AI initiatives at the education platform Imagine Learning, told The New Stack. ... LAMs trained on internal actions could streamline industry-specific workflows as well. Imagine Learning, for instance, has developed a curriculum-informed AI framework to support teachers and students with AI-powered lesson planning. Fournier sees promise in automating administrative tasks like student registration, synthesizing data for educators and enhancing the learning experience. Or, Willson said, consider marketing: “You could tell an agentic AI platform with LAM technology, ‘Launch our new product campaign for the ACME software across all our channels with our standard messaging framework.'” Capabilities like this could save time, ensure brand consistency, and free teams to focus on high-level strategy.


Five mistakes companies make when retiring IT equipment: And how to avoid them

Outdated or unused IT assets often sit idle in storage closets, server rooms, or even employee homes for extended periods. This delay in decommissioning can create a host of problems. Unsecured, unused devices are prime targets for data breaches, theft, or accidental loss. Additionally, without a timely and consistent retirement process, organizations lose visibility into asset status, which can create confusion, non-compliance, or unnecessary costs. The best way to address this is by implementing in-house destruction solutions as an integrated part of the IT lifecycle. Rather than relying on external vendors or waiting until large volumes of devices pile up, organizations can equip themselves with high security data destruction machinery – such as hard drive shredders, degaussers, crushers, or disintegrators – designed to render data irretrievable on demand. This allows for immediate, on-site sanitization and physical destruction as soon as devices are decommissioned. Not only does this improve data control and reduce risk exposure, but it also simplifies chain-of-custody tracking by eliminating unnecessary handoffs. With in-house destruction capabilities, organizations can securely retire equipment at the pace their operations demand – no waiting, no outsourcing, and no compromise.


Event Sourcing Unpacked: The What, Why, and How

Event Sourcing offers significant benefits for systems that require persistent audit trails, rich debugging capabilities with event replay. It is especially effective in domains like finance, healthcare, e-commerce, and IoT, where every transaction or state change is critical and must be traceable. However, its complexity means that it isn’t ideal for every scenario. For applications that primarily engage in basic CRUD operations or demand immediate consistency, the overhead of managing an ever-growing event log, handling event schema evolution, and coping with eventual consistency can outweigh the benefits. In such cases, simpler persistence models may be more appropriate. When compared with related patterns, Event Sourcing naturally complements CQRS by decoupling read and write operations, and it enhances Domain-Driven Design by providing a historical record of domain events. Additionally, it underpins Event-Driven Architectures by facilitating loosely coupled, scalable communication. The decision to implement Event Sourcing should therefore balance its powerful capabilities against the operational and developmental complexities it introduces, ensuring it aligns with the project’s specific needs and long-term architectural goals.


Using Traffic Mirroring to Debug and Test Microservices in Production-Like Environments

At its core, traffic mirroring duplicates incoming requests so that, while one copy is served by the primary service, the other is sent to an identical service running in a test or staging environment. The response from the mirrored service is never returned to the client; it exists solely to let engineers observe, compare, or process data from real-world usage. ... Real-world traffic is messy. Certain bugs only appear when a request contains a specific sequence of API calls or unexpected data patterns. By mirroring production traffic to a shadow service, developers can catch these hard-to-reproduce errors in a controlled environment. ... Mirroring production traffic allows teams to observe how a new service version handles the same load as its predecessor. This testing is particularly useful for identifying regressions in response time or resource utilization. Teams can compare metrics like CPU usage, memory consumption, and request latency between the primary and shadow services to determine whether code changes negatively affect performance. Before rolling out a new feature, developers must ensure it works correctly under production conditions. Traffic mirroring lets a new microservice version be deployed with feature flags while still serving requests from the stable version.


Don’t be a victim of high cloud costs

The simplest reason for the rising expenses associated with cloud services is that major cloud service providers consistently increase their prices. Although competition among these providers helps keep prices stable to some extent, businesses now face inflation, the introduction of new premium services, and the complex nature of pricing models, which are often shrouded in mystery. All these factors complicate cost management. Meanwhile, many businesses have inefficient usage patterns. The typical approach to adoption involves migrating existing systems to the cloud without modifying or improving their functions for cloud environments. This “lift and shift” shortcut often leads to inefficient resource allocation and unnecessary expenses. ... First, before embracing cloud technology for its advantages, companies should develop a well-defined plan that outlines the rationale, objectives, and approach to using cloud services. Identify which tasks are suitable for cloud deployment and which are not, and assess whether a public, private, or hybrid cloud setup aligns with your business and budget objectives. Second, before transferring data, ensure that you optimize your tasks to improve efficiency and performance. Please resist the urge to move existing systems to the cloud in their current state. ... Third, effectively managing cloud expenses relies on implementing strong governance practices.

Daily Tech Digest - April 24, 2025


Quote for the day:

“Remember, teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability.” -- Patrick Lencioni



Algorithm can make AI responses increasingly reliable with less computational overhead

The algorithm uses the structure according to which the language information is organized in the AI's large language model (LLM) to find related information. The models divide the language information in their training data into word parts. The semantic and syntactic relationships between the word parts are then arranged as connecting arrows—known in the field as vectors—in a multidimensional space. The dimensions of space, which can number in the thousands, arise from the relationship parameters that the LLM independently identifies during training using the general data. ... Relational arrows pointing in the same direction in this vector space indicate a strong correlation. The larger the angle between two vectors, the less two units of information relate to one another. The SIFT algorithm developed by ETH researchers now uses the direction of the relationship vector of the input query (prompt) to identify those information relationships that are closely related to the question but at the same time complement each other in terms of content. ... By contrast, the most common method used to date for selecting the information suitable for the answer, known as the nearest neighbor method, tends to accumulate redundant information that is widely available. The difference between the two methods becomes clear when looking at an example of a query prompt that is composed of several pieces of information.


Bring Your Own Malware: ransomware innovates again

The approach taken by DragonForce and Anubis shows that cybercriminals are becoming increasingly sophisticated in the way they market their services to potential affiliates. This marketing approach, in which DragonForce positions itself as a fully-fledged service platform and Anubis offers different revenue models, reflects how ransomware operators behave like “real” companies. Recent research has also shown that some cybercriminals even hire pentesters to test their ransomware for vulnerabilities before deploying it. So it’s not just dark web sites or a division of tasks, but a real ecosystem of clear options for “consumers.” We may also see a modernization of dark web forums, which currently resemble the online platforms of the 2000s. ... Although these developments in the ransomware landscape are worrying, Secureworks researchers also offer practical advice for organizations to protect themselves. Above all, defenders must take “proactive preventive” action. Fortunately and unfortunately, this mainly involves basic measures. Fortunately, because the policies to be implemented are manageable; unfortunately, because there is still a lack of universal awareness of such security practices. In addition, organizations must develop and regularly test an incident response plan to quickly remediate ransomware activities.


Phishing attacks thrive on human behaviour, not lack of skill

Phishing draws heavily from principles of psychology and classic social engineering. Attacks often play on authority bias, prompting individuals to comply with requests from supposed authority figures, such as IT personnel, management, or established brands. Additionally, attackers exploit urgency and scarcity by sending warnings of account suspensions or missed payments, and manipulate familiarity by referencing known organisations or colleagues. Psychologs has explained that many phishing techniques bear resemblance to those used by traditional confidence tricksters. These attacks depend on inducing quick, emotionally-driven decisions that can bypass normal critical thinking defences. The sophistication of phishing is furthered by increasing use of data-driven tactics. As highlighted by TechSplicer, attackers are now gathering publicly available information from sources like LinkedIn and company websites to make their phishing attempts appear more credible and tailored to the recipient. Even experienced professionals often fall for phishing attacks, not due to a lack of intelligence, but because high workload, multitasking, or emotional pressure make it difficult to properly scrutinise every communication. 

What Steve Jobs can teach us about rebranding

Humans like to think of themselves as rational animals, but it comes as no news to marketers that we are motivated to a greater extent by emotions. Logic brings us to conclusions; emotion brings us to action. Whether we are creating a poem or a new brand name, we won’t get very far if we treat the task as an engineering exercise. True, names are formed by putting together parts, just as poems are put together with rhythmic patterns and with rhyming lines, but that totally misses what is essential to a name’s success or a poem’s success. Consider Microsoft and Apple as names. One is far more mechanical, and the other much more effective at creating the beginning of an experience. While both companies are tremendously successful, there is no question that Apple has the stronger, more emotional experience. ... Different stakeholders care about different things. Employees need inspiration; investors need confidence; customers need clarity on what’s in it for them. Break down these audiences and craft tailored messages for each group. Identifying the audience groups can be challenging. While the first layer is obvious—customers, employees, investors, and analysts—all these audiences are easy to find and message. However, what is often overlooked is the individuals in those audiences who can more positively influence the rebrand. It may be a particular journalist, or a few select employees. 


Coaching AI agents: Why your next security hire might be an algorithm

Like any new team member, AI agents need onboarding before operating at maximum efficacy. Without proper onboarding, they risk misclassifying threats, generating excessive false positives, or failing to recognize subtle attack patterns. That’s why more mature agentic AI systems will ask for access to internal documentation, historical incident logs, or chat histories so the system can study them and adapt to the organization. Historical security incidents, environmental details, and incident response playbooks serve as training material, helping it recognize threats within an organization’s unique security landscape. Alternatively, these details can help the agentic system recognize benign activity. For example, once the system knows what are allowed VPN services or which users are authorized to conduct security testing, it will know to mark some alerts related to those services or activities as benign. ... Adapting AI isn’t a one-time event, it’s an ongoing process. Like any team member, agentic AI deployments improve through experience, feedback, and continuous refinement. The first step is maintaining human-in-the-loop oversight. Like any responsible manager, security analysts must regularly review AI-generated reports, verify key findings, and refine conclusions when necessary. 


Cyber insurance is no longer optional, it’s a strategic necessity

Once the DPDPA fully comes into effect, it will significantly alter how companies approach data protection. Many enterprises are already making efforts to manage their exposure, but despite their best intentions, they can still fall victim to breaches. We anticipate that the implementation of DPDPA will likely lead to an increase in the uptake of cyber insurance. This is because the Act clearly outlines that companies may face penalties in the event of a data breach originating from their environment. Since cyber insurance policies often include coverage for fines and penalties, this will become an increasingly important risk-transfer tool. ... The critical question has always been: how can we accurately quantify risk exposure? Specifically, if a certain event were to occur, what would be the financial impact? Today, there are advanced tools and probabilistic models available that allow organisations to answer this question with greater precision. Scenario analyses can now be conducted to simulate potential events and estimate the resulting financial impact. This, in turn, helps enterprises determine the appropriate level of insurance coverage, making the process far more data-driven and objective. Post-incident technology also plays a crucial role in forensic analysis. When an incident occurs, the immediate focus is on containment. 


Adversary-in-the-Middle Attacks Persist – Strategies to Lessen the Impact

One of the most recent examples of an AiTM attack is the attack on Microsoft 365 with the PhaaS toolkit Rockstar 2FA, an updated version of the DadSec/Phoenix kit. In 2024, a Microsoft employee accessed an attachment that led them to a phony website where they authenticated the attacker’s identity through the link. In this instance, the employee was tricked into performing an identity verification session, which granted the attacker entry to their account. ... As more businesses move online, from banks to critical services, fraudsters are more tempted by new targets. The challenges often depend on location and sector, but one thing is clear: Fraud operates without limitations. In the United States, AiTM fraud is progressively targeting financial services, e-commerce and iGaming. For financial services, this means that cybercriminals are intercepting transactions or altering payment details, inducing hefty losses. Concerning e-commerce and marketplaces, attackers are exploiting vulnerabilities to intercept and modify transactions through data manipulation, redirecting payments to their accounts. ... As technology advances and fraud continues to evolve with it, we face the persistent challenge of increased fraudster sophistication, threatening businesses of all sizes. 


From legacy to lakehouse: Centralizing insurance data with Delta Lake

Centralizing data and creating a Delta Lakehouse architecture significantly enhances AI model training and performance, yielding more accurate insights and predictive capabilities. The time-travel functionality of the delta format enables AI systems to access historical data versions for training and testing purposes. A critical consideration emerges regarding enterprise AI platform implementation. Modern AI models, particularly large language models, frequently require real-time data processing capabilities. The machine learning models would target and solve for one use case, but Gen AI has the capability to learn and address multiple use cases at scale. In this context, Delta Lake effectively manages these diverse data requirements, providing a unified data platform for enterprise GenAI initiatives. ... This unification of data engineering, data science and business intelligence workflows contrasts sharply with traditional approaches that required cumbersome data movement between disparate systems (e.g., data lake for exploration, data warehouse for BI, separate ML platforms). Lakehouse creates a synergistic ecosystem, dramatically accelerating the path from raw data collection to deployed AI models generating tangible business value, such as reduced fraud losses, faster claims settlements, more accurate pricing and enhanced customer relationships.


How AI and Data-Driven Decision Making Are Reshaping IT Ops

Rather than relying on intuition, IT decision-makers now lean on insights drawn from operational data, customer feedback, infrastructure performance, and market trends. The objective is simple: make informed decisions that align with broader business goals while minimizing risk and maximizing operational efficiency. With the help of analytics platforms and business intelligence tools, these insights are often transformed into interactive dashboards and visual reports, giving IT teams real-time visibility into performance metrics, system anomalies, and predictive outcomes. A key evolution in this approach is the use of predictive intelligence. Traditional project and service management often fall short when it comes to anticipating issues or forecasting success. ... AI also helps IT teams uncover patterns that are not immediately visible to the human eye. Predictive models built on historical performance data allow organizations to forecast demand, manage workloads more efficiently, and preemptively resolve issues before they disrupt service. This shift not only reduces downtime but also frees up resources to drive innovation across the enterprise. Moreover, companies that embrace data as a core business asset tend to nurture a culture of curiosity and informed experimentation. 


The DFIR Investigative Mindset: Brett Shavers On Thinking Like A Detective

You must be technical. You have to be technically proficient. You have to be able to do the actual technical work. And I’m not to rely on- not to bash a vendor training for a tool training, you have to have tool training, but you have to have exact training on “This is what the registry is, this is how you pull the-” you have to have that information first. The basics. You gotta have the basics, you have the fundamentals. And a lot of people wanna skip that. ... The DF guys, it’s like a criminal case. It’s “This is the computer that was in the back of the trunk of a car, and that’s what we got.” And the IR side is “This is our system and we set up everything and we can capture what we want. We can ignore what we want.” So if you’re looking at it like “Just in case something is gonna be criminal we might want to prepare a little bit,” right? So that makes DF guys really happy. If they’re coming in after the fact of an IR that becomes a case, a criminal case or a civil litigation where the DF comes in, they go, “Wow, this is nice. You guys have everything preserved, set up as if from the start you were prepared for this.” And it’s “We weren’t really prepared. We were prepared for it, we’re hoping it didn’t happen, we got it.” But I’ve walked in where drives are being wiped on a legal case. 


Daily Tech Digest - January 28, 2025


Quote for the day:

“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki


How Long Does It Take Hackers to Crack Modern Hashing Algorithms?

Because hashing algorithms are one-way functions, the only method to compromise hashed passwords is through brute force techniques. Cyber attackers employ special hardware like GPUs and cracking software (e.g., Hashcat, L0phtcrack, John The Ripper) to execute brute force attacks at scale—typically millions or billions or combinations at a time. Even with these sophisticated purpose-built cracking tools, password cracking times can vary dramatically depending on the specific hashing algorithm used and password length/character combination. ... With readily available GPUs and cracking software, attackers can instantly crack numeric passwords of 13 characters or fewer secured by MD5's 128-bit hash; on the other hand, an 11-character password consisting of numbers, uppercase/lowercase characters, and symbols would take 26.5 thousand years. ... When used with long, complex passwords, SHA256 is nearly impenetrable using brute force methods— an 11 character SHA256 hashed password using numbers, upper/lowercase characters, and symbols takes 2052 years to crack using GPUs and cracking software. However, attackers can instantly crack nine character SHA256-hashed passwords consisting of only numeric or lowercase characters.


Sharply rising IT costs have CIOs threading the needle on innovation

“Within two years, it will be virtually impossible to buy a PC, tablet, laptop, or mobile phone without AI,” Lovelock says. “Whether you want it or not, you’re going to get it sold to you.” Vendors have begun to build AI into software as well, he says, and in many cases, charge customers for the additional functionality. IT consulting services will also add AI-based services to their portfolios. ... But the biggest expected price hikes are for cloud computing services, despite years of expectations that cloud prices wouldn’t increase significantly, Lovelock says. “For many years, CIOs were taught that in the cloud, either prices went down, or you got more functionality, and occasionally both, that the economies of scale accrue to the cloud providers and allow for at least stable prices, if not declines or functional expansion,” he says. “It wasn’t until post-COVID in the energy crisis, followed by staff cost increases, when that story turned around.” ... “Generative AI is no longer seen as a one-size-fits-all solution, and this shift is helping both solutions providers and businesses take a more practical approach,” he says. “We don’t see this as a sign of lower expectations but as a move toward responsible and targeted use of generative AI.”


US takes aim at healthcare cybersecurity with proposed HIPAA changes

The major update to the HIPAA security regulations also requires healthcare organizations to strengthen security incident response plans and procedures, carry out annual penetration tests and compliance audits, among other measures. Many of the proposals cover best practice enterprise security guidelines foundational to any mature cybersecurity program. ... Cybersecurity experts praised the shift to a risk-based approach covered by the security rule revamp, while some expressed concerns that the measures might tax the financial resources of smaller clinics and healthcare providers. “The security measures called for in the proposed rule update are proven to be effective and will mitigate many of the risks currently present in the poorly protected environments of many healthcare payers, providers, and brokers,” said Maurice Uenuma, VP & GM for the Americas and security strategist at data security firm Blancco. ... Uenuma added: “The challenge will be to implement these measures consistently at scale.” Trevor Dearing, director of critical infrastructure at enterprise security tools firm Illumio, praised the shift from prevention to resilience and the risk-based approach implicit in the rule changes, which he compared to the EU’s recently introduced DORA rules for financial sector organizations.


Risk resilience: Navigating the risks that board’s can’t ignore in 2025

The geopolitical landscape is more turbulent than ever. Companies will need to prepare for potential shocks like regional conflicts, supply chain disruptions, or even another pandemic. If geopolitical risks feel dizzyingly complex, scenario planning will be a powerful tool in mapping out different political and economic scenarios. By envisioning various outcomes, boards can better understand their vulnerabilities, prepare tailored responses and enhance risk resilience. To prepare for the year ahead, board and management teams should ask questions such as: How exposed are we to geopolitical risks in our supply chain? Are we engaging effectively with local governments in key regions?  ... The risks of 2025 are formidable, but so are the opportunities for those who lead with purpose. With informed leadership and collaboration, we can navigate the complexities of the modern business environment with confidence and resilience. Resilience will be the defining trait of successful boards and businesses in the years ahead. It requires not only addressing known risks but also preparing for the unexpected. By prioritising scenario planning, fostering a culture of transparency, and aligning risk management with strategic goals, boards can navigate uncertainty with confidence.


Freedom from Cyber Threats: An AI-powered Republic on the Rise

Developing a resilient AI-driven cybersecurity infrastructure requires substantial investment. The Indian government’s allocation of over ₹550 crores to AI research demonstrates its commitment to innovation and data security. Collaborations with leading cybersecurity companies exemplify scalable solutions to secure digital ecosystems, prioritising resilience, ethical governance, and comprehensive data protection. Research tools like the Gartner Magic Quadrant also offer reliable and useful insights into the leading companies that offer the best and latest SIEM technology solutions. Upskilling the workforce is equally important. Training programs focused on AI-specific cybersecurity skills are preparing India’s talent pool to tackle future challenges effectively. ... Proactive strategies are essential to counter the evolution of cyber threats. Simulation tools enable organizations to anticipate and neutralise potential vulnerabilities. Now, cybersecurity threats can be intercepted by high-class threat detection SIEM data clouds and autonomous threat sweeps. Advanced threat research, conducted by dedicated labs within organisations, plays a crucial role in uncovering emerging attack vectors and providing actionable insights to pre-empt potential breaches. 


Enterprises are hitting a 'speed limit' in deploying Gen AI - here's why

The regulatory issue, the report states, makes clear "respondents' unease about which use cases will be acceptable, and to what extent their organizations will be held accountable for Gen AI-related problems." ... The latest iteration was conducted in July through September, and received 2,773 responses from "senior leaders in their organizations and included board and C-suite members, and those at the president, vice president, and director level," from 14 countries, including the US, UK, Brazil, Germany, Japan, Singapore, and Australia, and across industries including energy, finance, healthcare, and media and telecom. ... Despite the slow pace, Deloitte's CTO is confident in the continued development, and ultimate deployment, of Gen AI. "GenAI and AI broadly is our reality -- it's not going away," writes Bawa. Gen AI is ultimately like the Internet, cloud computing, and mobile waves that preceded it, he asserts. Those "transformational opportunities weren't uncovered overnight," he says, "but as they became pervasive, they drove significant disruption to business and technology capabilities, and also triggered many new business models, new products and services, new partnerships, and new ways of working and countless other innovations that led to the next wave across industries."


NVMe-oF Substantially Reduces Data Access Latency

NVMe-oF is a network protocol that extends the parallel access and low latency features of Nonvolatile Memory Express (NVMe) protocol across networked storage. Originally designed for local storage and common in direct-attached storage (DAS) architectures, NVMe delivers high-speed data access and low latency by directly interfacing with solid-state disks. NVMe-oF allows these same advantages to be achieved in distributed and clustered environments by enabling external storage to perform as if it were local. ... Storage targets can be dynamically shared among workloads, thus providing composable storage resources that provide flexibility, agility and greater resource efficiency. The adoption of NVMe-oF is evident across industries where high performance, efficiency and low latency at scale are critical. Notable market sectors include: financial services, e-commerce, AI and machine learning, and specialty cloud service providers (CSPs). Legacy VM migration, real-time analytics, high-frequency trading, online transaction processing (OLTP) and the rapid development of cloud native, performance-intensive workloads at scale are use cases that have compelled organizations to modernize their data platforms with NVMe-oF solutions. Its ability to handle massive data flows with efficiency and high-performance makes it indispensable for I/O-intensive workloads.


The crisis of AI’s hidden costs

Let me paint you a picture of what keeps CFOs up at night. Imagine walking into a massive data center where 87% of the computers sit there, humming away, doing nothing. Sounds crazy, right? That’s exactly what’s happening in your cloud environment. If you manage a typical enterprise cloud computing operation, you are wasting money. It’s not rare to see companies spend $1 million monthly on cloud resources, with 75% to 80% of that amount going right out the window. It’s no mystery what this means for your bottom line. ... Smart enterprises aren’t just hoping the problem will disappear; they’re taking action. Here’s my advice: Don’t rely solely on the basic tools offered by your cloud provider; they won’t give you the immediate cost visibility you need. Instead, invest in third-party solutions that provide a clear, up-to-the-minute picture of your resource utilization. Focus on power-hungry GPUs running AI workloads. ... Rather than spinning up more instances, consider rightsizing. Modern instance types offered by public cloud providers can give you more bang for your buck. ... Predictive analytics can help you scale up or down based on demand, ensuring you’re not paying for idle resources. ... Be strategic and look at the bigger picture. Evaluate reserved instances and savings plans to balance cost and performance. 


AI security posture management will be needed before agentic AI takes hold

We’ve run into these issues when most companies shifted their workloads to the cloud. Authentication issues – like the dreaded S3 bucket that had a default public setting and that was the cause of way too many breaches before it was secure by default – became the domain of cloud security posture management (CSPM) tools before they were swallowed up by the CNAPP acronym. Identity and permission issues (or entitlements, if you prefer) became the alphabet soup of CIEM (cloud identity entitlement management), thankfully now also under the umbrella of CNAPP. AI bots will need to be monitored by similar toolsets, but they don’t exist yet. I’ll go out on a limb and suggest SAFAI (pronounced Sah-fy) as an acronym: Security Assessment Frameworks for AI. These would, much like CNAPP tools, embed themselves in agentless or transparent fashion, crawl through your AI bots collecting configuration, authentication and permission issues and highlight the pain points. You’d still need the standard panoply of other tools to protect you, since they sit atop the same infrastructure. And that’s on top of worrying about prompt injection opportunities, which is something you unfortunately have no control over as they are based entirely on the models and how they are used.


Hackers Use Malicious PDFs, pose as USPS in Mobile Phishing Scam

The bad actors make the malicious PDFs look like communications from the USPS that are sent via SMS text messages and use what the researchers called in a report Monday a “never-before-seen means of obfuscation” to help them bypass traditional security controls. They embed the malicious links in the PDF, essentially hiding them from endpoint security solutions. ... The phishing attacks are part of a larger and growing trend of what Zimperium calls “mishing,” an umbrella word for campaigns that use email, text messages, voice calls, or QR codes that exploit such weaknesses as unsafe user behavior and minimal security on many mobile devices to infiltrate corporate networks and steal information. ... “We’re witnessing phishing evolve in real time beyond email into a sophisticated multi-channel threat, with attackers leveraging trusted brands like USPS, Royal Mail, La Poste, Deutsche Post, and Australian Post to exploit limited mobile device security worldwide,” Kowski said. “The discovery of over 20 malicious PDFs and 630 phishing pages targeting organizations across 50+ countries shows how threat actors capitalize on users’ trust in official-looking communications on mobile devices.” He also noted that internal disagreements are hampering corporations’ ability to protect against such attacks.


Daily Tech Digest - November 23, 2024

AI Regulation Readiness: A Guide for Businesses

The first thing to note about AI compliance today is that few laws and other regulations are currently on the books that impact the way businesses use AI. Most regulations designed specifically for AI remain in draft form. That said, there are a host of other regulations — like the General Data Protection Regulation (GDPR), the California Privacy Rights Act (CPRA), and the Personal Information Protection and Electronic Documents Act (PIPEDA) — that have important implications for AI. These compliance laws were written before the emergence of modern generative AI technology placed AI onto the radar screens of businesses (and regulators) everywhere, and they mention AI sparingly if at all. But these laws do impose strict requirements related to data privacy and security. Since AI and data go hand-in-hand, you can't deploy AI in a compliant way without ensuring that you manage and secure data as current regulations require. This is why businesses shouldn't think of AI as an anything-goes space due to the lack of regulations focused on AI specifically. Effectively, AI regulations already exist in the form of data privacy rules. 


Cloud vs. On-Prem AI Accelerators: Choosing the Best Fit for Your AI Workloads

Like most types of hardware, AI accelerators can run either on-prem or in the cloud. An on-prem accelerator is one that you install in servers you manage yourself. This requires you to purchase the accelerator and a server capable of hosting it, set them up, and manage them on an ongoing basis. A cloud-based accelerator is one that a cloud vendor makes available to customers over the internet using an IaaS model. Typically, to access a cloud-based accelerator, you'd choose a cloud server instance designed for AI. For example, Amazon offers EC2 cloud server instances that feature its Trainium AI accelerator chip. Google Cloud offers Tensor Processing Units (TPUs), another type of AI accelerator, as one of its cloud server options. ... Some types of AI accelerators are only available through the cloud. For instance, you can't purchase the AI chips developed by Amazon and Google for use in your own servers. You have to use cloud services to access them. ... Like most cloud-based solutions, cloud AI hardware is very scalable. You can easily add more AI server instances if you need more processing power. This isn't the case with on-prem AI hardware, which is costly and complicated to scale up.


Platform Engineering Is The New DevOps

Platform engineering has provided a useful escape hatch at just the right time. Its popularity has grown strongly, with a well-attended inaugural platform engineering day at KubeCon Paris in early 2024 confirming attendee interest. A platform engineering day was part of the KubeCon NA schedule this past week and will also be included at next year’s KubeCon in London. “I haven't seen platform engineering pushed top down from a C-suite. I've seen a lot of guerilla stuff with platform and ops teams just basically going out and doing a skunkworks thing and sneaking it into production and then making a value case and growing from there,” said Keith Babo, VP of product and marketing at Solo.io. ... “If anyone ever asks me what’s my definition of platform engineering, I tend to think of it as DevOps at scale. It’s how DevOps scales,” says Kennedy. The focus has shifted away from building cloud native technology, done by developers, to using cloud native technology, which is largely the realm of operations. That platform engineering should start to take over from DevOps in this ecosystem may not be surprising, but it does highlight important structural shifts.


Artificial Intelligence and Its Ascendancy in Global Power Dynamics

According to the OECD, AI is defined as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions that influence real or virtual environments.” The vision for Responsible AI is clear: establish global auditing standards, ensure transparency, and protect privacy through secure data governance. Yet, achieving Responsible AI requires more than compliance checklists; it demands proactive governance. For example, the EU’s AI Act takes a hardline approach to regulating high-risk applications like real-time biometric surveillance and automated hiring processes, whereas the U.S., under President Biden’s Executive Order on Safe, Secure, and Trustworthy AI, emphasizes guidelines over strict enforcement. ... AI is becoming the lynchpin of cybersecurity and national security strategies. State-backed actors from China, Iran, and North Korea are weaponizing AI to conduct sophisticated cyber-attacks on critical infrastructure. The deployment of Generative Adversarial Networks (GANs) and WormGPT is automating cyber operations at scale, making traditional defenses increasingly obsolete. In this context, a cohesive, enforceable framework for AI governance is no longer optional but essential. 


Why voice biometrics is a must-have for modern businesses

Voice biometrics are making waves across multiple industries. Here’s a look at how different sectors can leverage this technology for a competitive edge:Financial services: Banks and financial institutions are actively integrating voice verification into call centers, allowing customers to authenticate themselves with their voice, eliminating the need for secret words or pin codes. This strengthens security, reduces time and cost per customer call and enhances the customer experience. Automotive: With the rise of connected vehicles, voice is already heavily used with integrated digital assistants that provide handsfree access to in-car services like navigation, settings and communications. Adding voice recognition allows such in car services to be personalized for the driver and opens the possibilities of more enhancements such as commerce. Automotive brands can integrate voice recognition for offering seamless access to new services like parking, fueling, charging, curbside pick-up by utilizing in-car payments that boost security, convenience and customer satisfaction. Healthcare: Healthcare providers can use voice authentication to securely verify patient identities over the phone or via telemedicine. This ensures that sensitive information remains protected, while providing a seamless experience for patients who may need hands-free options.


When and Where to Rate-Limit: Strategies for Hybrid and Legacy Architectures

While rate-limiting is an essential tool for protecting your system from traffic overloads, applying it directly at the application layer — whether for microservices or legacy applications — is often a suboptimal strategy. ... Legacy systems operate differently. They often rely on vertical scaling and have limited flexibility to handle increased loads. While it might seem logical to apply rate-limiting directly to protect fragile legacy systems, this approach usually falls short. The main issue with rate-limiting at the legacy application layer is that it’s reactive. By the time rate-limiting kicks in, the system might already be overloaded. Legacy systems, lacking the scalability and elasticity of microservices, are more prone to total failure under high load, and rate-limiting at the application level can’t stop this once the traffic surge has already reached its peak. ... Rate-limiting should be handled further upstream rather than deep in the application layer, where it either conflicts with scalability (in microservices) or arrives too late to prevent failures. This leads us to the API gateway, the strategic point in the architecture where traffic control is most effective. 


Survey Surprise: Quantum Now in Action at Almost One-Third of Sites

The use cases for quantum — scientific research, complex simulations — have been documented for a number of years. However, with the arrival of artificial intelligence, particularly generative AI, on the scene, quantum technology may start finding more mainstream business use cases. In a separate report out of Sogeti (a division of Capgemini Group), Akhterul Mustafa calls an impending mashup of generative AI and quantum computing as the “tech world’s version of a dream team, not just changing the game but also pushing the boundaries of what we thought was possible.” ... The convergence of generative AI and quantum computing beings “some pretty epic perks,” Mustafa states. For example, it enables the supercharging of AI models. “Training AI models is a beastly task that needs tons of computing power. Enter quantum computers, which can zip through complex calculations, potentially making AI smarter and faster.” In addition, “quantum computers can sift through massive datasets in a blink. Pair that with generative AI’s knack for cooking up innovative solutions, and you’ve got a recipe for solving brain-bending problems in areas like health, environment, and beyond.”


How Continuous Threat Exposure Management (CTEM) Helps Your Business

A CTEM framework typically includes five phases: identification, prioritization, mitigation, validation, and reporting and improvement. In the first phase, systems are continuously monitored to identify new or emerging vulnerabilities and potential attack vectors. This continuous monitoring is essential to the vulnerability management lifecycle. Identified vulnerabilities are then assessed based on their potential impact on critical assets and business operations. In the mitigation phase, action is taken to defend against high-risk vulnerabilities by applying patches, reconfiguring systems or adjusting security controls. The validation stage focuses on testing defenses to ensure vulnerabilities are properly mitigated and the security posture remains strong. In the final phase of reporting and improvement, IT leaders gain access to security metrics and improved defense routes, based on lessons learned from incident response. ... While both CTEM and vulnerability management aim to identify and remediate security weaknesses, they differ in scope and execution. Vulnerability management is more about targeted and periodic identification of vulnerabilities within an organization based on a set scan window.


DevOps in the Cloud: Leveraging Cloud Services for Optimal DevOps Practices

A well-designed DevOps transformation strategy can help organizations deliver software products and their services quickly and reliably while improving the overall efficiency of their development and delivery processes. ... Cloud platforms facilitate the immediate provisioning of infrastructure components, including servers, storage units, and databases. This helps teams swiftly initiate new development and testing environments, hastening the software development lifecycle. Companies can see a significant decrease in infrastructure provisioning time by integrating cloud services. ... DevOps helps development and operations teams work together. Cloud platforms provide a central place for storing code, configurations, and important files so everyone can be on the same page. Additionally, cloud-based communication and collaboration tools streamline communication and break down silos between teams. ... Cloud services provide a pay-as-you-go system, so there is no need for a large upfront investment in hardware. This way, companies can scale their infrastructure according to their requirements, saving a lot of money. 


Reinforcement learning algorithm provides an efficient way to train more reliable AI agents

To boost the reliability of reinforcement learning models for complex tasks with variability, MIT researchers have introduced a more efficient algorithm for training them. The findings are published on the arXiv preprint server. The algorithm strategically selects the best tasks for training an AI agent so it can effectively perform all tasks in a collection of related tasks. In the case of traffic signal control, each task could be one intersection in a task space that includes all intersections in the city. By focusing on a smaller number of intersections that contribute the most to the algorithm's overall effectiveness, this method maximizes performance while keeping the training cost low. The researchers found that their technique was between five and 50 times more efficient than standard approaches on an array of simulated tasks. This gain in efficiency helps the algorithm learn a better solution in a faster manner, ultimately improving the performance of the AI agent. "We were able to see incredible performance improvements, with a very simple algorithm, by thinking outside the box. An algorithm that is not very complicated stands a better chance of being adopted by the community because it is easier to implement and easier for others to understand,"



Quote for the day:

"Too many of us are not living our dreams because we are living our fears." -- Les Brown