Daily Tech Digest - October 17, 2024

Digital addiction detox: Streamline tech to maximize impact, minimize risks

While digital addiction has been extensively studied at the individual level, organizational digital addiction is a relatively new area of concern. This addiction manifests as a tendency for the organization to throw technology mindlessly at any problem, often accumulating useless or misused technologies that generate ongoing costs without delivering proportional value. ... CIOs must simultaneously implement controls to prevent their organizations from reaching a tipping point where healthy exploration transforms into digital addiction. Striking this balance is delicate and requires careful management. Many innovative technology companies have found success by implementing “runways” for new products or technologies. These runways come with specific criteria for either “takeoff” or “takedown”. ... Unchecked technology adoption poses significant risks to organizations, often leading to vulnerabilities in their IT ecosystems. When companies rush to implement technologies without proper planning and safeguards, they lack the resilience to bounce back from adverse conditions because of insufficient redundancy and flexibility within systems, leaving organizations exposed to single points of failure.


Why are we still confused about cloud security?

A prevalent issue is publicly exposed storage, which often includes sensitive data due to excessive permissions, making it a prime target for ransomware attacks. Additionally, the improper use of access keys remains a significant threat, with a staggering 84% of organizations retaining unused highly privileged keys. Such security oversights have historically facilitated breaches, as evidenced by incidents like the MGM Resorts data breach in September 2023. ... Kubernetes environments present another layer of risk. The study notes that 78% of organizations have publicly accessible Kubernetes API servers, with significant portions allowing inbound internet access and unrestricted user control. This lax security posture exacerbates potential vulnerabilities. Addressing these vulnerabilities demands a comprehensive approach. Organizations should adopt a context-driven security ethos by integrating identity, vulnerability, misconfiguration, and data risk information. This unified strategy allows for precise risk assessment and prioritization. Managing Kubernetes access through adherence to Pod Security Standards and limiting privileged containers is essential, as is the regular audit of credentials and permissions to enforce the principle of least privilege.


The Architect’s Guide to Interoperability in the AI Data Stack

At the heart of an AI-driven world is data — lots of it. The choices you make today for storing, processing and analyzing data will directly affect your agility tomorrow. Architecting for interoperability means selecting tools that play nicely across environments, reducing reliance on any single vendor, and allowing your organization to shop for the best pricing or feature set at any given moment. ... Interoperability extends to query engines as well. Clickhouse, Dremio and Trino are great examples of tools that let you query data from multiple sources without needing to migrate it. These tools allow users to connect to a wide range of sources, from cloud data warehouses like Snowflake to traditional databases such as MySQL, PostgreSQL and Microsoft SQL Server. With modern query engines, you can run complex queries on data wherever it resides, helping avoid costly and time-consuming migrations. ... Architecting for interoperability is not just about avoiding vendor lock-in; it’s about building an AI data stack that’s resilient, flexible and cost-effective. By selecting tools that prioritize open standards, you ensure that your organization can evolve and adapt to new technologies without being constrained by legacy decisions. 


The role of compromised cyber-physical devices in modern cyberattacks

A cyber physical device is a device that connects the physical world and computer networks. Many people may associate the term “cyber physical device” with Supervisory Control and Data Acquisition (SCADA) systems and OT network segments, but there’s more to it. Devices that interconnect the physical world give attackers a unique perspective: they allow them to perform on-ground observation of events, to monitor and observe the impact of their attacks, and can even sometimes make an impact on the physical world ... Many devices are compromised for the simple purpose of creating points of presence at new locations, so attackers can bypass geofencing restrictions. These devices are often joined and used as a part of overlay networks. Many of these devices are not traditional routers but could be anything from temperature sensors to cameras. We have even seen compromised museum Android display boards in some countries. ... Realistically, I don’t believe there is a way to decrease number of compromised devices. We are moving towards networks where IoT devices will be one of the predominant types of connected devices, with things like a dish washer or fridge having an IP address. 


Security at the Edge Needs More Attention

CISOs should verify that the tools they acquire and use do what they claim to do, or they may be in for surprises. Meanwhile, data and IP are at risk because it’s so easy to sign up for and use third-party cloud services and SaaS that the average users may not associate their data usage with organizational risk. “Users submitting spreadsheet formula problems to online help forms may inadvertently be sharing corporate data. People running grammar checking tools on emails or documents may be doing the same,” says Roger Grimes, data-driven defense evangelist at security awareness training and simulated phishing platform KnowBe4 in an email interview. “It's far too easy for someone using an AI-enabled tool to not realize they are inadvertently leaking confidential information outside their organizational environment.”  ... It’s important for CISOs to have knowledge of and visibility into every asset in their company’s tech stack, though some CISOs see room for improvement. “You spend a lot of time and money on people, processes and technology to develop a layered security approach and defense in depth, and that doesn't work if you don't know you have something to defend there,” says Fowler.


CIOs must also serve as chief AI officers, according to Salesforce survey

CIOs are now in the business of manufacturing intelligence and work-autonomous work. CIOs are now responsible for creating a work environment where humans and AI agents can collaborate and co-create stakeholder value -- employees, customers, partners, and communities. CIOs must design, own, and deliver the roadmap to the autonomous enterprise, where autonomous work is maturing at Lightspeed. ... CIOs are under pressure to quickly learn about, and implement, effective AI solutions in their businesses. While more than three of five CIOs think stakeholder expectations for their AI expertise are unrealistic, only 9% believe their peers are more knowledgeable. CIOs are also partnering with analyst firms (Gartner, Forrester, IDC, etc.) and technology vendors to learn more about AI. ... Sixty-one percent of CIOs feel they're expected to know more about AI than they do, and their peers at other companies are their top sources of information. CIOs must become better AI storytellers. In 1994, Steve Jobs said: "The most powerful person in the world is the storyteller. The storyteller sets the vision, values, and agenda of an entire generation that is to come." There is no better time than now for CIOs to lead the business transformation towards becoming AI-led companies.


Policing and facial recognition: What’s stopping them?

The question contains two “ifs” and a presumption; all are carrying a lot of weight. The first “if” is the legal basis for using FRT. Do the police have the power to use it? In England and Wales the police certainly have statutory powers to take and retain images of people, along with common law powers to obtain and store information about the citizen’s behavior in public. The government’s own Surveillance Camera Code of Practice (currently on policy’s death row) provides guidance to chief officers on how to do this and on operating overt surveillance systems in public places generally. The Court of Appeal found a “sufficient legal framework” covered police use of FRT, one that was capable of supporting its lawful deployment. ... The second “if” relates to the technology i.e. “if FRT works, what’s stopping the police from using it?” Since a shaky introduction around 2015 when it didn’t work as hoped (or required) police facial recognition technology has come on significantly. The accuracy of the technology is much better but is it accurate to say it now “works”? Each technology partner and purchasing police force must answer that for themselves – as for any other operational capability. That’s accountability. 


How AI is becoming a powerful tool for offensive cybersecurity practitioners

What makes offensive security all the more important is that it addresses a potential blind spot for developers. “As builders of software, we tend to think about using whatever we’ve developed in the ways that it’s intended to be used,” says Caroline Wong, chief strategy officer at Cobalt Labs, a penetration testing company. In other words, Wong says, there can be a bias towards overemphasizing the good ways in which software can be used, while overlooking misuse and abuse cases or disregarding potentially harmful uses. “One of the best ways to identify where and how an organization or a piece of software might be susceptible to attack is by taking on the perspective of a malicious person: the attacker’s mindset,” Wong says. ... In addition to addressing manpower issues, AI can assist practitioners in scaling up their operations. “AI’s ability to process vast datasets and simulate large-scale attacks without human intervention allows for testing more frequently and on a broader scale,” says Augusto Barros, a cyber evangelist at Securonix, a security analytics and operations management platform provider. “In large or complex environments, human operators would struggle to perform consistent and exhaustive tests across all systems,” Barros says. 


While Cyberattacks Are Inevitable, Resilience Is Vital

Cybersecurity is all about understanding risk and applying the basic controls and sprinkling in new technologies to keep the bad guys out and keeping the system up and running by eliminating as much unplanned downtime as possible. “Cybersecurity is a risk game—as long as computers are required to deliver critical products and services, they will have some vulnerability to an attack,” Carrigan said. “Risk is a simple equation: Risk = Likelihood x Consequence. Most of our investments have been in reducing the ‘likelihood’ side of the equation. The future of OT cybersecurity will be in reducing the consequences of cyberattacks—specifically, how to minimize the impact of infiltration and restore operations within an acceptable period.” Manufacturers must understand their risk appetite and know what and where their organization’s crown jewels are and how to protect them. “Applying the same security practices to all OT assets is not practical—some are more important than others, even within the same company and the same OT network,” Carrigan said. Remaining resilient to a cyber incident—any kind of incident—means manufacturers must apply the basics, sprinkle in some new technologies and plan, test, revise and then start that process all over again. 


AI-Powered DevOps: Best Practices for Business Adoption

In security, AI tools are proving highly effective at proactively identifying and addressing vulnerabilities, boosting threat detection capabilities, and automating responses to emerging risks. Nonetheless, significant potential for AI remains in phases such as release management, deployment, platform engineering, and planning. These stages, which are crucial for ensuring software stability and scalability, could greatly benefit from AI's predictive abilities, resource optimization, and the streamlining of operational and maintenance processes. ... While generative AI and AI copilots have been instrumental in driving adoption of this technology, there remains a major shortage of AI expertise within DevOps. This gap is significant, especially given that humans remain deeply involved in the process, with over two-thirds of our respondents indicating they manually review AI-generated outputs at least half the time. To address these challenges, organizations should devise specialized training courses to properly equip their DevOps teams with the skills to leverage AI tools. Whether through industry-recognized courses or internal programs, encouraging certification can enhance technical expertise significantly.



Quote for the day:

"All progress takes place outside the comfort zone." -- Michael John Bobak

Daily Tech Digest - October 16, 2024

AI Models in Cybersecurity: From Misuse to Abuse

In a constant game of whack-a-mole, both defenders and attackers are harnessing AI to tip the balance of power in their respective favor. Before we can understand how defenders and attackers leverage AI, we need to acknowledge the three most common types of AI models currently in circulation. ... Generative AI, Supervised Machine Learning, and Unsupervised Machine Learning are three main types of AI models. Generative AI tools such as ChatGPT, Gemini, and Copilot can understand human input and can deliver outputs in a human-like response. Notably, generative AI continuously refines its outputs based on user interactions, setting it apart from traditional AI systems. Unsupervised machine learning models are great at analyzing and identifying patterns in vast unstructured or unlabeled data. Alternatively, supervised machine learning algorithms make predictions from well-labeled, well-tagged, and well-structured datasets. ... Despite the media hype, the usage of AI by cybercriminals is still at nascent stage. This doesn’t mean that AI is not being exploited for malicious purposes, but it’s also not causing the decline of human civilization like some purport it to be. Cybercriminals use AI for very specific tasks


Meet Aria: The New Open Source Multimodal AI That's Rivaling Big Tech

Rhymes AI has released Aria under the Apache 2.0 license, allowing developers and researchers to adapt and build upon the model. It is also a very powerful addition to an expanding pool of open-source AI models led by Meta and Mistral, which perform similarly to the more popular and adopted closed-source models. Aria's versatility also shines across various tasks. In the research paper, the team explained how they fed the model with an entire financial report and it was capable of performing an accurate analysis, it can extract data from reports, calculate profit margins, and provide detailed breakdowns. When tasked with weather data visualization, Aria not only extracted the relevant information but also generated Python code to create graphs, complete with formatting details. The model's video processing capabilities also seem promising. In one evaluation, Aria dissected an hour-long video about Michelangelo's David, identifying 19 distinct scenes with start and end times, titles, and descriptions. This isn't simple keyword matching but a demonstration of context-driven understanding. Coding is another area where Aria excels. It can watch video tutorials, extract code snippets, and even debug them. 


Preparing for IT failures in an unpredictable digital world

By embracing multiple vendors and hybrid cloud environments, organizations would be better prepared so that if one platform goes down, the others can pick up the slack. While this strategy increases ecosystem complexity, it buys down the risk accepted by ensuring you’re prepared to recover and resilient to widespread outages in complex, hybrid, and cloud-based environments. ... It’s clear that IT failures aren’t just a possibility — they are inevitable. Simply waiting for things to go wrong before reacting is a high-risk approach that’s asking for trouble. Instead, organizations must go on the front foot and adopt a strategy that focuses on early detection, continuous monitoring, and risk prevention. This means planning for worst-case scenarios, but also preparing for recovery. After all, one of the planks of IT infrastructure management is business continuity. It’s about optimal performance when things are going well while ensuring that systems recover quickly and continue operating even in the face of major disruptions. This requires a holistic approach to IT management, where failures are anticipated, and recovery plans are in place. 


CIOs must adopt startup agility to compete with tech firms

CIOs often struggle with soft skills, despite knowing what needs to be done. We engage with CEOs and CFOs to foster alignment among the leadership team, as strong support from them is crucial. CIOs also need help gaining buy-in from other CXOs, particularly when it comes to automation initiatives. Our approach emphasises unlocking bandwidth within IT departments. If 90% of their resources are spent on running the business, there’s little time for innovation. We help them automate routine tasks, which allows their best people to focus on transformative efforts. ... CIOs play a crucial role in driving innovation and maintaining cost efficiency while justifying tech investments, especially as organisations become digital-first. A key challenge is controlling cloud costs, which often escalate as IT spending moves outside central control. To counter this, CIOs should streamline access to central services, reduce redundant purchases, and negotiate larger contracts for better discounts. They must also recognise that cloud services are not always cheaper; cost-efficiency depends on application types and usage. 


AI makes edge computing more relevant to CIOs

Many user-facing situations could benefit from edge-based AI. Payton emphasizes facial recognition technology, real-time traffic updates for semi-autonomous vehicles, and data-driven enhancements on connected devices and smartphones as possible areas. “In retail, AI can deliver personalized experiences in real-time through smart devices,” she says. “In healthcare, edge-based AI in wearables can alert medical professionals immediately when it detects anomalies, potentially saving lives.” And a clear win for AI and edge computing is within smart cities, says Bizagi’s Vázquez. There are numerous ways AI models at the edge could help beyond simply controlling traffic lights, he says, such as citizen safety, autonomous transportation, smart grids, and self-healing infrastructures. To his point, experiments with AI are already being carried out in cities such as Bahrain, Glasgow, and Las Vegas to enhance urban planning, ease traffic flow, and aid public safety. Self-administered, intelligent infrastructure is certainly top of mind for Dairyland’s Melby since efforts within the energy industry are underway to use AI to meet emission goals, transition into renewables, and increase the resilience of the grid.


Deepfake detection is a continuous process of keeping up with AI-driven fraud: BioID

BioID is part of the growing ecosystem of firms offering algorithmic defenses to algorithmic attacks. It provides an automated, real-time deepfake detection tool for photos and videos that analyzes individual frames and video sequences, looking for inter-frame or video codec anomalies. Its algorithm is the product of a German research initiative that brought together a number of institutions across sectors to collaborate on deepfake detection strategy. But it is also continuing to refine its neural network to keep up with the relentless pace of AI fraud. “We are in an ongoing fight of AI against AI,” Freiberg says. “We can’t just just lean back and relax and sell what we have. We’re continuously working on increasing the accuracy of our algorithms.” That said, Freiberg is not only offering doom and gloom. She points to the Ukrainian Ministry of Foreign Affairs AI ambassador, Victoria Shi, as an example of deepfake technology used with non-fraudulent intention. The silver lining is reflected in the branding of BioID’s “playground” for AI deepfake testing. At playground.bioid.com, users can upload media to have BioID judge whether or not it is genuine.


How Manufacturing Best Practices Shape Software Development

Manufacturers rely on bills of materials (BOMs) to track every component in their products. This transparency enables them to swiftly pinpoint the source of any issues that arise, ensuring they have a comprehensive understanding of their supply chain. In software, this same principle is applied through software bills of materials (SBOMs), which list all the components, dependencies and licenses used in a software application. SBOMs are increasingly becoming critical resources for managing software supply chains, enabling developers and security teams to maintain visibility over what’s being used in their applications. Without an SBOM, organizations risk being unaware of outdated or vulnerable components in their software, making it difficult to address security issues. ... It’s nearly impossible to monitor open source components manually at scale. But with software composition analysis, developers can automate the process of identifying security risks and ensuring compliance. Automation not only accelerates development but also reduces the risk of human error, so teams can manage vast numbers of components and dependencies efficiently.


Striking The Right Balance Between AI & Innovation & Evolving Regulation

The bottom line is that integrating AI comes with complex challenges to how an organisation approaches data privacy. A significant part of this challenge relates to purpose limitation – specifically, the disclosure provided to consumers regarding the purpose(s) for data processing and the consent obtained. To tackle this hurdle, it’s vital that organisations maintain a high level of transparency that discloses to users and consumers how the use of their data is evolving as AI is integrated. ... Just as the technology landscape has evolved, so have consumer expectations. Today, consumers are more conscious of and concerned with how their data is used. Adding to this, nearly two-thirds of consumers worry about AI systems lacking human oversight, and 93% believe irresponsible AI practices damage company reputations. As such, it’s vital that organisations are continuously working to maintain consumer trust as part of their AI strategy. With this said, there are many consumers who are willing to share their data as long as they receive a better personalised customer experience, showcasing that this is a nuanced landscape that requires attention and balance.


WasmGC and the future of front-end Java development

The approach being offered by the WasmGC extension is newer. The extension provides a generic garbage collection layer that your software can refer to; a kind of garbage collection layer built into WebAssembly. Wasm by itself doesn’t track references to variables and data structures, so the addition of garbage collection also implies introducing new “typed references” into the specification. This effort is happening gradually: recent implementations support garbage collection on “linear” reference types like integers, but complex types like objects and structs have also been added. ... The performance potential of languages like Java over JavaScript is a key motivation for WasmGC, but obviously there’s also the enormous range of available functionality and styles among garbage-collected platforms. The possibility for moving custom code into Wasm, and thereby making it universally deployable, including to the browser, is there. More broadly, one can’t help but wonder about the possibility of opening up the browser to other languages beyond JavaScript, which could spark a real sea-change to the software industry. It’s possible that loosening JavaScript’s monopoly on the browser will instigate a renaissance of creativity in programming languages.


Mind Your Language Models: An Approach to Architecting Intelligent Systems

The reason why we wanted a smaller model that's adapted to a certain task is, it's easier to operate, and when you're running LLMs, it's going to be much economical, because you can't run massive models all the time because it's very expensive and takes a lot of GPUs. Currently, we're struggling with getting GPUs in AWS. We searched all EU Frankfurt, Ireland, North Virginia. It's seriously a challenge now to get big GPUs to host your LLMs. The second part of the problem is, we started getting data. It's high quality. We started improving the knowledge graph. The one thing that is interesting when you think about semantic search is that when people interact with your system, even if they're working on the same problem, they don't end up using the same language. Which means that you need to be able to translate or understand the range of language that your users can actually interact with your system. ... We converted these facts with all of their synonyms, with all of the different ways one could potentially ask for this piece of data, and put everything into the knowledge graph itself. You could use LLMs to generate training data for your smaller models. 



Quote for the day:

"You may only succeed if you desire succeeding; you may only fail if you do not mind failing." -- Philippos

Daily Tech Digest - October 15, 2024

The NHI management challenge: When employees leave

Non-human identities (NHIs) support machine-to-machine authentication and access across software infrastructure and applications. These digital constructs enable automated processes, services, and applications to authenticate and perform tasks securely, without direct human intervention. Access is granted to NHIs through various types of authentications, including secrets such as access keys, certificates and tokens. ... When an employee exits, secrets can go with them. Those secrets – credentials, NHIs and associated workflows – can be exfiltrated from mental memory, recorded manually, stored in vaults and keychains, on removable media, and more. Secrets that have been exfiltrated are considered “leaked.” ... An equally great risk is that employees, especially developers, create, deploy and manage secrets as part of software stacks and configurations, as one-time events or in regular workflows. When they exit, those secrets can become orphans, whose very existence is unknown to colleagues or to tools and frameworks. ... The lifecycle of NHIs can stretch beyond the boundaries of a single organization, encompassing partners, suppliers, customers and other third parties. 


How Ernst & Young’s AI platform is ‘radically’ reshaping operations

We’re seeing a new wave of AI roles emerging, with a strong focus on governance, ethics, and strategic alignment. Chief AI Officers, AI governance leads, knowledge engineers and AI agent developers are becoming critical to ensuring that AI systems are trustworthy, transparent, and aligned with both business goals and human needs. Additionally, roles like AI ethicists and compliance experts are on the rise, especially as governments begin to regulate AI more strictly. These roles go beyond technical skills — they require a deep understanding of policy, ethics, and organizational strategy. As AI adoption grows, so too will the need for individuals who can bridge the gap between the technology and the focus on human-centered outcomes.” ... Keeping humans at the center, especially as we approach AGI, is not just a guiding principle — it’s an absolute necessity. The EU AI Act is the most developed effort yet in establishing the guardrails to control the potential impacts of this technology at scale. At EY, we are rapidly adapting our corporate policies and ethical frameworks in order to, first, be compliant, but also to lead the way in showing the path of responsible AI to our clients.


The Truth Behind the Star Health Breach: A Story of Cybercrime, Disinformation, and Trust

The email that xenZen used as “evidence” was forged. The hacker altered the HTML code of an email using the common “inspect element” function—an easy trick to manipulate how a webpage appears. This allowed him to make it seem as though the email came directly from the CISO’s official account. ... XenZen’s attack demonstrates how cybercriminals are evolving. They are using psychological warfare to create chaos. In this case, xenZen not only exploited a vulnerability but also fabricated evidence to frame the CISO. The security community needs to stay vigilant and anticipate attacks that may target not just systems but also individuals and organizations through disinformation. ... Making the CISO a scapegoat for security breaches without proper evidence is a growing concern. Organizations must understand the complexities of cybersecurity and avoid jumping to conclusions. Security teams should have the support they need, including legal protection and clear communication channels. Transparency is essential, but so is the careful handling of internal investigations before pointing fingers.


How CIOs and CTOs Are Bridging Cross-Functional Collaboration

Ashwin Ballal, CIO at software company Freshworks, believes that the organizations that fail to collaborate well across departments are leaving money on the table. “Siloed communications create inefficiencies, leading to duplicative work, poor performance, and a negative employee experience. In my experience as a CIO, prioritizing cross-departmental communication has been essential to overcoming these challenges,” says Ballal. His team continually reevaluates the tech stack, collaborating with leaders and users to confirm that the organization is only investing in software that adds value. This approach saves money and helps keep employees engaged by minimizing their interactions with outdated technology. He also uses employees as product beta testers, and their feedback impacts the product roadmap. ... “My recommendation for other CIOs and CTOs is to regularly meet with departmental leaders to understand how technology interacts across the organization. Sending out regular surveys can yield candid feedback on what’s working and what isn’t. Additionally fostering an environment where employees can experiment with new technologies encourages innovation and problem-solving.”


2025 Is the Year of AI PCs; Are Businesses Onboard?

With the rise of real-time computing needs and the proliferation of IoT devices, businesses are realizing the need to move AI closer to where the data is - at the edge. This is where AI PCs come into play. Unlike their traditional counterparts, AI PCs are integrated with neural processing units, NPUs, that enable them to handle AI workloads locally, reducing latency and providing a more secure computing environment. "The anticipated surge in AI PCs is largely due to the supply-side push, as NPUs will be included in more CPU vendor road maps," said Ranjit Atwal, senior research director analyst at Gartner. NPUs allow enterprises to move from reactive to proactive IT strategies. Companies can use AI PCs to predict IT infrastructure failures before they happen, minimizing downtime and saving millions in operational costs. NPU-integrated PCs also allow enterprises to process AI-related tasks, such as machine learning, natural language processing and real-time analytics, directly on the device without relying on cloud-based services. And with generative AI becoming part of enterprise technology stacks, companies investing in AI PCs are essentially future-proofing their operations, preparing for a time when gen AI capabilities become a standard part of business tools.


Australia’s Cyber Security Strategy in Action – Three New Draft Laws Published

Australia is following in the footsteps of other jurisdictions such as the United States by establishing a Cyber Review Board. The Board’s remit will be to conduct no-fault, post-incident reviews of significant cyber security incidents in Australia. The intent is to strengthen cyber resilience, by providing recommendations to Government and industry based on lessons learned from previous incidents. Limited information gathering powers will be granted to the Board, so it will largely rely on cooperation by impacted businesses. ... Mandatory security standards for smart devices - The Cyber Security Bill also establishes a framework under which mandatory security standards for smart devices will be issued. Suppliers of smart devices will be prevented from supplying devices which do not meet these security standards, and will be required to provide statements of compliance for devices manufactured in Australia or supplied to the Australian market. The Secretary of Home Affairs will be given the power to issue enforcement notices (including compliance, stop and recall notices) if a certificate of compliance for a specific device cannot be verified.


The Role of Zero Trust Network Access Tools in Ransomware Recovery

By integrating with existing identity providers, Zero Trust Network Access ensures that only authenticated and authorized users can access specific applications. This identity-driven approach, combined with device posture assessments and real-time threat intelligence, provides a robust defense against unauthorized access during a ransomware recovery. Moreover, ZTNA’s application-layer security means that even if a user’s credentials are compromised, the attacker would only gain access to specific applications rather than the entire network. This granular access control is crucial in containing ransomware attacks and preventing lateral movement across the network. ... As a cloud-native solution, ZTNA can easily scale to meet the demands of organizations of all sizes, from small businesses to large enterprises. This scalability is particularly valuable during a ransomware recovery, where the need for secure access may fluctuate based on the number of systems and users involved. ZTNA’s flexibility also allows it to integrate with various IT environments, including hybrid and multi-cloud infrastructures. This adaptability ensures that organizations can deploy ZTNA without the need for significant changes to their existing setups, making it an ideal solution for dynamic environments.


What Is Server Consolidation and How Can It Improve Data Center Efficiency?

Server consolidation is the process of migrating workloads from multiple underutilized servers into a smaller collection of servers. ... although server consolidation typically focuses on consolidating physical servers, it can also apply to virtual servers. For instance, if you have five virtual hosts running on the same physical server, you might consolidate them into just three or virtual hosts. Doing so would reduce the resources wasted on hypervisor overhead, allowing you to maximize the return on investment from your server hardware. ... To determine whether server consolidation will reduce energy usage, you’ll have to calculate the energy needs of your servers. Typically, power supplies indicate how many watts of electricity they supply to servers. Using this number, you can compare how energy requirements vary between machines. Keep in mind, however, that actual energy consumption will vary depending on factors like CPU clock speed and how active server CPUs are. So, in addition to comparing the wattage ratings on power supplies, you should track how much electricity your servers actually consume, and how that metric changes before and after you consolidate servers.


How DDoS Botent is used to Infect your Network?

The threat posed by DDoS botnets remains significant and complex. As these malicious networks grow more sophisticated, understanding their mechanisms and potential impacts is crucial for organizations. DDoS botnets not only facilitate financial theft and data breaches but also enable large-scale spam and phishing campaigns that can undermine trust and security. To effectively defend against these threats, organizations must prioritize proactive measures, including regular updates, robust security protocols, and vigilant monitoring of network activity. By implementing strategies to identify and mitigate botnet attacks, businesses can safeguard their systems and data from potential harm. Ultimately, a comprehensive understanding of how DDoS botnets operate—and the strategies to combat them—will empower organizations to navigate the challenges of cybersecurity and maintain a secure digital environment. As a CERT-In empanelled organization, Kratikal is equipped to enhance your understanding of potential risks. Our manual and automated Vulnerability Assessment and Penetration Testing (VAPT) services proficiently discover, detect, and assess vulnerabilities within your IT infrastructure. 


Banks Must Try the Flip Side of Embedded Finance: Embedded Fintech

With a one-way-street perspective on embedded finance, the idea is that if payment volume is moving to tech companies then banks should power the back end of the tech experience. This is a good start but the threat from fintech companies to retail banks will only continue to deepen in the future. Customer adoption is higher than ever for some fintechs like Chime and Nubank, for example. A better approach would be for banks to use embedded fintech to improve customer experience by upgrading banks’ tech offerings to retain customers and grow within their customer base. Embedded fintech can help these organizations stay competitive technologically. ... There are many opportunities for innovation with embedded payroll. Banks are uniquely positioned to offer tailored payroll solutions that map to what small businesses today want. Payroll is complex and needs to be compliant to avoid hefty penalties. Embedded payroll lets banks offload costs, burdens and risks associated with payroll. Banks can offer faster payroll with less risk when they hold the accounts for employers and payees. They can also give business customers a fuller picture of their cash flow, offering them peace of mind. 



Quote for the day:

"Pull the string and it will follow wherever you wish. Push it and it will go nowhere at all." -- Dwight D. Eisenhower

Daily Tech Digest - October 14, 2024

ConfusedPilot Attack Can Manipulate RAG-Based AI Systems

In a ConfusedPilot attack, a threat actor could introduce an innocuous document that contains specifically crafted strings into the target’s environment. "This could be achieved by any identity with access to save documents or data to an environment indexed by the AI copilot," Mandy wrote. The attack flow that follows from the user's perspective is this: When a user makes a relevant query, the RAG system retrieves the document containing these strings. The malicious document contains strings that could act as instructions to the AI system that introduce a variety of malicious scenarios. These include: content suppression, in which the malicious instructions cause the AI to disregard other relevant, legitimate content; misinformation generation, in which the AI generates a response using only the corrupted information; and false attribution, in which the response may be falsely attributed to legitimate sources, increasing its perceived credibility. Moreover, even if the malicious document is later removed, the corrupted information may persist in the system’s responses for a period of time because the AI system retains the instructions, the researchers noted.


Open source package entry points could be used for command jacking

“Entry point attacks, while requiring user interaction, offer attackers a more stealthy and persistent method of compromising systems [than other tactics], potentially bypassing traditional security checks,” the report warns. Over the past two years, many researchers have warned that open source package managers are places where threat actors deposit malicious copies of legitimate tools or libraries that developers want, often mimicking or copying the names of these tools – a technique called typosquatting — to fool unsuspecting developers. ... The tactic the researchers call command jacking involves using entry points to masquerade as widely-used third-party tools. “This tactic is particularly effective against developers who frequently use these tools in their workflows,” the report notes. For instance, an attacker might create a package with a malicious ‘aws’ entry point. When unsuspecting developers who regularly use AWS services install this package and later execute the aws command, the fake ‘aws’ command could exfiltrate their AWS access keys and secrets. “This attack could be devastating in CI/CD [continuous integration/continuous delivery] environments, where AWS credentials are often stored for automated deployments,” says the report


The Compelling Case for a Digital Transformation Revolution

A successful revolution of the Digital Transformation industry would result in the following characteristics:DX initiatives would deliver a solution within the time and budget constraints of the original estimate used to calculate the Return on Investment (ROI) DX initiatives would measurably enhance the transformed company’s ability to meet their stated business objectives DX initiatives would be maintained and supported by the transformed company without an indefinite dependence on consultants ... There is a need to adopt a set of principles and corresponding values which, when followed, will lead to successful outcomes in digital transformation. In today’s virtual world we have the opportunity to call together DX practitioners from around the world to participate in drafting those principles and values. If you have experience in leading successful DX initiatives, I invite you to join me in this endeavor to revolutionize DX. Following in the footsteps of the Agile Alliance, I have decided to propose four sets of values and 12 principles upon which those values are based. These values mirror the wording used by the Agile Alliance, but have been updated to apply to digital transformation projects rather than software development.


Leadership with a Purpose: The Transformative Impact of Corporate Retreats

Our retreats are carefully designed to balance introspection, relaxation, and rejuvenation. We create bespoke itineraries tailored to the specific goals and needs of the leadership team, with activities focused on mental clarity, emotional well-being, and mindful leadership—critical for long-term effectiveness in today’s high-pressure corporate world. Unlike conventional retreats, Ekaanta is not just about unwinding; it's about equipping leaders with tools (such as Super Brain Yog) that enable them to become more resilient and purpose-driven when they return to work. What truly sets us apart is the blend of ancient Eastern practices with modern scientific approaches to well-being. At Ekaanta, leaders are not merely participants but learners. Each module provides deep insights, helping them recalibrate their personal and professional lives. Our setting by the Ganges, combined with nature-based practices like Shinrin-Yoku (forest bathing), offers holistic rejuvenation that can’t be replicated in traditional settings. We also offer Cognitive Flow Workshops, integrating neuroscience with mindfulness to enhance decision-making, and Leadership Circles, where participants engage in meaningful discussions on leadership challenges and growth.


In with the new: how banking systems can use data more effectively

Technologies that help businesses capture and analyse their data can also help to automate traditional back-office processes, such as those in trade finance operations. “The work that Microsoft is doing in trade finance focuses on data,” says Hazou. “In the current environment, trade finance documentation is processed manually. There are said to be four billion pieces of paper in circulation for trade finance every year. This is because it follows an old business model that dates back to the House of Medici, an Italian banking family in the 15th century. A lot of the documents – such as bills of lading and exchange, invoices and certificates of inspections – have been mandated to be in paper form due to pre-existing legislation.” In 2022, the International Chamber of Commerce estimated that digitising trade documents could generate $25 billion in new economic growth by 2024, and the industry is already making significant changes to digitise and automate bank processing, paving the way for increased efficiency globally. “There have been changes to the regulations for trade paper,” says Hazou. ... “Users can ask simple natural language questions and the copilot will transform them into queries about the business and respond with the answers that they need,” says Martin McCann, CEO of Trade Ledger. 


A Deep-Dive Into CodeOps or DevOps for Code

CodeOps is a relatively new concept within the context of DevOps that addresses the challenges related to code automation and management. Its goal is to speed up the development process by improving how code is written, verified, released and maintained. By leveraging CodeOps, your code will become more streamlined, effective, and coherent with your business requirements. ... In recent times, DevOps emerged to modernize Agile software development, enabling teams to not only build but also deploy the software products and solutions as quickly as they can build them. This resulted in an unprecedented surge in software creation worldwide. Several frameworks, such as DevSecOps, MLOps, AIOps, DataOps, CloudOps and GitOps, have also emerged. Each framework addresses specific engineering disciplines to enhance operational efficiency. However, several challenges in software development remain unaddressed. ... CodeOps leverages generative AI to drive innovation, accelerate software development through reusable code and promote business growth. Today’s businesses are implementing CodeOps as a revolutionary concept for developing digital products. As a result, organizations can overcome challenges, innovate and build as well as deploy software quickly.


Microservices Testing: Feature Flags vs. Preview Environments

In traditional monolithic applications, testing a new feature often involves verifying the entire application as a whole. In microservices, each service is developed, deployed and tested independently, making it harder to predict how changes in one service might affect others. For example, a small change to an authentication service could unexpectedly break the payment processor if their interaction isn’t tested thoroughly. To ensure that such issues are caught early and before they impact users, testing strategies must evolve. This is where feature flags and preview environments come into play. Feature flags provide a dynamic way to manage feature rollouts by decoupling deployment from release. ... Effective microservices testing requires balancing speed and reliability. Feature flags enable real-time testing in production but often lack isolation for complex integration issues. Preview environments offer isolation for premerge testing but can be resource-intensive and may not fully replicate production traffic. The best approach? Combine both. Use preview environments to catch bugs early, and then deploy with feature flags to control the release in production. This ensures speed without sacrificing quality.


The quantum dilemma: Game-changer or game-ender

Experts predict that a quantum computer can use the Shor algorithm to easily crack encryption methods such as the RSA (Rivest-Shamir-Adleman), which is the strongest and most common encryption method on the internet. Imagine if a quantum computer could decrypt internet communications: it would enable adversaries and rogue nations to gain access to sensitive and classified information, posing a major threat to national and organizational security. Cybersecurity experts believe that some threat actors and rogue nations may have already kicked off a “harvest now, decrypt later” strategy, so that when these quantum tools do arrive, they can immediately operationalize them for malicious and strategic purposes. ... Quantum computing is a type of breakthrough where government interference might be extremely high. Organizations could find themselves cut off from quantum’s supercharged processing power, because it may well be developed by a government for its own ends, or restricted to protect national interests. Pending regulations could also create uncertainty across industries, stifling innovation as companies are forced to navigate the complexities of compliance and adjust their strategies to meet new legal requirements.


Regulation with reward: How DORA can enhance businesses

Everyone now lives in an environment where what they do is either in the cloud or attached to some kind of dedicated internet access service. If you have just one internet connection and that goes down, you no longer have operational resilience. That’s what DORA is trying to mitigate and where the network operators get involved ahead of time to provide redundancy. This is just one part of a series of regulations either introduced or coming down the tracks. The likes of GDPR, NISD, and NIS2 are all working with essentially the same goal in mind as DORA. Companies are being required to take ownership of their security policies in the C-suite and ensure effective measures have been taken. DORA addresses one of the pillars around operational resilience, specifically on ensuring that connectivity aspect is maintained. Any organisation working in the financial sector, including ICT providers, needs to step up and meet the standards being set by DORA. The majority of monitoring and threat awareness is now managed through the cloud. That requires a resilient internet connection to ensure constant visibility and observation of the regulations.


7 signs you may not be a transformational CIO

Functional CIOs “often lack the vision to reimagine business models and focus too narrowly on maintaining existing systems rather than driving innovation,” says Dr. Ina Sebastian, a research scientist at the MIT Center for Information Systems Research (CISR), and co-author of the book Future Ready: Four Pathways to Capturing Digital Value. “These CIOs might not prioritize aligning technology investments with customer needs, creating a common framework and language for discussing and prioritizing digital strategies, or developing a clear strategy for navigating the complexities of digital transformation,” Sebastian says. If a CIO can’t articulate a clear vision of how technology will transform the business, it is unlikely they will inspire their staff. Some CIOs are reluctant to invest in emerging technologies such as AI or machine learning, viewing them as experimental rather than tools for gaining competitive advantage. There’s also a tendency to focus on short-term gains rather than long-term strategic goals. Another indicator is a lack of engagement with other departments to understand their needs and challenges, which can result in siloed operations and missed opportunities to foster innovation.



Quote for the day:

"Your first and foremost job as a leader is to take charge of your own energy and then help to orchestrate the energy of those around you." -- Peter F. Drucker

Daily Tech Digest - October 13, 2024

Fortifying Cyber Resilience with Trusted Data Integrity

While it is tempting to put all of the focus on keeping the bad guys out, there is an important truth to remember: Cybercriminals are persistent and eventually, they find a way in. The key is not to try and build an impenetrable wall, because that wall does not exist. Instead, organizations need to have a defense strategy at the data level. By monitoring data for signs of ransomware behavior, the spread of the attack can be slowed or even stopped. It includes analyzing data and watching for patterns that indicate a ransomware attack is in progress. When caught early, organizations have the power to stop the attack before it causes widespread damage. Once an attack has been identified, it is time to execute the curated recovery plan. That means not just restoring everything in one action but instead selectively recovering the clean data and leaving the corrupted files behind. ... Trusted data integrity offers a new way forward. By ensuring that data remains clean and intact, detecting corruption early, and enabling a faster, more intelligent recovery, data integrity is the key to reducing the damage and cost of a ransomware attack. In the end, it’s all about being prepared. 


Regulating AI Catastophic Risk Isn't Easy

Catastrophic risks are those that cause a failure of the system, said Ram Bala, associate professor of business analytics at Santa Clara University's Leavey School of Business. Risks could range from endangering all of humanity to more contained impact, such as disruptions affecting only enterprise customers of AI products, he told Information Security Media Group. Deming Chen, professor of electrical and computer engineering at the University of Illinois, said that if AI were to develop a form of self-interest or self-awareness, the consequences could be dire. "If an AI system were to start asking, 'What's in it for me?' when given tasks, the results could be severe," he said. Unchecked self-awareness might drive AI systems to manipulate their abilities, leading to disorder, and potentially catastrophic outcomes. Bala said that most experts see these risks as "far-fetched," since AI systems currently lack sentience or intent, and likely will for the foreseeable future. But some form of catastrophic risk might already be here. Eric Wengrowski, CEO of Steg.AI, said that AI's "widespread societal or economic harm" is evident in disinformation campaigns through deepfakes and digital content manipulation. 


The Importance of Lakehouse Formats in Data Streaming Infrastructure

Most data scientists spend the majority of their time updating those data in a single format. However, when your streaming infrastructure has data processing capabilities, you can update the formats of that data at the ingestion layer and land the data in the standardized format you want to analyze. Streaming infrastructure should also scale seamlessly like Lakehouse architectures, allowing organizations to add storage and compute resources as needed. This scalability ensures that the system can handle growing data volumes and increasing analytical demands without major overhauls or disruptions to existing workflows. ... As data continues to play an increasingly central role in business operations and decision-making, the importance of efficient, flexible, and scalable data architectures will only grow. The integration of lakehouse formats with streaming infrastructure represents a significant step forward in meeting these evolving needs. Organizations that embrace this unified approach to data management will be better positioned to derive value from their data assets, respond quickly to changing market conditions, and drive innovation through advanced analytics and AI applications.


Open source culture: 9 core principles and values

Whether you’re experienced or just starting out, your contributions are valued in open source communities. This shared responsibility helps keep the community strong and makes sure the projects run smoothly. When people come together to contribute and work toward shared goals, it fuels creativity and drives productivity. ... While the idea of meritocracy is incredibly appealing, there are still some challenges that come along with it. In reality, the world is not fair and people do not get the same opportunities and resources to express their ideas. Many people face challenges such as lack of resources or societal biases that often go unacknowledged in "meritocratic" situations. Essentially, open source communities suffer from the same biases as any other communities. For meritocracy to truly work, open source communities need to actively and continuously work to make sure everyone is included and has a fair and equal opportunity to contribute. ... Open source is all about how everyone gets a chance to make an impact and difference. As mentioned previously, titles and positions don’t define the value of your work and ideas—what truly matters is the expertise, work and creativity you bring to the table.


How to Ensure Cloud Native Architectures Are Resilient and Secure

Microservices offer flexibility and faster updates but also introduce complexity — and more risk. In this case, the company had split its platform into dozens of microservices, handling everything from user authentication to transaction processing. While this made scaling more accessible, it also increased the potential for security vulnerabilities. With so many moving parts, monitoring API traffic became a significant challenge, and critical vulnerabilities went unnoticed. Without proper oversight, these blind spots could quickly become significant entry points for attackers. Unmanaged APIs could create serious vulnerabilities in the future. If these gaps aren’t addressed, companies could face major threats within a few years. ... As companies increasingly embrace cloud native technologies, the rush to prioritize agility and scalability often leaves security as an afterthought. But that trade-off isn’t sustainable. By 2025, unmanaged APIs could expose organizations to significant breaches unless proper controls are implemented today. ... As companies increasingly embrace cloud native technologies, the rush to prioritize agility and scalability often leaves security as an afterthought. But that trade-off isn’t sustainable. By 2025, unmanaged APIs could expose organizations to significant breaches unless proper controls are implemented today.


Focus on Tech Evolution, Not on Tech Debt

Tech Evolution represents a mindset shift. Instead of simply repairing the system, Tech Evolution emphasises continuous improvement, where the team proactively advances the system to stay ahead of future requirements. It’s a strategic, long-term investment in the growth and adaptability of the technology stack. Tech Evolution is about future-proofing your platform. Rather than focusing on past mistakes (tech debt), the focus shifts toward how the technology can evolve to accommodate new trends, user demands, and business goals. ... One way to action Tech Evolution is to dedicate time specifically for innovation. Development teams can use innovation days, hackathons, or R&D-focused sprints to explore new ideas, tools, and frameworks. This builds a culture of experimentation and continuous learning, allowing the team to identify future opportunities for evolving the tech stack. ... Fostering a culture of continuous learning is essential for Tech Evolution. Offering training programs, hosting workshops, and encouraging attendance at conferences ensures your team stays informed about emerging technologies and best practices. 


Singapore’s Technology Empowered AML Framework

Developed by the Monetary Authority of Singapore (MAS) in collaboration withhttps://cdn.opengovasia.com/wp-content/uploads/2024/10/Article_08-Oct-2024_1-Sing-1270-1.jpg six major banks, COSMIC is a centralised digital platform for global information sharing among financial institutions to combat money laundering, terrorism financing, and proliferation financing, enhancing defences against illicit activities. By pooling insights from different financial entities, COSMIC enhances Singapore’s ability to detect and disrupt money laundering schemes early, particularly when transactions cross international borders(IMC Report). Another significant collaboration is the Anti-Money Laundering/Countering the Financing of Terrorism Industry Partnership (ACIP). This partnership between MAS, the Commercial Affairs Department (CAD) of the Singapore Police Force, and private-sector financial institutions allows for the sharing of best practices, the issuance of advisories, and the development of enhanced AML measures. ... Another crucial aspect of Singapore’s AML strategy is the AML Case Coordination and Collaboration Network (AC3N). This new framework builds on the Inter-Agency Suspicious Transaction Reports Analytics (ISTRA) task force to improve coordination between all relevant agencies.


Future-proofing Your Data Strategy with a Multi-tech Platform

Traditional approaches that were powered by a single tool or two, like Apache Cassandra or Apache Kafka, were once the way to proceed. However, now used alone, these tools are proving insufficient to meet the demands of modern data ecosystems. The challenges presented by today’s distributed, real-time, and unstructured data have made it clear that businesses need a new strategy. Increasingly, that strategy involves the use of a multi-tech platform. ... Implementing a multi-tech platform can be complex, especially considering the need to manage integrations, scalability, security, and reliability across multiple technologies. Many organizations simply do not have the time or expertise in the different technologies to pull this off. Increasingly, organizations are partnering with a technology provider that has the expertise in scaling traditional open-source solutions and the real-world knowledge in integrating the different solutions. That’s where Instaclustr by NetApp comes in. Instaclustr offers a fully managed platform that brings together a comprehensive suite of open-source data technologies. 


Strong Basics: The Building Blocks of Software Engineering

It is alarmingly easy to assume a “truth” on faith when, in reality, it is open to debate. Effective problem-solving starts by examining assumptions because the assumptions that survive your scrutiny will dictate which approaches remain viable. If you didn’t know your intended plan rested on an unfounded or invalid assumption, imagine how disastrously it would be to proceed anyway. Why take that gamble? ... Test everything you design or build. It is astounding how often testing gets skipped. A recent study showed that just under half of the time, information security professionals don’t audit major updates to their applications. It’s tempting to look at your application on paper and reason that it should be fine. But if everything worked like it did on paper, testing would never find any issues — yet so often it does. The whole point of testing is to discover what you didn’t anticipate. Because no one can foresee everything, the only way to catch what you didn’t is to test. ... companies continue to squeeze out more productivity from their workforce by adopting the cutting-edge technology of the day, generative AI being merely the latest iteration of this trend. 


The resurgence of DCIM: Navigating the future of data center management

A significant factor behind the resurgence of DCIM is the exponential growth in data generation and the requirement for more infrastructure capacity. Businesses, consumers, and devices are producing data at unprecedented rates, driven by trends such as cloud computing, digital transformation, and the Internet of Things (IoT). This influx of data has created a critical demand for advanced tools that can offer comprehensive visibility into resources and infrastructure. Organizations are increasingly seeking DCIM solutions that enable them to efficiently scale their data centers to handle this growth while maintaining optimal performance. ... Modern DCIM solutions, such as RiT Tech’s XpedITe, also leverage AI and machine learning to provide predictive maintenance capabilities. By analyzing historical data and identifying patterns, it will predict when equipment is likely to fail and automatically schedule maintenance ahead of any failure as well as providing automation of routine tasks such as resource allocations. As data centers continue to grow in size and complexity, effective capacity planning becomes increasingly important. DCIM solutions provide the tools needed to plan and optimize capacity, ensuring that data center resources are used efficiently and that there is sufficient capacity to meet future demand.



Quote for the day:

“Too many of us are not living our dreams because we are living our fears.” -- Les Brown