Daily Tech Digest - October 19, 2024

DevSecOps: Building security into each stage of development

While open-source code is important and increasingly invaluable, it’s difficult to know how well it has been maintained, Faus noted. A developer might incorporate third-party code and inadvertently introduce a vulnerability. DevSecOps allows security teams to flag that vulnerability and work with the development team to determine whether the code should be written differently or whether the vulnerability is even dangerous. Ultimately, all parties can be assured that they did everything they could to produce the most secure code possible. In both DevOps and DevSecOps, “the two primary principles are collaboration and transparency,” Faus said. Another core tenet is automation, which creates repeatability and reuse. If a developer knows how to resolve a specific vulnerability, that fix can be reused across every other project with the same vulnerability. ... One of the biggest challenges in implementing security throughout the development cycle is the legacy mindset in how security is treated, Faus pointed out. Organizations must be willing to embrace cultural change and be open, transparent, and collaborative about fixing security issues. Another challenge lies in building in the right type of automation. “One of the first things is to make security a requirement for every new project,” Faus said.


Curb Your Hallucination: Open Source Vector Search for AI

Vector search—especially implementing a RAG approach utilizing vector data stores—is a stark alternative. Instead of relying on a traditional search engine approach, vector search uses the numerical embeddings of vectors to resolve queries. Therefore, searches examine a limited data set of more contextually relevant data. The results include improved performance, earned by efficiently utilizing massive data sets, and greatly decreased risk of AI hallucinations. At the same time, the more accurate answers that AI applications provide when backed by vector search enhance the outcomes and value delivered by those solutions. Combining both vector and traditional search methods into hybrid queries will give you the best of both worlds. Hybrid search ensures you cover all semantically related context, and traditional search can provide the specificity required for critical components ... Several open source technologies offer an easy on-ramp to building vector search capabilities and a path free from proprietary expenses, inflexibility, and vendor lock-in risks. To offer specific examples, Apache Cassandra 5.0, PostgreSQL (with pgvector), and OpenSearch are all open source data technologies that now offer enterprise-ready vector search capabilities and underlying data infrastructure well-suited for AI projects at scale.
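
To make the hybrid idea concrete, here is a minimal sketch of such a query against PostgreSQL with the pgvector extension, assuming a hypothetical docs table that carries both a full-text index and an embedding column; the table, the columns, and the naive blended score are illustrative, not a recommended production ranking.

```python
# Minimal sketch of a hybrid (vector + keyword) query with pgvector.
# Assumes a hypothetical table docs(id, body, body_tsv tsvector, embedding vector(384));
# all names are illustrative only.
import psycopg2

def hybrid_search(conn, query_text: str, query_embedding: str, limit: int = 10):
    # query_embedding is a pgvector literal such as '[0.12, -0.05, ...]'
    sql = """
        SELECT id, body,
               1 - (embedding <=> %s::vector)                    AS vector_score,   -- cosine similarity
               ts_rank(body_tsv, plainto_tsquery('english', %s)) AS keyword_score
        FROM docs
        ORDER BY (1 - (embedding <=> %s::vector))
                 + ts_rank(body_tsv, plainto_tsquery('english', %s)) DESC
        LIMIT %s;
    """
    with conn.cursor() as cur:
        cur.execute(sql, (query_embedding, query_text, query_embedding, query_text, limit))
        return cur.fetchall()
```

The same pattern can be expressed in Cassandra 5.0 and OpenSearch through their respective vector and keyword query interfaces.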


Driving Serverless Productivity: More Responsibility for Developers

First, there are proactive controls, which prevent deployment of non-compliant resources by instilling best practices from the get-go. Second, there are detective controls, which identify violations in resources that are already deployed and then provide remediation steps. It’s important to recognize that these controls must not be static. They need to evolve over time, just as your organization, processes, and production environments evolve. Think of them as checks that place more responsibility on developers to meet high standards, while also making it far easier for them to do so. Going further, a key -- and often overlooked -- part of any governance approach is its notification and supporting messaging system. As your policies mature over time, it is vitally important to have a sense of lineage. If we’re pushing developers to take on more responsibility, and we’ve established that the controls are constantly evolving and changing, notifications cannot feel arbitrary or unsupported. Developers need to be able to understand the source of the standard driving the control and the symptoms of what they’re observing.
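
As a concrete illustration of a detective control, the sketch below (an assumption-laden example, not any specific vendor's framework) scans deployed AWS Lambda functions for a missing ownership tag and attaches the source policy to each finding, so the resulting notification carries the lineage described above; the tag key and policy reference are hypothetical.

```python
# Sketch of a simple detective control: flag deployed Lambda functions that are
# missing a required "owner" tag, and attach the source policy to the finding so
# developers can trace why they were notified. Tag key and policy name are
# illustrative assumptions.
import boto3

REQUIRED_TAG = "owner"
POLICY_REF = "GOV-007: every serverless resource must declare an owning team"

def find_violations(region="us-east-1"):
    lam = boto3.client("lambda", region_name=region)
    findings = []
    for page in lam.get_paginator("list_functions").paginate():
        for fn in page["Functions"]:
            tags = lam.list_tags(Resource=fn["FunctionArn"]).get("Tags", {})
            if REQUIRED_TAG not in tags:
                findings.append({
                    "resource": fn["FunctionArn"],
                    "violation": f"missing required tag '{REQUIRED_TAG}'",
                    "policy": POLICY_REF,  # lineage: which standard drove this control
                    "remediation": f"add a '{REQUIRED_TAG}' tag identifying the owning team",
                })
    return findings
```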


Mastering Observability: Unlocking Customer Insights

If we do something and the behaviour of our users changes in a negative way, if they start doing things slower, less efficiently, then we're not delivering value to the market. We're actually damaging the value we're delivering to the market. We're disrupting our users' flows. So a really good way to think about whether we are creating value or not is how is the behavior of our users, of our stakeholders or our customers changing as a result of us shipping things out? And this kind of behavior change is interesting because it is a measurement of whether we are solving the problem, not whether we're delivering a solution. And from that perspective, I can then offer five different solutions for the same behavior change. I can say, "Well, if that's the behavior change we want to create, this thing you proposed is going to cost five man-millennia to make, but I can do it with a shell script and it's going to be done tomorrow. Or we can do it with an Excel export or we can do it with a PDF or we can do it through a mobile website instead of building a completely new app". And all of these things can address the same behavior change.


AI-generated code is causing major security headaches and slowing down development processes

The main priorities for DevSecOps in terms of security testing were the sensitivity of the information being handled, industry best practice, and easing the complexity of testing configuration through automation, all cited by around a third. Most survey respondents (85%) said they had at least some measures in place to address the challenges posed by AI-generated code, such as potential IP, copyright, and license issues that an AI tool may introduce into proprietary software. However, fewer than a quarter said they were ‘very confident' in their policies and processes for testing this code. ... The big conflict here appears to be security versus speed considerations, with around six-in-ten reporting that security testing significantly slows development. Half of respondents also said that most projects are still being added manually. Another major hurdle for teams is the dizzying number of security tools in use, the study noted. More than eight-in-ten organizations said they're using between six and 20 different security testing tools. This growing array of tools makes it harder to integrate and correlate results across platforms and pipelines, respondents noted, and is making it harder to distinguish between genuine issues and false positives.


How digital twins are revolutionising real-time decision making across industries

Despite the promise of digital twins, Bhonsle acknowledges that there are challenges to adoption. “Creating and maintaining a digital twin requires substantial investments in infrastructure, including sensors, IoT devices, and AI capabilities,” he points out. Security is another concern, particularly in industries like healthcare and energy, where compromised data streams could lead to life-threatening consequences. However, Bhonsle emphasises that the rewards far outweigh the risks. “As digital twin technology matures, it will become more accessible, even to smaller organisations, offering them a competitive edge through optimised operations and data-driven decisions.” ... Digital twins are transforming how businesses operate by providing real-time insights that drive smarter decisions. From manufacturing floors to operating rooms, and from energy grids to smart cities, this technology is reshaping industries in unprecedented ways. As Bhonsle aptly puts it, “The rise of digital twins signals a new era of efficiency and agility—an era where decisions are no longer based on assumptions but driven by data in real time.” As organisations embrace this evolving technology, they unlock new opportunities to optimise performance and stay ahead in a fast-changing world.


AI and tech in banking: Half the industry lags behind

Gupta emphasised that a superficial approach to digitalisation—what he called “putting lipstick on a pig”—is common in many institutions. These banks often adopt digital tools without rethinking the processes behind them, resulting in inefficiencies and missed opportunities for transformation. In addition, the culture of risk aversion in many financial institutions makes them slow to experiment with new technologies. According to a Deloitte survey, 62% of banking executives cited corporate culture as the biggest barrier to successful digital transformation. A fear of regulatory hurdles and data privacy issues also compounds this reluctance to fully embrace AI. ... The rise of fintech companies is also reshaping the financial landscape. Digital-first challengers like Revolut and Monzo are making waves by offering streamlined, customer-centric services that appeal to tech-savvy users. These companies, unencumbered by legacy systems, have been able to rapidly adopt AI, providing highly personalised products and services through their digital platforms. The UK fintech sector alone saw record investment in 2021, with $11.6 billion pouring into the industry, according to Innovate Finance. This influx of capital has enabled fintech firms to invest in AI technologies, providing stiff competition to traditional banks that are slower to adapt. 


The Era Of Core Banking Transformation Trade-offs Is Finally Over

There must be a better way than forcing banks to choose their compromises. Banking today needs a next-generation solution that blends the best of configuration neo cores – speed to market, lower cost, derisked migration – with the benefits of framework neo cores – full customization of products and even the core, with massive scale built in as standard. If banks and financial services aren’t forced to compromise because of their choice of cloud-native core solution, they can accelerate their innovation. Our research reveals that, while AI remains front of mind for many IT decision makers in banking and financial services, only one in three (32%) have so far integrated AI into their core solution. This is concerning. According to McKinsey, banking is one of the top four sectors set to capitalize on that market opportunity – but that forecast will remain a pipe dream if banks can’t integrate AI efficiently, securely and at massive scale. ... One thing is certain: whether configuration or framework, neo cores are not the final destination for banking. They have been a helpful stepping stone to cloud-native technology over the last decade, but banks and financial services now need a next-generation core technology that doesn’t demand so many compromises.


10 Risks of IoT Data Management

IoT data management faces significant security risks due to the large attack surface created by interconnected devices. Each device presents a potential entry point for cyberattacks, including data breaches and malware injections. Attackers may exploit vulnerabilities in device firmware, weak authentication methods, or unsecured network protocols. To mitigate these risks, implementing end-to-end encryption, device authentication, and secure communication channels is essential. ... IoT devices often collect sensitive personal information, which raises concerns about user privacy. The lack of transparency in how data is collected, processed, and shared can erode user trust, especially when data is shared with third parties without explicit consent. Addressing privacy concerns requires the anonymization and pseudonymization of personal data. ... The massive influx of data generated by IoT devices can overwhelm traditional storage systems, leading to data overload. Managing this data efficiently is a challenge, as continuous data generation requires significant storage capacity and processing power. To solve this, organizations can adopt edge computing, which processes data closer to the source, reducing the need for centralized storage. 
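
As a small illustration of the pseudonymization step, the sketch below replaces a device identifier with a keyed hash before telemetry leaves the edge; the field names and the hard-coded key are placeholders for illustration only.

```python
# Sketch: pseudonymize device telemetry before forwarding it upstream.
# A keyed HMAC replaces the direct identifier so records can still be correlated,
# while mapping back to the real device requires the secret key.
# Field names and key handling are illustrative assumptions.
import hmac
import hashlib
import json

PSEUDONYM_KEY = b"fetch-this-from-a-secrets-manager"  # never hard-code in production

def pseudonymize(record: dict) -> dict:
    device_id = record.pop("device_id")
    record["device_pseudonym"] = hmac.new(
        PSEUDONYM_KEY, device_id.encode(), hashlib.sha256
    ).hexdigest()
    # Drop fields not needed downstream (data minimization).
    record.pop("owner_email", None)
    return record

raw = {"device_id": "thermostat-42", "owner_email": "resident@example.com", "temp_c": 21.5}
print(json.dumps(pseudonymize(raw)))
```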


Managing bank IT spending: Five questions for tech leaders

The demand for development resources and the need to manage tech debt are only expected to increase. Tech talent has never been cheap, and inflation is pushing up salaries. Cybersecurity threats are also becoming more urgent, demanding greater funds to address them. And figuring out how to integrate generative AI takes time, personnel, and money. Despite these competing priorities and challenges, bank IT leaders have an opportunity to make their mark on their organizations and position themselves as central to their success—if they can address some key problems. ... In our experience, IT leaders should never underestimate the importance of controlling and reducing tech debt whenever possible. Actions to correct course could include conducting frequent assessments to determine which areas are accumulating tech debt and developing plans to reduce it as much as possible. More than many other industries, banking is a hotbed of new app development. Leaders who address these key questions can ensure they are directing their talent and resources to game-changing app development that directly contributes to their bank’s bottom line.



Quote for the day:

“It's failure that gives you the proper perspective on success.” -- Ellen DeGeneres

Daily Tech Digest - October 18, 2024

Breaking Barriers: The Power of Cross-Departmental Collaboration in Modern Business

In an era of rapid change and increasing complexity, cross-departmental collaboration is no longer a luxury but a necessity. By dismantling silos, fostering trust, and leveraging technology, organizations can unlock their full potential, drive innovation, and enhance customer satisfaction. While industry leaders have shown the way, the journey to a truly collaborative culture requires sustained effort and adaptation. To embark on this collaborative journey, organizations must prioritize collaboration as a core value, invest in leadership development, empower employees, leverage technology, and measure progress. Creating a collaborative culture is like building a bridge between departments: it requires strong foundations, continuous maintenance, and a shared vision. By doing so, they can create a culture where innovation thrives, employees are engaged, and customers benefit from improved products and services. Looking ahead, successful organizations will not only embrace collaboration but also anticipate its evolution in response to emerging trends like remote work, artificial intelligence, and data privacy. By proactively addressing these challenges and opportunities, businesses can position themselves as leaders in the collaborative economy.


Singapore releases guidelines for securing AI systems and prohibiting deepfakes in elections

"AI systems can be vulnerable to adversarial attacks, where malicious actors intentionally manipulate or deceive the AI system," said Singapore's Cyber Security Agency (CSA). "The adoption of AI can also exacerbate existing cybersecurity risks to enterprise systems, [which] can lead to risks such as data breaches or result in harmful, or otherwise undesired model outcomes." "As such, AI should be secure by design and secure by default, as with all software systems," the government agency said. ... "The Bill is scoped to address the most harmful types of content in the context of elections, which is content that misleads or deceives the public about a candidate, through a false representation of his speech or actions, that is realistic enough to be reasonably believed by some members of the public," Teo said. "The condition of being realistic will be objectively assessed. There is no one-size-fits-all set of criteria, but some general points can be made." These encompass content that "closely match[es]" the candidates' known features, expressions, and mannerisms, she explained. The content also may use actual persons, events, and places, so it appears more believable, she added.


2025 and Beyond: CIOs' Guide to Stay Ahead of Challenges

As enterprises move beyond the "experiment" or the "proof of concept" stage, it is time to design and formalize a well-thought-out AI strategy that is tailored to their unique business needs. According to Gartner, while 92% of CIOs anticipate AI will be integrated into their organizations by 2025 - broadly driven by increasing pressure from CEOs and boards - 49% of leaders admit their organizations struggle to assess and showcase AI's value. That's where the strategy kicks in. ... Forward-looking CIOs are focused on using data for decision-making while tackling challenges related to its quality and availability. Data governance is a crucial aspect to deal with. As data systems become more interconnected, managing complexity is essential. Going forward, CIOs will have to focus on optimizing current systems, raising data literacy, managing complexity, and establishing strong governance. The importance of shifting IT from a cost center to a profit driver lies in focusing on data-driven revenue generation, said Eric Johnson ... CIOs should be able to communicate the strategic use of IT investment and present it as a core enabler for competitiveness. 


5 Ways to Reduce SaaS Security Risks

It's important to understand what corporate assets are visible to attackers externally and, therefore, could be a target. Arguably, the SaaS attack surface extends to every SaaS, IaaS and PaaS application, account, user credential, OAuth grant, API, and SaaS supplier used in your organization—managed or unmanaged. Monitoring this attack surface can feel like a Sisyphean task, given that any user with a credit card, or even just a corporate email address, has the power to expand the organization's attack surface in just a few clicks. ... Single sign-on (SSO) provides a centralized place to manage employees' access to enterprise SaaS applications, which makes it an integral part of any modern SaaS identity and access governance program. Most organizations strive to ensure that all business-critical applications (i.e., those that handle customer data, financial data, source code, etc.) are enrolled in SSO. However, when new SaaS applications are introduced outside of IT governance processes, this makes it difficult to truly assess SSO coverage. ... Multi-factor authentication adds an extra layer of security to protect user accounts from unauthorized access. By requiring multiple factors for verification, such as a password and a unique code sent to a mobile device, it significantly decreases the chances of hackers gaining access to sensitive information. 
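
As an illustration of how the second factor is typically checked, here is a minimal time-based one-time password (TOTP) sketch in the spirit of RFC 6238; it is a toy for explanation, not a hardened implementation, and real deployments should rely on a vetted library.

```python
# Minimal TOTP sketch (RFC 6238 style) to illustrate the "second factor" check:
# both the server and the user's authenticator derive a short-lived code from a
# shared secret and the current time, so a stolen password alone is not enough.
import base64
import hmac
import hashlib
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32: str, submitted_code: str) -> bool:
    return hmac.compare_digest(totp(secret_b32), submitted_code)
```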


World’s smallest quantum computer unveiled, solves problems with just 1 photon

In the new study, the researchers successfully implemented Shor’s algorithm using a single photon by encoding and manipulating 32 time-bin modes within its wave packet. This achievement highlights the strong information-processing capabilities of a single photon in high dimensions. According to the team, with commercially available electro-optic modulators capable of 40 GHz bandwidth, it is feasible to encode over 5,000 time-bin modes on long single photons. While managing high-dimensional states can be more challenging than working with qubits, this work demonstrates that these time-bin states can be prepared and manipulated efficiently using a compact programmable fiber loop. Additionally, high-dimensional quantum gates can enhance manipulation, using multiple photons for scalability. Reducing the number of single-photon sources and detectors can improve the efficiency of counting coincidences over accidental counts. Research indicates that high-dimensional states are more resistant to noise in quantum channels, making time-bin-encoded states of long single photons promising for future high-dimensional quantum computing.
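
For readers unfamiliar with Shor's algorithm, the quantum stage (here, the photonic time-bin circuit) performs the period finding; the surrounding steps are classical and easy to sketch. The example below shows only that classical reduction from a found period to the factors, with the period found by brute force purely for illustration.

```python
# Classical post-processing in Shor's algorithm: once the quantum stage has found
# the period r of a^x mod N, the factors of N follow from a gcd computation.
# The "quantum" period finding is replaced by brute force here, for illustration only.
from math import gcd

def find_period_classically(a: int, N: int) -> int:
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N: int, a: int):
    if gcd(a, N) != 1:
        return gcd(a, N), N // gcd(a, N)      # lucky guess already shares a factor
    r = find_period_classically(a, N)         # the quantum computer's job in the real algorithm
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None                           # bad choice of a; retry with another base
    p = gcd(pow(a, r // 2) - 1, N)
    q = gcd(pow(a, r // 2) + 1, N)
    return p, q

print(shor_classical_part(15, 7))   # expected (3, 5) for this textbook example
```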


Google creates the Mother of all Computers: One trillion operations per second and a mystery

The capability of exascale computing to handle massive amounts of data and run through simulation has created new avenues for scientific modeling. From mimicking black holes and the birth of galaxies to introducing newer and evolved treatments and diagnoses through customized genome mapping across the globe, this technology has all the potential to burst open newer frontiers of knowledge about the cosmos. While current supercomputers would otherwise spend years solving computations, exascale machines will pave the way to areas of knowledge that were previously uncharted. For instance, the exascale solution in astrophysics holds the prospect of modeling many phenomena, such as star and galaxy formation, with higher accuracy. For example, these simulations could reveal new detections of the fundamental laws of physics and be used to answer questions about the universe’s formation. In addition, in fields like particle physics, researchers could analyze data from high-energy experiments far more efficiently and perhaps discover more about the nature of matter in the universe. AI is another area to benefit from exascale computing for a supercharge in performance. Present models of AI are very efficient, but the current computing machines constrain them. 


Taming the Perimeter-less Nature of Global Area Networks

The availability of data and intelligence from across the global span of the network goes a long way in helping ITOps teams understand all the component services and providers their business has exposure to or reliance on. It means being able to pinpoint an impending problem or the root cause of a developing issue within their global area network and to pursue remediation with the right third-party provider ... Certain traffic engineering actions taken on owned infrastructure can alter connectivity and performance by changing the path that traffic takes in the unowned infrastructure portion of the global area network. Consider these actions as adjustments to a network segment that is within your control, such as a network prefix or a BGP route change to bypass a route hijack happening downstream in the unowned Internet-based segment. These traffic engineering actions are manageable tasks that ITOps teams or their automated systems can execute within a global area network setup. While they are implemented in the parts of the network directly controlled by ITOps, their impact is designed to span the entire service delivery chain and its performance. 


Firms use AI to keep reality from unreeling amid ‘global deepfake pandemic’

Seattle-based Nametag has announced the launch of its Nametag Deepfake Defense product. A release quotes security technologist and cryptography expert Bruce Schneier, who says “Nametag’s Deepfake Defense engine is the first scalable solution for remote identity verification that’s capable of blocking the AI deepfake attacks plaguing enterprises.” And make no mistake, says Nametag CEO Aaron Painter: “we’re facing a global deepfake pandemic that’s spreading ransomware and disinformation.” The company cites numbers from Deloitte showing that over 50 percent of C-suite executives expect an increase in the number and size of deepfake attacks over the next 12 months. Deepfake Defense consists of three core proprietary technologies: Cryptographic Attestation, Adaptive Document Verification and Spatial Selfie. The first “blocks digital injection attacks and ensures data integrity using hardware-backed keystore assurance and secure enclave technology from Apple and Google.” The second “prevents ID presentation attacks using proprietary AI models and device telemetry that detect even the most sophisticated digital manipulation or forgery.” 


Evolving Data Governance in the Age of AI: Insights from Industry Experts

While evolving existing data governance to meet AI needs is crucial, many organizations need to advance their DG first, before delving into AI governance. Existing data quality does not cover AI requirements. As mentioned in the previous section, current DG programs enforce roles, procedures, and tools for some structured data throughout the company. Yet AI models learn from and use very large data sets containing structured and unstructured data. All this data needs to be of good quality too, so that the AI model can respond accurately, completely, consistently, and relevantly. Companies frequently struggle to determine if their unstructured data, including videos and PowerPoint slides, is of sufficient quality for AI training and implementation. If organizations don’t address this issue, Haskell said, they “throw dollars at AI and AI tools,” because the bad-quality data that goes in comes right back out. For this reason, the fundamentals of data quality and clean-up take precedence over the drive to implement AI. O’Neal likened AI and its governance to an iceberg. The CEO and senior management see only the tip, visible with all of AI’s promise and reward. 


On the Road to 2035, Banking Will Walk One of These Three Paths

Economist Impact’s latest report walks through three different potential scenarios that the banking sector will zero in on by 2035. Each paints a vivid picture of how technological advancements, shifting consumer expectations and evolving global dynamics could reshape the financial world as we know it. ... Digital transformation will be central to banking’s future, regardless of which scenario unfolds. Banks that fail to innovate and adapt to new technologies risk becoming obsolete. Trust will be a critical currency in the banking sector of 2035. Whether it’s through enhanced data protection, ethical AI use, or commitment to sustainability, banks must find ways to build and maintain customer trust in an increasingly complex world. The role of banks is likely to expand beyond traditional financial services. In all scenarios, we see banks taking on new responsibilities, whether it’s driving sustainable development, bridging geopolitical divides, or serving as the backbone for broader digital ecosystems. Flexibility and adaptability will be crucial for success. The future is uncertain and potentially fragmented, requiring banks to be agile in their strategies and operations to thrive in various possible environments.



Quote for the day:

"The secret of my success is a two word answer: Know people." -- Harvey S. Firestone

Daily Tech Digest - October 17, 2024

Digital addiction detox: Streamline tech to maximize impact, minimize risks

While digital addiction has been extensively studied at the individual level, organizational digital addiction is a relatively new area of concern. This addiction manifests as a tendency for the organization to throw technology mindlessly at any problem, often accumulating useless or misused technologies that generate ongoing costs without delivering proportional value. ... CIOs must simultaneously implement controls to prevent their organizations from reaching a tipping point where healthy exploration transforms into digital addiction. Striking this balance is delicate and requires careful management. Many innovative technology companies have found success by implementing “runways” for new products or technologies. These runways come with specific criteria for either “takeoff” or “takedown”. ... Unchecked technology adoption poses significant risks to organizations, often leading to vulnerabilities in their IT ecosystems. When companies rush to implement technologies without proper planning and safeguards, they lack the resilience to bounce back from adverse conditions because of insufficient redundancy and flexibility within systems, leaving organizations exposed to single points of failure.


Why are we still confused about cloud security?

A prevalent issue is publicly exposed storage, which often includes sensitive data due to excessive permissions, making it a prime target for ransomware attacks. Additionally, the improper use of access keys remains a significant threat, with a staggering 84% of organizations retaining unused highly privileged keys. Such security oversights have historically facilitated breaches, as evidenced by incidents like the MGM Resorts data breach in September 2023. ... Kubernetes environments present another layer of risk. The study notes that 78% of organizations have publicly accessible Kubernetes API servers, with significant portions allowing inbound internet access and unrestricted user control. This lax security posture exacerbates potential vulnerabilities. Addressing these vulnerabilities demands a comprehensive approach. Organizations should adopt a context-driven security ethos by integrating identity, vulnerability, misconfiguration, and data risk information. This unified strategy allows for precise risk assessment and prioritization. Managing Kubernetes access through adherence to Pod Security Standards and limiting privileged containers is essential, as is the regular audit of credentials and permissions to enforce the principle of least privilege.
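
Auditing unused credentials is one of the easier items to automate. As a hedged sketch (AWS and boto3 assumed, and the 90-day threshold is arbitrary), the following flags IAM access keys that have not been used recently so they can be rotated or removed.

```python
# Sketch: flag IAM access keys that have not been used in 90 days so they can be
# rotated or removed. Threshold and reporting format are illustrative assumptions.
from datetime import datetime, timedelta, timezone
import boto3

STALE_AFTER = timedelta(days=90)

def stale_access_keys():
    iam = boto3.client("iam")
    now = datetime.now(timezone.utc)
    stale = []
    for user_page in iam.get_paginator("list_users").paginate():
        for user in user_page["Users"]:
            for key in iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]:
                last = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
                used = last["AccessKeyLastUsed"].get("LastUsedDate")  # absent if never used
                if used is None or now - used > STALE_AFTER:
                    stale.append((user["UserName"], key["AccessKeyId"], used))
    return stale

for user, key_id, last_used in stale_access_keys():
    print(f"{user}: {key_id} last used {last_used or 'never'}")
```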


The Architect’s Guide to Interoperability in the AI Data Stack

At the heart of an AI-driven world is data — lots of it. The choices you make today for storing, processing and analyzing data will directly affect your agility tomorrow. Architecting for interoperability means selecting tools that play nicely across environments, reducing reliance on any single vendor, and allowing your organization to shop for the best pricing or feature set at any given moment. ... Interoperability extends to query engines as well. Clickhouse, Dremio and Trino are great examples of tools that let you query data from multiple sources without needing to migrate it. These tools allow users to connect to a wide range of sources, from cloud data warehouses like Snowflake to traditional databases such as MySQL, PostgreSQL and Microsoft SQL Server. With modern query engines, you can run complex queries on data wherever it resides, helping avoid costly and time-consuming migrations. ... Architecting for interoperability is not just about avoiding vendor lock-in; it’s about building an AI data stack that’s resilient, flexible and cost-effective. By selecting tools that prioritize open standards, you ensure that your organization can evolve and adapt to new technologies without being constrained by legacy decisions. 
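
As a sketch of what a federated query looks like in practice, the example below uses the Trino Python client to join a table in a PostgreSQL catalog with one in a Hive/object-store catalog without moving either dataset; the host, catalogs, and table names are illustrative assumptions.

```python
# Sketch of a federated query through Trino: join data that lives in PostgreSQL
# with data in a Hive/object-store catalog, without migrating either side.
# Host, user, catalogs, and table names are placeholders.
from trino.dbapi import connect

conn = connect(host="trino.internal.example.com", port=8080, user="analyst",
               catalog="postgresql", schema="public")
cur = conn.cursor()
cur.execute("""
    SELECT c.customer_id, c.segment, sum(e.tokens_used) AS tokens
    FROM postgresql.public.customers AS c
    JOIN hive.ai_logs.embedding_events AS e
      ON c.customer_id = e.customer_id
    GROUP BY c.customer_id, c.segment
    ORDER BY tokens DESC
    LIMIT 20
""")
for row in cur.fetchall():
    print(row)
```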


The role of compromised cyber-physical devices in modern cyberattacks

A cyber-physical device is one that connects the physical world and computer networks. Many people may associate the term “cyber-physical device” with Supervisory Control and Data Acquisition (SCADA) systems and OT network segments, but there’s more to it. Devices that interconnect the physical world give attackers a unique perspective: they allow them to perform on-ground observation of events, to monitor and observe the impact of their attacks, and can even sometimes make an impact on the physical world ... Many devices are compromised for the simple purpose of creating points of presence at new locations, so attackers can bypass geofencing restrictions. These devices are often joined and used as a part of overlay networks. Many of these devices are not traditional routers but could be anything from temperature sensors to cameras. We have even seen compromised museum Android display boards in some countries. ... Realistically, I don’t believe there is a way to decrease the number of compromised devices. We are moving towards networks where IoT devices will be one of the predominant types of connected devices, with things like a dishwasher or fridge having an IP address. 


Security at the Edge Needs More Attention

CISOs should verify that the tools they acquire and use do what they claim to do, or they may be in for surprises. Meanwhile, data and IP are at risk because it’s so easy to sign up for and use third-party cloud services and SaaS that the average users may not associate their data usage with organizational risk. “Users submitting spreadsheet formula problems to online help forms may inadvertently be sharing corporate data. People running grammar checking tools on emails or documents may be doing the same,” says Roger Grimes, data-driven defense evangelist at security awareness training and simulated phishing platform KnowBe4 in an email interview. “It's far too easy for someone using an AI-enabled tool to not realize they are inadvertently leaking confidential information outside their organizational environment.”  ... It’s important for CISOs to have knowledge of and visibility into every asset in their company’s tech stack, though some CISOs see room for improvement. “You spend a lot of time and money on people, processes and technology to develop a layered security approach and defense in depth, and that doesn't work if you don't know you have something to defend there,” says Fowler.


CIOs must also serve as chief AI officers, according to Salesforce survey

CIOs are now in the business of manufacturing intelligence and autonomous work. CIOs are now responsible for creating a work environment where humans and AI agents can collaborate and co-create value for stakeholders -- employees, customers, partners, and communities. CIOs must design, own, and deliver the roadmap to the autonomous enterprise, where autonomous work is maturing at light speed. ... CIOs are under pressure to quickly learn about, and implement, effective AI solutions in their businesses. While more than three of five CIOs think stakeholder expectations for their AI expertise are unrealistic, only 9% believe their peers are more knowledgeable. CIOs are also partnering with analyst firms (Gartner, Forrester, IDC, etc.) and technology vendors to learn more about AI. ... Sixty-one percent of CIOs feel they're expected to know more about AI than they do, and their peers at other companies are their top sources of information. CIOs must become better AI storytellers. In 1994, Steve Jobs said: "The most powerful person in the world is the storyteller. The storyteller sets the vision, values, and agenda of an entire generation that is to come." There is no better time than now for CIOs to lead the business transformation towards becoming AI-led companies.


Policing and facial recognition: What’s stopping them?

The question contains two “ifs” and a presumption; all are carrying a lot of weight. The first “if” is the legal basis for using FRT. Do the police have the power to use it? In England and Wales the police certainly have statutory powers to take and retain images of people, along with common law powers to obtain and store information about the citizen’s behavior in public. The government’s own Surveillance Camera Code of Practice (currently on policy’s death row) provides guidance to chief officers on how to do this and on operating overt surveillance systems in public places generally. The Court of Appeal found a “sufficient legal framework” covered police use of FRT, one that was capable of supporting its lawful deployment. ... The second “if” relates to the technology, i.e. “if FRT works, what’s stopping the police from using it?” Since a shaky introduction around 2015, when it didn’t work as hoped (or required), police facial recognition technology has come on significantly. The accuracy of the technology is much better, but is it accurate to say it now “works”? Each technology partner and purchasing police force must answer that for themselves – as for any other operational capability. That’s accountability. 


How AI is becoming a powerful tool for offensive cybersecurity practitioners

What makes offensive security all the more important is that it addresses a potential blind spot for developers. “As builders of software, we tend to think about using whatever we’ve developed in the ways that it’s intended to be used,” says Caroline Wong, chief strategy officer at Cobalt Labs, a penetration testing company. In other words, Wong says, there can be a bias towards overemphasizing the good ways in which software can be used, while overlooking misuse and abuse cases or disregarding potentially harmful uses. “One of the best ways to identify where and how an organization or a piece of software might be susceptible to attack is by taking on the perspective of a malicious person: the attacker’s mindset,” Wong says. ... In addition to addressing manpower issues, AI can assist practitioners in scaling up their operations. “AI’s ability to process vast datasets and simulate large-scale attacks without human intervention allows for testing more frequently and on a broader scale,” says Augusto Barros, a cyber evangelist at Securonix, a security analytics and operations management platform provider. “In large or complex environments, human operators would struggle to perform consistent and exhaustive tests across all systems,” Barros says. 


While Cyberattacks Are Inevitable, Resilience Is Vital

Cybersecurity is all about understanding risk, applying the basic controls, and sprinkling in new technologies to keep the bad guys out while keeping the system up and running by eliminating as much unplanned downtime as possible. “Cybersecurity is a risk game—as long as computers are required to deliver critical products and services, they will have some vulnerability to an attack,” Carrigan said. “Risk is a simple equation: Risk = Likelihood x Consequence. Most of our investments have been in reducing the ‘likelihood’ side of the equation. The future of OT cybersecurity will be in reducing the consequences of cyberattacks—specifically, how to minimize the impact of infiltration and restore operations within an acceptable period.” Manufacturers must understand their risk appetite and know what and where their organization’s crown jewels are and how to protect them. “Applying the same security practices to all OT assets is not practical—some are more important than others, even within the same company and the same OT network,” Carrigan said. Remaining resilient to a cyber incident—any kind of incident—means manufacturers must apply the basics, sprinkle in some new technologies, and plan, test, revise and then start that process all over again. 
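
The quoted formula lends itself to a tiny worked example. The sketch below scores a handful of hypothetical OT assets with 1-5 likelihood and consequence ratings and ranks them, which is the kind of triage that directs consequence-reducing controls at the crown jewels first; all numbers are invented.

```python
# Worked example of the quoted formula: Risk = Likelihood x Consequence.
# Scores are hypothetical 1-5 ratings; the point is ranking assets so that
# consequence-reducing controls reach the most important ones first.
assets = [
    {"name": "historian server",        "likelihood": 4, "consequence": 2},
    {"name": "batch control PLC",       "likelihood": 2, "consequence": 5},
    {"name": "engineering workstation", "likelihood": 3, "consequence": 4},
]

for asset in assets:
    asset["risk"] = asset["likelihood"] * asset["consequence"]

for asset in sorted(assets, key=lambda a: a["risk"], reverse=True):
    print(f'{asset["name"]:<24} risk = {asset["likelihood"]} x {asset["consequence"]} = {asset["risk"]}')
```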


AI-Powered DevOps: Best Practices for Business Adoption

In security, AI tools are proving highly effective at proactively identifying and addressing vulnerabilities, boosting threat detection capabilities, and automating responses to emerging risks. Nonetheless, significant potential for AI remains in phases such as release management, deployment, platform engineering, and planning. These stages, which are crucial for ensuring software stability and scalability, could greatly benefit from AI's predictive abilities, resource optimization, and the streamlining of operational and maintenance processes. ... While generative AI and AI copilots have been instrumental in driving adoption of this technology, there remains a major shortage of AI expertise within DevOps. This gap is significant, especially given that humans remain deeply involved in the process, with over two-thirds of our respondents indicating they manually review AI-generated outputs at least half the time. To address these challenges, organizations should devise specialized training courses to properly equip their DevOps teams with the skills to leverage AI tools. Whether through industry-recognized courses or internal programs, encouraging certification can enhance technical expertise significantly.



Quote for the day:

"All progress takes place outside the comfort zone." -- Michael John Bobak

Daily Tech Digest - October 16, 2024

AI Models in Cybersecurity: From Misuse to Abuse

In a constant game of whack-a-mole, both defenders and attackers are harnessing AI to tip the balance of power in their respective favor. Before we can understand how defenders and attackers leverage AI, we need to acknowledge the three most common types of AI models currently in circulation. ... Generative AI, supervised machine learning, and unsupervised machine learning are the three main types of AI models. Generative AI tools such as ChatGPT, Gemini, and Copilot can understand human input and deliver outputs in a human-like response. Notably, generative AI continuously refines its outputs based on user interactions, setting it apart from traditional AI systems. Unsupervised machine learning models are great at analyzing and identifying patterns in vast unstructured or unlabeled data. By contrast, supervised machine learning algorithms make predictions from well-labeled, well-tagged, and well-structured datasets. ... Despite the media hype, the usage of AI by cybercriminals is still at a nascent stage. This doesn’t mean that AI is not being exploited for malicious purposes, but it’s also not causing the decline of human civilization like some purport it to be. Cybercriminals use AI for very specific tasks.
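
A toy contrast between the supervised and unsupervised flavors described above: a classifier trained on labeled examples versus an anomaly detector fitted to unlabeled data. The features and labels below are synthetic placeholders, not a real detection pipeline.

```python
# Toy contrast of two of the model types described above, using scikit-learn:
# a supervised classifier learns from labeled examples, while an unsupervised
# anomaly detector finds outliers in unlabeled feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))              # e.g., packet size, rate, entropy, duration
y = (X[:, 0] + X[:, 2] > 1.5).astype(int)  # synthetic "malicious" label

supervised = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
unsupervised = IsolationForest(contamination=0.05, random_state=0).fit(X)

sample = rng.normal(size=(1, 4)) * 4       # an unusually extreme observation
print("classifier says malicious:", bool(supervised.predict(sample)[0]))
print("anomaly detector flags it:", unsupervised.predict(sample)[0] == -1)
```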


Meet Aria: The New Open Source Multimodal AI That's Rivaling Big Tech

Rhymes AI has released Aria under the Apache 2.0 license, allowing developers and researchers to adapt and build upon the model. It is also a very powerful addition to an expanding pool of open-source AI models, led by Meta and Mistral, that perform similarly to the more widely adopted closed-source models. Aria's versatility also shines across various tasks. In the research paper, the team explained how they fed the model an entire financial report and it performed an accurate analysis: it can extract data from reports, calculate profit margins, and provide detailed breakdowns. When tasked with weather data visualization, Aria not only extracted the relevant information but also generated Python code to create graphs, complete with formatting details. The model's video processing capabilities also seem promising. In one evaluation, Aria dissected an hour-long video about Michelangelo's David, identifying 19 distinct scenes with start and end times, titles, and descriptions. This isn't simple keyword matching but a demonstration of context-driven understanding. Coding is another area where Aria excels. It can watch video tutorials, extract code snippets, and even debug them. 


Preparing for IT failures in an unpredictable digital world

By embracing multiple vendors and hybrid cloud environments, organizations would be better prepared so that if one platform goes down, the others can pick up the slack. While this strategy increases ecosystem complexity, it buys down the risk accepted by ensuring you’re prepared to recover and resilient to widespread outages in complex, hybrid, and cloud-based environments. ... It’s clear that IT failures aren’t just a possibility — they are inevitable. Simply waiting for things to go wrong before reacting is a high-risk approach that’s asking for trouble. Instead, organizations must go on the front foot and adopt a strategy that focuses on early detection, continuous monitoring, and risk prevention. This means planning for worst-case scenarios, but also preparing for recovery. After all, one of the planks of IT infrastructure management is business continuity. It’s about optimal performance when things are going well while ensuring that systems recover quickly and continue operating even in the face of major disruptions. This requires a holistic approach to IT management, where failures are anticipated, and recovery plans are in place. 


CIOs must adopt startup agility to compete with tech firms

CIOs often struggle with soft skills, despite knowing what needs to be done. We engage with CEOs and CFOs to foster alignment among the leadership team, as strong support from them is crucial. CIOs also need help gaining buy-in from other CXOs, particularly when it comes to automation initiatives. Our approach emphasises unlocking bandwidth within IT departments. If 90% of their resources are spent on running the business, there’s little time for innovation. We help them automate routine tasks, which allows their best people to focus on transformative efforts. ... CIOs play a crucial role in driving innovation and maintaining cost efficiency while justifying tech investments, especially as organisations become digital-first. A key challenge is controlling cloud costs, which often escalate as IT spending moves outside central control. To counter this, CIOs should streamline access to central services, reduce redundant purchases, and negotiate larger contracts for better discounts. They must also recognise that cloud services are not always cheaper; cost-efficiency depends on application types and usage. 


AI makes edge computing more relevant to CIOs

Many user-facing situations could benefit from edge-based AI. Payton emphasizes facial recognition technology, real-time traffic updates for semi-autonomous vehicles, and data-driven enhancements on connected devices and smartphones as possible areas. “In retail, AI can deliver personalized experiences in real-time through smart devices,” she says. “In healthcare, edge-based AI in wearables can alert medical professionals immediately when it detects anomalies, potentially saving lives.” And a clear win for AI and edge computing is within smart cities, says Bizagi’s Vázquez. There are numerous ways AI models at the edge could help beyond simply controlling traffic lights, he says, such as citizen safety, autonomous transportation, smart grids, and self-healing infrastructures. To his point, experiments with AI are already being carried out in cities such as Bahrain, Glasgow, and Las Vegas to enhance urban planning, ease traffic flow, and aid public safety. Self-administered, intelligent infrastructure is certainly top of mind for Dairyland’s Melby since efforts within the energy industry are underway to use AI to meet emission goals, transition into renewables, and increase the resilience of the grid.


Deepfake detection is a continuous process of keeping up with AI-driven fraud: BioID

BioID is part of the growing ecosystem of firms offering algorithmic defenses to algorithmic attacks. It provides an automated, real-time deepfake detection tool for photos and videos that analyzes individual frames and video sequences, looking for inter-frame or video codec anomalies. Its algorithm is the product of a German research initiative that brought together a number of institutions across sectors to collaborate on deepfake detection strategy. But it is also continuing to refine its neural network to keep up with the relentless pace of AI fraud. “We are in an ongoing fight of AI against AI,” Freiberg says. “We can’t just lean back and relax and sell what we have. We’re continuously working on increasing the accuracy of our algorithms.” That said, Freiberg is not only offering doom and gloom. She points to the Ukrainian Ministry of Foreign Affairs AI ambassador, Victoria Shi, as an example of deepfake technology used with non-fraudulent intention. The silver lining is reflected in the branding of BioID’s “playground” for AI deepfake testing. At playground.bioid.com, users can upload media to have BioID judge whether or not it is genuine.


How Manufacturing Best Practices Shape Software Development

Manufacturers rely on bills of materials (BOMs) to track every component in their products. This transparency enables them to swiftly pinpoint the source of any issues that arise, ensuring they have a comprehensive understanding of their supply chain. In software, this same principle is applied through software bills of materials (SBOMs), which list all the components, dependencies and licenses used in a software application. SBOMs are increasingly becoming critical resources for managing software supply chains, enabling developers and security teams to maintain visibility over what’s being used in their applications. Without an SBOM, organizations risk being unaware of outdated or vulnerable components in their software, making it difficult to address security issues. ... It’s nearly impossible to monitor open source components manually at scale. But with software composition analysis, developers can automate the process of identifying security risks and ensuring compliance. Automation not only accelerates development but also reduces the risk of human error, so teams can manage vast numbers of components and dependencies efficiently.
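
As a small illustration of how an SBOM is consumed, the sketch below reads a CycloneDX-style JSON file, lists each component with its declared licenses, and flags anything on a known-vulnerable list; the file name and the vulnerable-package set are placeholders, and a real pipeline would query an advisory feed instead.

```python
# Sketch: read a CycloneDX-style JSON SBOM and flag components that appear on a
# known-vulnerable list. File name and the vulnerable-package set are illustrative.
import json

KNOWN_VULNERABLE = {("log4j-core", "2.14.1"), ("openssl", "1.1.1k")}

with open("sbom.cyclonedx.json") as f:
    sbom = json.load(f)

for component in sbom.get("components", []):
    name, version = component.get("name"), component.get("version")
    licenses = [entry.get("license", {}).get("id") for entry in component.get("licenses", [])]
    flag = "  <-- known vulnerable" if (name, version) in KNOWN_VULNERABLE else ""
    print(f"{name} {version} licenses={licenses}{flag}")
```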


Striking The Right Balance Between AI & Innovation & Evolving Regulation

The bottom line is that integrating AI comes with complex challenges to how an organisation approaches data privacy. A significant part of this challenge relates to purpose limitation – specifically, the disclosure provided to consumers regarding the purpose(s) for data processing and the consent obtained. To tackle this hurdle, it’s vital that organisations maintain a high level of transparency that discloses to users and consumers how the use of their data is evolving as AI is integrated. ... Just as the technology landscape has evolved, so have consumer expectations. Today, consumers are more conscious of and concerned with how their data is used. Adding to this, nearly two-thirds of consumers worry about AI systems lacking human oversight, and 93% believe irresponsible AI practices damage company reputations. As such, it’s vital that organisations are continuously working to maintain consumer trust as part of their AI strategy. With this said, there are many consumers who are willing to share their data as long as they receive a better personalised customer experience, showcasing that this is a nuanced landscape that requires attention and balance.


WasmGC and the future of front-end Java development

The approach being offered by the WasmGC extension is newer. The extension provides a generic garbage collection layer that your software can refer to; a kind of garbage collection layer built into WebAssembly. Wasm by itself doesn’t track references to variables and data structures, so the addition of garbage collection also implies introducing new “typed references” into the specification. This effort is happening gradually: recent implementations support garbage collection on “linear” reference types like integers, but complex types like objects and structs have also been added. ... The performance potential of languages like Java over JavaScript is a key motivation for WasmGC, but obviously there’s also the enormous range of available functionality and styles among garbage-collected platforms. The possibility for moving custom code into Wasm, and thereby making it universally deployable, including to the browser, is there. More broadly, one can’t help but wonder about the possibility of opening up the browser to other languages beyond JavaScript, which could spark a real sea change in the software industry. It’s possible that loosening JavaScript’s monopoly on the browser will instigate a renaissance of creativity in programming languages.


Mind Your Language Models: An Approach to Architecting Intelligent Systems

The reason we wanted a smaller model that's adapted to a certain task is that it's easier to operate, and when you're running LLMs, it's going to be much more economical, because you can't run massive models all the time; it's very expensive and takes a lot of GPUs. Currently, we're struggling with getting GPUs in AWS. We searched all of EU Frankfurt, Ireland, and North Virginia. It's seriously a challenge now to get big GPUs to host your LLMs. The second part of the problem is, we started getting data. It's high quality. We started improving the knowledge graph. The one thing that is interesting when you think about semantic search is that when people interact with your system, even if they're working on the same problem, they don't end up using the same language. That means you need to be able to translate or understand the range of language your users actually use to interact with your system. ... We converted these facts with all of their synonyms, with all of the different ways one could potentially ask for this piece of data, and put everything into the knowledge graph itself. You could use LLMs to generate training data for your smaller models. 
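
A stripped-down sketch of the synonym idea: each canonical fact in the knowledge graph carries the phrasings users are likely to employ, and a lookup normalizes whatever wording actually arrives; the facts and synonym lists are invented for illustration. The same expanded pairs can also double as training data for a smaller, task-adapted model, as the excerpt suggests.

```python
# Sketch: the same fact is reachable through many phrasings, so each canonical node
# carries its synonyms and a lookup normalizes the user's wording.
# Facts and synonym lists are illustrative placeholders.
CANONICAL_FACTS = {
    "quarterly_revenue": {
        "synonyms": {"quarterly revenue", "q revenue", "revenue this quarter", "sales for the quarter"},
        "value": "EUR 4.2M",
    },
    "churn_rate": {
        "synonyms": {"churn rate", "customer attrition", "cancellation rate"},
        "value": "3.1%",
    },
}

def resolve(user_phrase: str):
    phrase = user_phrase.strip().lower()
    for key, node in CANONICAL_FACTS.items():
        if phrase in node["synonyms"]:
            return key, node["value"]
    return None, None  # a real system would fall back to embedding similarity here

print(resolve("customer attrition"))   # ('churn_rate', '3.1%')
```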



Quote for the day:

"You may only succeed if you desire succeeding; you may only fail if you do not mind failing." -- Philippos

Daily Tech Digest - October 15, 2024

The NHI management challenge: When employees leave

Non-human identities (NHIs) support machine-to-machine authentication and access across software infrastructure and applications. These digital constructs enable automated processes, services, and applications to authenticate and perform tasks securely, without direct human intervention. Access is granted to NHIs through various types of authentication, including secrets such as access keys, certificates and tokens. ... When an employee exits, secrets can go with them. Those secrets – credentials, NHIs and associated workflows – can be exfiltrated from mental memory, recorded manually, stored in vaults and keychains, on removable media, and more. Secrets that have been exfiltrated are considered “leaked.” ... An equally great risk is that employees, especially developers, create, deploy and manage secrets as part of software stacks and configurations, as one-time events or in regular workflows. When they exit, those secrets can become orphans, whose very existence is unknown to colleagues or to tools and frameworks. ... The lifecycle of NHIs can stretch beyond the boundaries of a single organization, encompassing partners, suppliers, customers and other third parties. 
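
One practical offboarding control this points to is cross-referencing secret ownership against departures, so credentials are rotated or reassigned rather than orphaned. A minimal sketch, with the inventory format and the departed-user feed as illustrative assumptions:

```python
# Sketch: flag non-human-identity secrets whose recorded owner has left, so they
# can be rotated or reassigned instead of becoming orphans.
from datetime import date

secrets_inventory = [
    {"id": "svc-deploy-key",    "owner": "j.doe",   "last_rotated": date(2023, 11, 2)},
    {"id": "billing-api-token", "owner": "a.singh", "last_rotated": date(2024, 6, 14)},
]
departed_users = {"j.doe"}   # e.g., exported from the HR system on offboarding

def orphaned_secrets(inventory, departed):
    return [s for s in inventory if s["owner"] in departed]

for secret in orphaned_secrets(secrets_inventory, departed_users):
    print(f'rotate or reassign {secret["id"]} (owner {secret["owner"]} has left, '
          f'last rotated {secret["last_rotated"]})')
```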


How Ernst & Young’s AI platform is ‘radically’ reshaping operations

We’re seeing a new wave of AI roles emerging, with a strong focus on governance, ethics, and strategic alignment. Chief AI Officers, AI governance leads, knowledge engineers and AI agent developers are becoming critical to ensuring that AI systems are trustworthy, transparent, and aligned with both business goals and human needs. Additionally, roles like AI ethicists and compliance experts are on the rise, especially as governments begin to regulate AI more strictly. These roles go beyond technical skills — they require a deep understanding of policy, ethics, and organizational strategy. As AI adoption grows, so too will the need for individuals who can bridge the gap between the technology and the focus on human-centered outcomes.” ... Keeping humans at the center, especially as we approach AGI, is not just a guiding principle — it’s an absolute necessity. The EU AI Act is the most developed effort yet in establishing the guardrails to control the potential impacts of this technology at scale. At EY, we are rapidly adapting our corporate policies and ethical frameworks in order to, first, be compliant, but also to lead the way in showing the path of responsible AI to our clients.


The Truth Behind the Star Health Breach: A Story of Cybercrime, Disinformation, and Trust

The email that xenZen used as “evidence” was forged. The hacker altered the HTML code of an email using the common “inspect element” function—an easy trick to manipulate how a webpage appears. This allowed him to make it seem as though the email came directly from the CISO’s official account. ... XenZen’s attack demonstrates how cybercriminals are evolving. They are using psychological warfare to create chaos. In this case, xenZen not only exploited a vulnerability but also fabricated evidence to frame the CISO. The security community needs to stay vigilant and anticipate attacks that may target not just systems but also individuals and organizations through disinformation. ... Making the CISO a scapegoat for security breaches without proper evidence is a growing concern. Organizations must understand the complexities of cybersecurity and avoid jumping to conclusions. Security teams should have the support they need, including legal protection and clear communication channels. Transparency is essential, but so is the careful handling of internal investigations before pointing fingers.


How CIOs and CTOs Are Bridging Cross-Functional Collaboration

Ashwin Ballal, CIO at software company Freshworks, believes that the organizations that fail to collaborate well across departments are leaving money on the table. “Siloed communications create inefficiencies, leading to duplicative work, poor performance, and a negative employee experience. In my experience as a CIO, prioritizing cross-departmental communication has been essential to overcoming these challenges,” says Ballal. His team continually reevaluates the tech stack, collaborating with leaders and users to confirm that the organization is only investing in software that adds value. This approach saves money and helps keep employees engaged by minimizing their interactions with outdated technology. He also uses employees as product beta testers, and their feedback impacts the product roadmap. ... “My recommendation for other CIOs and CTOs is to regularly meet with departmental leaders to understand how technology interacts across the organization. Sending out regular surveys can yield candid feedback on what’s working and what isn’t. Additionally fostering an environment where employees can experiment with new technologies encourages innovation and problem-solving.”


2025 Is the Year of AI PCs; Are Businesses Onboard?

With the rise of real-time computing needs and the proliferation of IoT devices, businesses are realizing the need to move AI closer to where the data is: at the edge. This is where AI PCs come into play. Unlike their traditional counterparts, AI PCs are integrated with neural processing units (NPUs) that enable them to handle AI workloads locally, reducing latency and providing a more secure computing environment. "The anticipated surge in AI PCs is largely due to the supply-side push, as NPUs will be included in more CPU vendor road maps," said Ranjit Atwal, senior research director analyst at Gartner. NPUs allow enterprises to move from reactive to proactive IT strategies. Companies can use AI PCs to predict IT infrastructure failures before they happen, minimizing downtime and saving millions in operational costs. NPU-integrated PCs also allow enterprises to process AI-related tasks, such as machine learning, natural language processing and real-time analytics, directly on the device without relying on cloud-based services. And with generative AI becoming part of enterprise technology stacks, companies investing in AI PCs are essentially future-proofing their operations, preparing for a time when gen AI capabilities become a standard part of business tools.
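
To make the on-device part concrete, here is a minimal sketch (not from the article) of local inference with ONNX Runtime that prefers an NPU-backed execution provider such as Qualcomm's QNN or DirectML and falls back to the CPU when neither is available. The model file name and the telemetry feature vector are hypothetical placeholders.

import numpy as np
import onnxruntime as ort

# Hypothetical model exported for local use; any small ONNX classifier would do.
MODEL_PATH = "failure_risk.onnx"

# Prefer NPU-backed execution providers (Qualcomm QNN, DirectML), fall back to CPU.
preferred = ["QNNExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]
providers = providers or ["CPUExecutionProvider"]

session = ort.InferenceSession(MODEL_PATH, providers=providers)

# Score a single telemetry sample entirely on the device, with no cloud round trip.
input_name = session.get_inputs()[0].name
telemetry = np.random.rand(1, 16).astype(np.float32)  # placeholder feature vector
risk = session.run(None, {input_name: telemetry})[0]
print("providers:", session.get_providers(), "risk score:", risk)

The same pattern would let a predictive-maintenance or real-time-analytics model run locally on an AI PC, which is exactly the latency and data-privacy argument the article makes.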


Australia’s Cyber Security Strategy in Action – Three New Draft Laws Published

Australia is following in the footsteps of other jurisdictions such as the United States by establishing a Cyber Review Board. The Board’s remit will be to conduct no-fault, post-incident reviews of significant cyber security incidents in Australia. The intent is to strengthen cyber resilience, by providing recommendations to Government and industry based on lessons learned from previous incidents. Limited information gathering powers will be granted to the Board, so it will largely rely on cooperation by impacted businesses. ... Mandatory security standards for smart devices - The Cyber Security Bill also establishes a framework under which mandatory security standards for smart devices will be issued. Suppliers of smart devices will be prevented from supplying devices which do not meet these security standards, and will be required to provide statements of compliance for devices manufactured in Australia or supplied to the Australian market. The Secretary of Home Affairs will be given the power to issue enforcement notices (including compliance, stop and recall notices) if a certificate of compliance for a specific device cannot be verified.


The Role of Zero Trust Network Access Tools in Ransomware Recovery

By integrating with existing identity providers, Zero Trust Network Access ensures that only authenticated and authorized users can access specific applications. This identity-driven approach, combined with device posture assessments and real-time threat intelligence, provides a robust defense against unauthorized access during a ransomware recovery. Moreover, ZTNA’s application-layer security means that even if a user’s credentials are compromised, the attacker would only gain access to specific applications rather than the entire network. This granular access control is crucial in containing ransomware attacks and preventing lateral movement across the network. ... As a cloud-native solution, ZTNA can easily scale to meet the demands of organizations of all sizes, from small businesses to large enterprises. This scalability is particularly valuable during a ransomware recovery, where the need for secure access may fluctuate based on the number of systems and users involved. ZTNA’s flexibility also allows it to integrate with various IT environments, including hybrid and multi-cloud infrastructures. This adaptability ensures that organizations can deploy ZTNA without the need for significant changes to their existing setups, making it an ideal solution for dynamic environments.
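
To illustrate the granular, per-application control described above, here is a minimal, hypothetical sketch of the kind of decision a ZTNA policy engine makes, combining identity entitlements, device posture, and a threat-intelligence risk score. The application names, users, and threshold are illustrative assumptions, not any particular product's API.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    app: str
    device_compliant: bool   # e.g., disk encryption on, EDR agent healthy
    risk_score: float        # 0.0 (clean) .. 1.0 (known-bad), from a threat intel feed

# Hypothetical per-application entitlements; in practice these come from the identity provider.
APP_ENTITLEMENTS = {
    "backup-console": {"alice", "bob"},
    "hr-portal": {"carol"},
}

def authorize(req: AccessRequest, risk_threshold: float = 0.7) -> bool:
    """Grant access to one application only if identity, device posture,
    and threat-intelligence checks all pass; never grant network-wide access."""
    if req.user_id not in APP_ENTITLEMENTS.get(req.app, set()):
        return False  # not entitled to this specific application
    if not req.device_compliant:
        return False  # device posture check failed
    if req.risk_score >= risk_threshold:
        return False  # real-time threat intelligence flags the session
    return True

# During ransomware recovery, a stolen credential only exposes the apps it is entitled to.
print(authorize(AccessRequest("alice", "backup-console", True, 0.1)))  # True
print(authorize(AccessRequest("alice", "hr-portal", True, 0.1)))       # False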


What Is Server Consolidation and How Can It Improve Data Center Efficiency?

Server consolidation is the process of migrating workloads from multiple underutilized servers into a smaller collection of servers. ... although server consolidation typically focuses on consolidating physical servers, it can also apply to virtual servers. For instance, if you have five virtual hosts running on the same physical server, you might consolidate them into just three virtual hosts. Doing so would reduce the resources wasted on hypervisor overhead, allowing you to maximize the return on investment from your server hardware. ... To determine whether server consolidation will reduce energy usage, you’ll have to calculate the energy needs of your servers. Typically, power supplies indicate how many watts of electricity they supply to servers. Using this number, you can compare how energy requirements vary between machines. Keep in mind, however, that actual energy consumption will vary depending on factors like CPU clock speed and how active server CPUs are. So, in addition to comparing the wattage ratings on power supplies, you should track how much electricity your servers actually consume, and how that metric changes before and after you consolidate servers.
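
As a back-of-the-envelope illustration of that comparison, the sketch below converts assumed average power draws into annual kilowatt-hours before and after consolidation. The wattages, server counts, and electricity price are assumptions for illustration; real figures should come from measured consumption rather than nameplate ratings.

# Rough estimate of the annual energy and cost impact of consolidating servers.
# All wattages and prices below are illustrative assumptions, not measurements.

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.12  # assumed electricity price in USD

def annual_kwh(avg_watts: float) -> float:
    """Convert an average power draw in watts to kilowatt-hours per year."""
    return avg_watts * HOURS_PER_YEAR / 1000

# Before: five lightly loaded servers drawing ~180 W each on average.
before = 5 * annual_kwh(180)
# After: two better-utilized servers drawing ~320 W each on average.
after = 2 * annual_kwh(320)

savings_kwh = before - after
print(f"Estimated savings: {savings_kwh:,.0f} kWh/year "
      f"(~${savings_kwh * PRICE_PER_KWH:,.0f}/year)")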


How a DDoS Botnet Is Used to Infect Your Network

The threat posed by DDoS botnets remains significant and complex. As these malicious networks grow more sophisticated, understanding their mechanisms and potential impacts is crucial for organizations. DDoS botnets not only facilitate financial theft and data breaches but also enable large-scale spam and phishing campaigns that can undermine trust and security. To effectively defend against these threats, organizations must prioritize proactive measures, including regular updates, robust security protocols, and vigilant monitoring of network activity. By implementing strategies to identify and mitigate botnet attacks, businesses can safeguard their systems and data from potential harm. Ultimately, a comprehensive understanding of how DDoS botnets operate—and the strategies to combat them—will empower organizations to navigate the challenges of cybersecurity and maintain a secure digital environment. As a CERT-In empanelled organization, Kratikal is equipped to enhance your understanding of potential risks. Our manual and automated Vulnerability Assessment and Penetration Testing (VAPT) services proficiently discover, detect, and assess vulnerabilities within your IT infrastructure. 
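
As one hypothetical illustration of what vigilant monitoring can mean in practice, the sketch below flags traffic that is both far above a normal request rate and spread across an unusually large number of source IPs, a rough signature of a botnet-driven flood. The window size, baseline rate, and thresholds are assumptions; production defenses typically rely on dedicated DDoS mitigation tooling rather than a heuristic like this.

from collections import Counter

def looks_like_ddos(requests: list[tuple[float, str]],
                    window_s: float = 10.0,
                    baseline_rps: float = 50.0,
                    spike_factor: float = 10.0,
                    min_sources: int = 200) -> bool:
    """Flag traffic that greatly exceeds the normal request rate and comes from
    an unusually large number of source IPs. `requests` is a list of
    (timestamp_seconds, source_ip) tuples."""
    if not requests:
        return False
    latest = max(t for t, _ in requests)
    recent = [(t, ip) for t, ip in requests if latest - t <= window_s]
    rps = len(recent) / window_s
    sources = Counter(ip for _, ip in recent)
    return rps > baseline_rps * spike_factor and len(sources) >= min_sources

# Simulated flood: 6,000 requests in under 10 seconds from 300 distinct sources.
flood = [(i * 0.0016, f"10.0.{i % 300 // 256}.{i % 300 % 256}") for i in range(6000)]
print(looks_like_ddos(flood))  # True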


Banks Must Try the Flip Side of Embedded Finance: Embedded Fintech

With a one-way-street perspective on embedded finance, the idea is that if payment volume is moving to tech companies then banks should power the back end of the tech experience. This is a good start but the threat from fintech companies to retail banks will only continue to deepen in the future. Customer adoption is higher than ever for some fintechs like Chime and Nubank, for example. A better approach would be for banks to use embedded fintech to improve customer experience by upgrading banks’ tech offerings to retain customers and grow within their customer base. Embedded fintech can help these organizations stay competitive technologically. ... There are many opportunities for innovation with embedded payroll. Banks are uniquely positioned to offer tailored payroll solutions that map to what small businesses today want. Payroll is complex and needs to be compliant to avoid hefty penalties. Embedded payroll lets banks offload costs, burdens and risks associated with payroll. Banks can offer faster payroll with less risk when they hold the accounts for employers and payees. They can also give business customers a fuller picture of their cash flow, offering them peace of mind. 



Quote for the day:

"Pull the string and it will follow wherever you wish. Push it and it will go nowhere at all." -- Dwight D. Eisenhower