Daily Tech Digest - October 21, 2024

Choosing the Right Tech Stack: The Key to Successful App Development

Choosing the right tech stack is critical because it will shape virtually every aspect of your development project. It determines which programming language you can use, as well as which modules, libraries, and other pre-built components you can take advantage of to speed development. It has implications for security, since some tech stacks are easier to secure than others. It influences application performance and operating costs because it plays an important role in determining how many resources the application will consume. And so on. ... Building a secure application is important in any context. But if you face special compliance requirements — for example, if you're building a finance or healthcare app, which are subject to special compliance mandates in many places — you may need to guarantee an extra level of security. To that end, make sure the tech stack you choose offers the security controls you need to meet your compliance requirements. A tech stack alone won't guarantee that your app is compliant, but choosing the right one makes it easier for you to build a compliant app.


What is hybrid AI?

Rather than relying on a single method, hybrid AI integrates various systems, such as rule-based symbolic reasoning, machine learning and deep learning, to create systems that can reason, learn, and adapt more effectively than any one approach could on its own. ... Symbolic AI, which is often referred to as rule-based AI, focuses on using logic and explicit rules to solve problems. It excels in reasoning, structured data processing and interpretability but struggles with handling unstructured data or large-scale problems. Machine learning (ML), on the other hand, is data-driven and excels at pattern recognition and prediction. It works well when paired with large datasets, identifying trends without needing explicit rules. However, ML models are often difficult to interpret and may struggle with tasks requiring logical reasoning. Hybrid AI that combines symbolic AI with machine learning draws on the reasoning power of symbolic systems as well as the adaptability of machine learning. For instance, a system could use symbolic AI to follow medical guidelines for diagnosing a patient, while machine learning analyses patient records and test results to offer individual recommendations.
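
To make the combination concrete, here is a minimal Python sketch of a hybrid pipeline in the spirit of the diagnosis example above: an explicit, auditable rule layer encodes guideline checks, and a simple statistical scorer stands in for a trained ML model. The patient fields, thresholds, and weights are hypothetical illustrations, not taken from any real guideline or model.

```python
# Minimal sketch of a hybrid AI pipeline: explicit symbolic rules are applied
# first (guideline compliance, interpretable), and a learned statistical score
# refines the recommendation. All features, thresholds, and weights here are
# hypothetical stand-ins for a real guideline set and a trained ML model.
import math
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    temperature_c: float
    heart_rate: int
    history_risk: float  # 0..1 summary of past records (stand-in for ML features)

def symbolic_rules(p: Patient) -> list[str]:
    """Rule-based layer: explicit, auditable guideline checks."""
    findings = []
    if p.temperature_c >= 38.0:
        findings.append("fever: guideline says order infection panel")
    if p.heart_rate > 120:
        findings.append("tachycardia: guideline says escalate to clinician")
    if p.age >= 65:
        findings.append("age >= 65: guideline says lower escalation threshold")
    return findings

def learned_risk_score(p: Patient) -> float:
    """Statistical layer: a logistic scorer standing in for a trained ML model."""
    z = 0.03 * p.age + 0.8 * (p.temperature_c - 37.0) + 0.02 * p.heart_rate \
        + 2.0 * p.history_risk - 5.0
    return 1.0 / (1.0 + math.exp(-z))

def hybrid_recommendation(p: Patient) -> dict:
    rules = symbolic_rules(p)
    risk = learned_risk_score(p)
    # The symbolic layer can escalate regardless of the learned score,
    # which keeps the combined system interpretable and auditable.
    escalate = bool(rules) or risk > 0.7
    return {"rule_findings": rules, "ml_risk": round(risk, 2), "escalate": escalate}

if __name__ == "__main__":
    print(hybrid_recommendation(Patient(age=70, temperature_c=38.4, heart_rate=95, history_risk=0.3)))
```

The design point is that the rule layer stays inspectable and can always override or escalate the learned score, preserving the interpretability the article attributes to symbolic systems.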


6 Roadblocks to IT innovation

Innovation doesn’t happen by happenstance, says Sean McCormack, a seasoned tech exec who has led innovation efforts at multiple companies. True, someone might have an idea that seemingly comes out of the blue, but that person needs a runway to turn that inspiration into innovation that takes flight. That runway is missing in a lot of organizations. “Oftentimes there’s no formal process or approach,” McCormack says. Consequently, inspired workers must try to muscle through their bright ideas as best they can; they often fail due to the lack of support and structure that would bring the money, sponsors, and skills needed to build and test them. “You have to be purposeful with how you approach innovation,” says McCormack, now CIO at First Student, North America’s largest provider of student transportation. ... Taking a purposeful approach enables innovation in several ways, McCormack explains. First, it prioritizes promising ideas and funnels resources to those ideas, not weaker proposals. It also ensures promising ideas get attention rather than being put on the back burner while everyone deals with day-to-day tasks. And it prevents turf wars between groups, so, for example, a business unit won’t run away with an innovation that IT proposed.


Cyber Criminals Hate Cybersecurity Awareness Month

In the world of enterprises, the expectations for restoring data and backing up data at multi-petabyte scale have changed. IT teams need to increase next-generation data protection capabilities, while reducing overall IT spending. It gets even more complicated when you consider all the applications, databases, and file systems that generate different types of workloads. No matter what, the business needs the right data at the right time. To deliver this consistency, the data needs to be secured. Next-generation data protection starts when the data lands in the storage array. There needs to be high reliability with 100% availability. There also needs to be data integrity. Each time data is accessed, the storage system should check and verify the data to ensure the highest degree of data integrity. Cyber resilience best practices require that you ensure data validity, as well as near-instantaneous recovery of primary storage and backup repositories, regardless of the size. This accelerates disaster recovery when a cyberattack happens. Greater awareness of best practices in cyber resilience would be one of the crowning achievements of this October as Cybersecurity Awareness Month. Let’s make it so.
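
As a toy illustration of the verify-on-every-access idea, the following Python sketch stores a SHA-256 digest when an object is written and recomputes it on every read. Real storage arrays do this at the controller or filesystem layer; the file names here are hypothetical.

```python
# Minimal sketch of per-object integrity checking: record a SHA-256 digest at
# write time and re-verify it on every read. This only illustrates the
# principle that production storage systems implement in firmware/controllers.
import hashlib
import json
from pathlib import Path

def _meta(path: Path) -> Path:
    return path.with_name(path.name + ".sha256.json")

def write_with_checksum(path: Path, data: bytes) -> None:
    path.write_bytes(data)
    _meta(path).write_text(json.dumps({"sha256": hashlib.sha256(data).hexdigest()}))

def read_verified(path: Path) -> bytes:
    data = path.read_bytes()
    expected = json.loads(_meta(path).read_text())["sha256"]
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected:
        raise IOError(f"integrity check failed for {path}: expected {expected}, got {actual}")
    return data

if __name__ == "__main__":
    obj = Path("backup_block.bin")  # hypothetical object name
    write_with_checksum(obj, b"payroll snapshot 2024-10-21")
    print(len(read_verified(obj)), "bytes read and verified")
```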


6 Strategies for Maximizing Cloud Storage ROI

Rising expenses in cloud data storage have prompted many organizations to reconsider their strategies, leading to a trend of repatriation as enterprises seek more control during these unpredictable economic times. A February 2024 Citrix poll revealed that 94% of organizations had shifted some workloads back to on-premises systems, driven by concerns over security, performance, costs, and compatibility. ... Common tactics such as re-architecting applications, managing cloud sprawl and monitoring spend with the tools each cloud provides are a good first step. However, these methods are not the full picture. Storage optimization is an integral piece. Focusing on cloud storage costs first is a smart strategy since storage constitutes a large chunk of the overall spend. More than half of IT organizations (55%) will spend more than 30% of their IT budget on data storage and backup technology, according to our recent State of Unstructured Data Management report. The reality is that most organizations don’t have a clear idea of current and predicted storage costs. They do not know how to economize, how much data they have, or where it resides.


As Software Code Proliferates, Security Debt Becomes a More Serious Threat

As AI-generated code proliferates, it compounds an already common problem, filling code bases with insecure code that will likely become security debt, increasing the risks to organizations. Just like financial debt, security debt can accrue quickly over time, the result of organizations compromising security measures in favor of convenience, speed or cost-cutting measures. Security debt, introduced by both first-party and third-party code, affects organizations of all sizes. More than 70% of organizations have security debt ingrained in their systems — and nearly half have critical debt. Over time, this accumulated debt poses serious risks because, as with financial debt, the bill will become due — potentially in the form of costly and consequential security breaches that can put an organization's data, reputation and overall stability at stake. ... Amid the dark clouds gathering over security debt, there is one silver lining. The number of high-severity flaws in organizations has been cut in half since 2016, which is clear evidence that organizations have made some progress in implementing secure software practices. It also demonstrates the tangible impact of quickly remediating critical security debt.


Why Liability Should Steer Compliance with the Cyber Security and Resilience Bill

First and foremost, the regulations are likely to involve an overhaul that will require a management focus. In the case of NIS2, for example, the board is tasked with taking responsibility for and maintaining oversight of the risk management strategy. This will require management bodies to undergo training themselves as well as to arrange training for their employees in order to equip themselves with sufficient knowledge and skills to identify risks and assess cybersecurity risk management practices. Yet NIS2 also breaks new ground in that it not only places responsibility for oversight of the risk strategy firmly at the feet of the board but goes on to state that individuals could be held personally liable if they fail to exercise those responsibilities. Under Article 32, authorities can temporarily prohibit any person responsible for discharging managerial responsibilities at CEO or a similar level from exercising managerial functions – in other words, they can be suspended from office. We don’t know if the Cyber Security and Resilience Bill will take a similar tack, but NIS2 is by no means alone in this approach.


Tackling operational challenges in modern data centers

Supply chain bottlenecks continue to plague data centers, as shortages of critical components and materials lead to delays in shipping, sliding project timelines, and increased costs for customers. Many data center operators have become unable to meet their need for affected equipment such as generators, UPS batteries, transformers, servers, building materials, and other big-ticket items. This gap in availability is leading many to settle for any readily available items, even if not from their preferred vendor. ... The continuous heavy power consumption of data centers can strain local electrical utility systems with limited supply or transmission capacity. This raises the question of whether areas heavily populated with data centers, like Northern Virginia, Columbus, and Pittsburgh, have enough electricity capacity, and whether they should only be permitted to use a certain percentage of grid power. ... Like the rest of the world, data centers are now facing a climate crisis as temperatures rise and extreme weather events intensify. Data centers are also seeking ways to increase their power load and serve higher client demand, without significantly increasing their electricity and emissions burdens.


The AI-driven capabilities transforming the supply chain

In today’s supply chain environment, there really is no room for disruption — be it labor shortages, geopolitical strife or malfunctions within manufacturing. To keep up with demand, supply chain teams are focused on continuous improvement and finding ways to remove the burden on expensive manual labor in favor of automated, digital solutions. When faulty products come off the production line, the problem must be addressed quickly. AI can accelerate the resolution process faster than human labor in many instances — preventing production standstills and even catching errors before they occur. Engineers who are creating a product can lean on these insights too, using AI to assess all the errors that have happened in the past to make sure that they don’t happen in the future. ... Through camera footage and visual inspections, AI models can help detect errors, faults or defects in equipment before they cause failures. If the technology identifies an issue — or predicts the need for maintenance — teams can arrange for a technician to perform repairs. This predictive maintenance minimizes unplanned outages, reduces disruptions across the supply chain and optimizes asset performance.
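
A minimal sketch of the predictive-maintenance idea, assuming a simple numeric sensor stream rather than camera footage: keep a rolling window of readings per machine and flag values that drift beyond a z-score threshold so a technician can be scheduled. The window size, threshold, and data are hypothetical; production systems would train models on real telemetry.

```python
# Minimal sketch of sensor-based predictive maintenance: keep a rolling window
# of readings and flag values that drift beyond a z-score threshold so a
# technician can be scheduled before a failure. All values are hypothetical.
from collections import deque
from statistics import mean, stdev

class DriftDetector:
    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.readings = deque(maxlen=window)
        self.z_threshold = z_threshold

    def update(self, value: float) -> bool:
        """Return True if the new reading looks anomalous."""
        anomalous = False
        if len(self.readings) >= 10:
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        self.readings.append(value)
        return anomalous

if __name__ == "__main__":
    detector = DriftDetector()
    stream = [20.0 + 0.1 * (i % 5) for i in range(60)] + [27.5]  # sudden vibration spike
    for i, reading in enumerate(stream):
        if detector.update(reading):
            print(f"reading {i}: {reading} flagged -> schedule maintenance check")
```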


What makes a great CISO

Security settings were once viewed as binary — on or off — but today, security programs need to be designed to help organizations adapt and respond with minimal impact when incidents occur. Response and resilience planning now involves cybersecurity and business operations teams, requiring the CISO to engage across the organization, especially during incidents. ... In the past, those with a SecOps background often focused on operational security, while those with a GRC background leaned toward prioritizing compliance to manage risk, according to Paul Connelly, a former CISO who is now a board advisor, independent director and CISO mentor. “Infosec requires a base competence in technology, but a CISO doesn’t have to be an engineer or developer,” says Connelly. A broad understanding of infosec responsibilities is needed, but the CISO can come from any part of the team, including IT or even internal audit. Exposure to different industries and companies brings a valuable diversity of thinking. Above all, modern CISOs must prioritize aligning security efforts with business objectives. “Individuals who have zig-zagged through an organization, getting wide exposure, are better prepared than someone who rose through the ranks focused in SecOps or another single area of focus,” says Connelly.



Quote for the day:

"The great leaders are like best conductors. They reach beyond the notes to reach the magic in the players." -- Blaine Lee

Daily Tech Digest - October 20, 2024

6 Strategies for Overcoming the Weight of Process Debt

While technical debt is a more familiar concept stemming from software development that describes the cost of taking shortcuts or using quick fixes in code, process debt relates to inefficiencies and redundancies within organizational workflows and procedures. Process debt can also have far-reaching effects that are often less obvious to business leaders, making it an insidious force that can silently undermine business operations. ... Rather than simply adding a new technology into an old process or duplicating legacy steps in a new application, organizations need to undertake a detailed audit of existing processes to uncover inefficiencies, redundancies, and inaccuracies that contribute to process debt. This audit should involve a systematic review of all workflows, procedures, and operational activities to identify areas where performance is falling short or where resources are being wasted. To gain a deeper understanding, leverage process mapping tools to create visual representations of workflows. These tools allow you to document each step of a process, highlight how tasks flow between different departments or systems, and uncover hidden bottlenecks or points of friction.


Domain-specific GenAI is Coming to a Network Near You

Now, we're seeing domain-specific models crop up. These are specialized models that focus on some industry or incorporate domain best practices that can be centrally trained and then deployed and fine-tuned by organizations. They are built on specific knowledge sets rather than the generalized corpus of information on which conversational AI is trained. ... By adopting domain-specific generative AI, companies can achieve more accurate and relevant outcomes, reducing the risks associated with general-purpose models. This approach not only enhances productivity but also aligns AI capabilities with specific business needs. ... The question now is whether this specialization can be applied to domains like networking, security, and application delivery. Yes, but no. The truth is that predictive (classic) AI is going to change these technical domains forever. But it will do so from the inside-out; that is, predictive AI will deliver real-time analysis of traffic that enables an operational AI to act. That may well be generative AI if we are including agentic AI in that broad category. But GenAI will have an impact on how we operate networking, security, and application delivery. 


The human factor: How companies can prevent cloud disasters

A company’s post-mortem process reveals a great deal about its culture. Each of the top tech companies requires teams to write post-mortems for significant outages. The report should describe the incident, explore its root causes and identify preventative actions. The post-mortem should be rigorous and held to a high standard, but the process should never single out individuals to blame. Post-mortem writing is a corrective exercise, not a punitive one. If an engineer made a mistake, there are underlying issues that allowed that mistake to happen. Perhaps you need better testing, or better guardrails around your critical systems. Drill down to those systemic gaps and fix them. Designing a robust post-mortem process could be the subject of its own article, but it’s safe to say that having one will go a long way toward preventing the next outage. ... If engineers have a perception that only new features lead to raises and promotions, reliability work will take a back seat. Most engineers should be contributing to operational excellence, regardless of seniority. Reward reliability improvements in your performance reviews. Hold your senior-most engineers accountable for the stability of the systems they oversee.


Ransomware siege: Who’s targeting India’s digital frontier?

Small and medium-sized businesses (SMBs) are often the most vulnerable. This past July, a ransomware attack forced over 300 small Indian banks offline, cutting off access to essential financial services for millions of rural and urban customers. This disruption has severe consequences in a country where digital banking and online financial services are becoming lifelines for people’s day-to-day transactions. According to a report by Kaspersky, 53% of Indian SMBs experienced ransomware attacks in 2023, with 559 million attacks occurring between April and May of this year, making them the most targeted segment. ... For SMBs, the cost of paying a ransom, retrieving proprietary data, returning to full operations, and recovering lost revenue can be too much to bear. For this reason, many businesses opt to pay the ransom, even when there is no guarantee that their data will be fully restored. The Indian financial sector, in particular, has been a favourite target. This year the National Payments Corporation of India (NPCI), which runs the country’s digital payment systems, was forced to take systems offline temporarily due to an attack. Beyond the financial impact, these incidents erode trust in India’s push for a digital-first economy, impacting the country’s progress toward digital banking adoption.


What AMD and Intel’s Alliance Means for Data Center Operators

AMD and Intel’s alliance was a surprise for many. But industry analysts said their partnership makes sense and is much needed, given the threat that Arm poses in both the consumer and data center space. While x86 processors still dominate the data center space, Arm has made inroads with cloud providers Amazon Web Services, Google Cloud and Microsoft Azure building their own Arm-based CPUs and startups like Ampere having entered the market in recent years. Intel and AMD’s partnership confirms how strong Arm is as a platform in the PC, data center and smartphone markets, the Futurum Group's Newman said. But the two giant chipmakers still have the advantage of having a huge installed base and significant market share. Through the new x86 advisory group, AMD and Intel can benefit by making it easier for data center operators to leverage x86, he said. “This partnership is about the experience of the x86 customer base, trying to make it stickier and trying to give them less reason to potentially move off of the platform is valuable,” Newman said. “x86’s longevity will benefit meaningfully from less complexity and making it easier for customers.”


Cyber resilience is improving but managing systemic risk will be key

“Cyber insurance is recognised as a core component of a robust cyber risk management strategy. While we have seen fluctuations in cyber rates and capacity over the last five years, more recently we have seen rates softening in the market,” Cotelle said. “The emergence and adoption of AI has clear potential to revolutionise how businesses operate, which will create new opportunities but also new exposures. “In the cyber risk context, AI is a double-edged sword. First, it can be exploited by threat actors to conduct more sophisticated attacks between agencies to address ransomware,” he said. ... He stressed, however, that one of the biggest challenges facing the cyber market is how it understands and manages systemic cyber risks. He said there is a case for considering the use of reinsurance pools and public/private partnerships to do this. “The continued attractiveness of the cyber insurance solution is paramount to the sustainability and growth of the market. “In recent years, we have seen work by insurers to clarify particular aspects of coverage relating to areas such as cyber-related property damage, cyber war or infrastructure which has led to coverage restrictions.”


Cyber resilience vs. cybersecurity: Which is more critical?

A common misconception is that cyber resilience means strong cybersecurity and that the organization won’t be compromised because its defenses are impenetrable. No defense is ever 100 percent secure, because IT products have flaws, and cybercriminals and nation-state-sponsored threat actors are continually changing their tactics, techniques and procedures (TTPs) to take advantage of any weaknesses they can find. And, of course, any organization with cyber resilience still needs quality cybersecurity in the first place. Resilience isn’t promising that bad things won’t happen; resilience promises that when they do, the organization can overcome that and continue to thrive. Cybersecurity is one of the foundations upon which resilience stands. Although cyber threats have increased in frequency and sophistication in recent years, there’s a huge amount that businesses in every sector can do to reduce the chances of being compromised and to prepare for the worst. The investment in time, energy and resources to prepare for a cyber incident is well worth it for the results you’ll see. Being cyber resilient is becoming a selling point as well.


Building Digital Resilience: Insider Insights for a Safer Cyber Landscape

These “basics” sound simple and are not difficult to implement, but we (IT, Security teams, and the Business) routinely fail to do so. We tend to focus on the fancy new tool, the shiny new dashboard, quarterly profits, or even the latest analytical application. Yes, these are important and have their place, but we should ensure we have the “basics” down to protect the business so it can focus on profit and growth. Using patching as an example, if we can patch our prioritized vulnerabilities promptly, we reduce our threat landscape, which, in turn, offers attackers fewer doors and windows into our environment. The term may seem a little dated, but defense in depth is a solid method used to defend our often-porous environments. Using multiple levels of security, such as strong passwords, multi-factor authentication, resilience training, and patching strategies, makes it harder for threat actors, so they tend to move to another target with weaker defenses. ... In an increasingly digital world, robust recovery capabilities are not just a safety net but a strategic advantage and a tactical MUST. The actions taken before and after a breach are what truly matter to reduce the costliest impact—business interruption.
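
To illustrate the "prioritized vulnerabilities" point, here is a small Python sketch that ranks a patch backlog by a simple composite of severity, exposure, and asset criticality. The weighting scheme and records are hypothetical, not a standard scoring model.

```python
# Minimal sketch of patch prioritisation: rank open vulnerabilities by a simple
# composite of severity, internet exposure, and asset criticality so the
# riskiest items are remediated first. Weights and records are hypothetical.
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    cvss: float             # 0.0 - 10.0 severity score
    internet_facing: bool
    asset_criticality: int  # 1 (low) - 5 (crown jewels)

def risk_score(v: Vuln) -> float:
    score = v.cvss * v.asset_criticality
    if v.internet_facing:
        score *= 1.5  # exposed systems shrink the attacker's required effort
    return score

def patch_order(vulns: list[Vuln]) -> list[Vuln]:
    return sorted(vulns, key=risk_score, reverse=True)

if __name__ == "__main__":
    backlog = [
        Vuln("CVE-A", 9.8, True, 5),   # placeholder identifiers
        Vuln("CVE-B", 7.5, False, 4),
        Vuln("CVE-C", 9.1, False, 2),
    ]
    for v in patch_order(backlog):
        print(f"{v.cve_id}: priority score {risk_score(v):.1f}")
```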


Information Integrity by Design: The Missing Piece of Values-Aligned Tech

To have any chance of fixing our dysfunctional relationship with information, we need solutions that can take on the powerful incentives, integration scale, and economic pull of the attention economy as we know it, and realign the market. One good example is the emerging platform Readocracy, designed from the outset with features that allow users to have much more control and context over their information experience. This includes offering users control over the algorithm, providing nudges to direct attention more mindfully, and providing information on how informed commenters are on the subjects they are commenting on. ... An information integrity by design initiative can focus on promoting the six components of information integrity outlined above so readers and researchers can make informed decisions about the integrity of the information provided. Government promotion and support can drive and support corporate adoption of the concept, much as it has for security by design, privacy by design, and, most recently, safety by design. ... Information integrity deserves fierce advocacy from governments, the intellectual ingenuity of civil society, and the creative muscle of industry.


The backbone of security: How NIST 800-88 and 800-53 compliance safeguards data centers

When discussing data center compliance, it’s important not to leave out a key player: the National Institute of Standards and Technology (NIST). NIST is a non-regulatory federal agency whose cybersecurity framework is among the most widely recognized and adopted in the industry, offering one of the most comprehensive and in-depth sets of framework controls. NIST’s mission is to educate citizens on information system security for all applications outside of national security, including industry, government, academia, and healthcare, on both a national and global scale. Its strict and robust standards and guidelines are widely recognized and adopted by data centers and government entities alike seeking to improve their processes, quality, and security. ... NIST 800-88 covers various types of media, including hard drives (HDDs), solid-state drives (SSDs), magnetic tapes, optical media, and other media storage devices. NIST 800-88 has quickly become the de facto standard for the U.S. government and is regularly referenced in federal data privacy laws. Moreover, NIST 800-88 guidelines have been increasingly adopted by private companies and organizations, especially data centers.



Quote for the day:

"To have long-term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley

Daily Tech Digest - October 19, 2024

DevSecOps: Building security into each stage of development

While open-source code is important and becoming invaluable, it’s difficult to know how well it has been maintained, Faus noted. A developer might incorporate third-party code and inadvertently introduce a vulnerability. DevSecOps allows security teams to flag that vulnerability and work with the development team to identify whether the code should be written differently or if the vulnerability is even dangerous. Ultimately, all parties can assure that they did everything they could to produce the most secure code possible. In both DevOps and DevSecOps, “the two primary principles are collaboration and transparency,” Faus said. Another core tenet is automation, which creates repeatability and reuse. If a developer knows how to resolve a specific vulnerability, they can reuse that fix across every other project with the same vulnerability. ... One of the biggest challenges in implementing security throughout the development cycle is the legacy mindset in how security is treated, Faus pointed out. Organizations must be willing to embrace cultural change and be open, transparent, and collaborative about fixing security issues. Another challenge lies in building in the right type of automation. “One of the first things is to make security a requirement for every new project,” Faus said.
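
As a small example of the "security as a requirement for every new project" idea, the sketch below is a reusable CI gate: it parses a project's requirements.txt and fails the build when a pinned dependency appears on an internal advisory list. The advisory entries are hypothetical; a real pipeline would query a software composition analysis tool or vulnerability feed rather than a hard-coded dictionary.

```python
# Minimal sketch of an automated, reusable dependency gate for CI: parse
# requirements.txt and fail the build if any pinned package matches an
# internal advisory list. The advisory data is an illustrative placeholder.
import sys
from pathlib import Path

ADVISORIES = {
    # package -> versions with known issues (placeholder data only)
    "examplelib": {"1.2.0", "1.2.1"},
}

def parse_requirements(path: Path) -> list[tuple[str, str]]:
    pins = []
    for raw in path.read_text().splitlines():
        line = raw.split("#")[0].strip()      # drop comments and whitespace
        if "==" in line:
            name, version = line.split("==", 1)
            pins.append((name.strip().lower(), version.strip()))
    return pins

def main(req_file: str = "requirements.txt") -> int:
    path = Path(req_file)
    if not path.exists():
        print(f"{req_file} not found; nothing to check")
        return 0
    findings = [
        (name, version)
        for name, version in parse_requirements(path)
        if version in ADVISORIES.get(name, set())
    ]
    for name, version in findings:
        print(f"BLOCKED: {name}=={version} is on the internal advisory list")
    return 1 if findings else 0  # a non-zero exit code fails the CI job

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "requirements.txt"))
```

Because the check lives in one script, the same control can be reused across every project, which is the repeatability-and-reuse benefit described above.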


Curb Your Hallucination: Open Source Vector Search for AI

Vector search—especially implementing a RAG approach utilizing vector data stores—is a stark alternative. Instead of relying on a traditional search engine approach, vector search uses the numerical embeddings of vectors to resolve queries. Therefore, searches examine a limited data set of more contextually relevant data. The results include improved performance, earned by efficiently utilizing massive data sets, and greatly decreased risk of AI hallucinations. At the same time, the more accurate answers that AI applications provide when backed by vector search enhance the outcomes and value delivered by those solutions. Combining both vector and traditional search methods into hybrid queries will give you the best of both worlds. Hybrid search ensures you cover all semantically related context, and traditional search can provide the specificity required for critical components ... Several open source technologies offer an easy on-ramp to building vector search capabilities and a path free from proprietary expenses, inflexibility, and vendor lock-in risks. To offer specific examples, Apache Cassandra 5.0, PostgreSQL (with pgvector), and OpenSearch are all open source data technologies that now offer enterprise-ready vector search capabilities and underlying data infrastructure well-suited for AI projects at scale.
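
A minimal Python sketch of the vector-plus-keyword idea: documents are ranked by cosine similarity of their embeddings, then intersected with a literal keyword match for precision. The four-dimensional "embeddings" are toy values; a real system would generate them with an embedding model and store them in one of the engines named above.

```python
# Minimal sketch of hybrid retrieval: rank documents by cosine similarity of
# (toy) embeddings, then keep only those that also match a literal keyword.
from math import sqrt

DOCS = {
    "doc1": ("Cassandra 5.0 adds vector search",   [0.9, 0.1, 0.0, 0.2]),
    "doc2": ("PostgreSQL pgvector indexing guide", [0.8, 0.2, 0.1, 0.1]),
    "doc3": ("Quarterly travel expense policy",    [0.0, 0.1, 0.9, 0.3]),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def hybrid_search(query_vec, keyword, top_k=2):
    # Vector stage: semantic ranking over all documents.
    ranked = sorted(DOCS.items(), key=lambda kv: cosine(query_vec, kv[1][1]), reverse=True)
    # Keyword stage: keep results that also mention the literal term.
    filtered = [(doc_id, text) for doc_id, (text, _) in ranked if keyword.lower() in text.lower()]
    return filtered[:top_k]

if __name__ == "__main__":
    print(hybrid_search(query_vec=[0.85, 0.15, 0.05, 0.15], keyword="vector"))
```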


Driving Serverless Productivity: More Responsibility for Developers

First, there are proactive controls, which prevent deployment of non-compliant resources by instilling best practices from the get-go. Second, there are detective controls, which identify violations in resources that are already deployed and then provide remediation steps. It’s important to recognize these controls must not be static. They need to evolve over time, just as your organization, processes, and production environments evolve. Think of them as checks that place more responsibility on developers to meet high standards, and also make it far easier for them to do so. Going further, a key -- and often overlooked -- part of any governance approach is its notification and supporting messaging system. As your policies mature over time, it is vitally important to have a sense of lineage. If we’re pushing for developers to take on more responsibility, and we’ve established that the controls are constantly evolving and changing, notifications cannot feel arbitrary or unsupported. Developers need to be able to understand the source of the standard driving the control and the symptoms of what they’re observing.
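
To make the proactive/detective distinction concrete, here is a hedged Python sketch of a detective control: it scans an inventory of already-deployed resources for policy violations and emits findings that name the policy behind each one, which supports the point about lineage in notifications. The resource records and policy identifiers are hypothetical.

```python
# Minimal sketch of a detective control: scan an inventory of deployed
# resources for policy violations and emit notifications that reference the
# policy behind each finding. Resource records and policy names are made up.
RESOURCES = [
    {"name": "orders-fn", "timeout_s": 30,  "env_encrypted": True,  "tags": {"owner": "team-a"}},
    {"name": "legacy-fn", "timeout_s": 900, "env_encrypted": False, "tags": {}},
]

POLICIES = [
    ("SEC-004: encrypt environment variables", lambda r: r["env_encrypted"]),
    ("OPS-011: declare an owner tag",          lambda r: "owner" in r["tags"]),
    ("OPS-017: keep function timeout <= 300s", lambda r: r["timeout_s"] <= 300),
]

def detect_violations(resources=RESOURCES, policies=POLICIES):
    findings = []
    for resource in resources:
        for policy_name, check in policies:
            if not check(resource):
                findings.append(
                    f"{resource['name']}: violates '{policy_name}' "
                    f"(detected post-deployment; see the policy doc for remediation)"
                )
    return findings

if __name__ == "__main__":
    for finding in detect_violations():
        print(finding)
```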


Mastering Observability: Unlocking Customer Insights

If we do something and the behaviour of our users changes in a negative way, if they start doing things slower, less efficiently, then we're not delivering value to the market. We're actually damaging the value we're delivering to the market. We're disrupting our users' flows. So a really good way to think about whether we are creating value or not is how is the behavior of our users, of our stakeholders or our customers changing as a result of us shipping things out? And this kind of behavior change is interesting because it is a measurement of whether we are solving the problem, not whether we're delivering a solution. And from that perspective, I can then offer five different solutions for the same behavior change. I can say, "Well, if that's the behavior change we want to create, this thing you proposed is going to cost five man-millennia to make, but I can do it with a shell script and it's going to be done tomorrow. Or we can do it with an Excel export or we can do it with a PDF or we can do it through a mobile website not building a completely new app". And all of these things can address the same behavior change.
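
A minimal sketch of measuring that behavior change rather than the delivery itself: compare a behavioral metric, such as median task completion time, before and after a release. The sample durations are hypothetical telemetry.

```python
# Minimal sketch of measuring a behaviour change rather than a delivery:
# compare how long users take to complete a task before and after a release.
# The sample durations (in seconds) are hypothetical telemetry.
from statistics import median

before_release = [48, 52, 47, 60, 55, 49, 51]
after_release  = [39, 41, 44, 38, 42, 40, 45]

def behaviour_shift(before, after):
    b, a = median(before), median(after)
    change = (a - b) / b * 100
    return b, a, change

if __name__ == "__main__":
    b, a, change = behaviour_shift(before_release, after_release)
    verdict = "users got faster (value delivered)" if change < 0 else "users got slower (value damaged)"
    print(f"median task time: {b}s -> {a}s ({change:+.1f}%) -> {verdict}")
```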


AI-generated code is causing major security headaches and slowing down development processes

The main priorities for DevSecOps in terms of security testing were the sensitivity of the information being handled, industry best practice, and easing the complexity of testing configuration through automation, all cited by around a third. Most survey respondents (85%) said they had at least some measures in place to address the challenges posed by AI-generated code, such as potential IP, copyright, and license issues that an AI tool may introduce into proprietary software. However, fewer than a quarter said they were ‘very confident' in their policies and processes for testing this code. ... The big conflict here appears to be security versus speed considerations, with around six-in-ten reporting that security testing significantly slows development. Half of respondents also said that most projects are still being added manually. Another major hurdle for teams is the dizzying number of security tools in use, the study noted. More than eight-in-ten organizations said they're using between six and 20 different security testing tools. This growing array of tools makes it harder to integrate and correlate results across platforms and pipelines, respondents noted, and is making it harder to distinguish between genuine issues and false positives.


How digital twins are revolutionising real-time decision making across industries

Despite the promise of digital twins, Bhonsle acknowledges that there are challenges to adoption. “Creating and maintaining a digital twin requires substantial investments in infrastructure, including sensors, IoT devices, and AI capabilities,” he points out. Security is another concern, particularly in industries like healthcare and energy, where compromised data streams could lead to life-threatening consequences. However, Bhonsle emphasises that the rewards far outweigh the risks. “As digital twin technology matures, it will become more accessible, even to smaller organisations, offering them a competitive edge through optimised operations and data-driven decisions.” ... Digital twins are transforming how businesses operate by providing real-time insights that drive smarter decisions. From manufacturing floors to operating rooms, and from energy grids to smart cities, this technology is reshaping industries in unprecedented ways. As Bhonsle aptly puts it, “The rise of digital twins signals a new era of efficiency and agility—an era where decisions are no longer based on assumptions but driven by data in real time.” As organisations embrace this evolving technology, they unlock new opportunities to optimise performance and stay ahead in a fast-changing world.


AI and tech in banking: Half the industry lags behind

Gupta emphasised that a superficial approach to digitalisation—what he called “putting lipstick on a pig”—is common in many institutions. These banks often adopt digital tools without rethinking the processes behind them, resulting in inefficiencies and missed opportunities for transformation. In addition, the culture of risk aversion in many financial institutions makes them slow to experiment with new technologies. According to a Deloitte survey, 62% of banking executives cited corporate culture as the biggest barrier to successful digital transformation. A fear of regulatory hurdles and data privacy issues also compounds this reluctance to fully embrace AI. ... The rise of fintech companies is also reshaping the financial landscape. Digital-first challengers like Revolut and Monzo are making waves by offering streamlined, customer-centric services that appeal to tech-savvy users. These companies, unencumbered by legacy systems, have been able to rapidly adopt AI, providing highly personalised products and services through their digital platforms. The UK fintech sector alone saw record investment in 2021, with $11.6 billion pouring into the industry, according to Innovate Finance. This influx of capital has enabled fintech firms to invest in AI technologies, providing stiff competition to traditional banks that are slower to adapt. 


The Era Of Core Banking Transformation Trade-offs Is Finally Over

There must be a better way than forcing banks to choose their compromises. Banking today needs a next-generation solution that blends the best of configuration neo cores – speed to market, lower cost, derisked migration – and combines it with the benefits of framework neo cores – full customization of products and even the core, with massive scale built in as standard. If banks and financial services aren’t forced to compromise because of their choice of cloud-native core solution, they can accelerate their innovation. Our research reveals that, while AI remains front of mind for many IT decision makers in banking and financial services, only one in three (32%) have so far integrated AI into their core solution. This is concerning. According to McKinsey, banking is one of the top four sectors set to capitalize on that market opportunity – but that forecast will remain a pipe dream if banks can’t integrate AI efficiently, securely and at massive scale. ... One thing is certain, whether configuration or framework, neo cores are not the final destination for banking. They have been a helpful stepping stone over the last decade to cloud-native technology, but banks and financial services now need a next-generation core technology that doesn’t demand so many compromises.


10 Risks of IoT Data Management

IoT data management faces significant security risks due to the large attack surface created by interconnected devices. Each device presents a potential entry point for cyberattacks, including data breaches and malware injections. Attackers may exploit vulnerabilities in device firmware, weak authentication methods, or unsecured network protocols. To mitigate these risks, implementing end-to-end encryption, device authentication, and secure communication channels is essential. ... IoT devices often collect sensitive personal information, which raises concerns about user privacy. The lack of transparency in how data is collected, processed, and shared can erode user trust, especially when data is shared with third parties without explicit consent. Addressing privacy concerns requires the anonymization and pseudonymization of personal data. ... The massive influx of data generated by IoT devices can overwhelm traditional storage systems, leading to data overload. Managing this data efficiently is a challenge, as continuous data generation requires significant storage capacity and processing power. To solve this, organizations can adopt edge computing, which processes data closer to the source, reducing the need for centralized storage. 
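
As a small illustration of device authentication and data integrity (one slice of the end-to-end picture), the following Python sketch signs each telemetry payload with an HMAC derived from a per-device key and verifies the tag before trusting the reading. The key and payload are hypothetical; real deployments would also rely on TLS, secure key storage, and replay protection.

```python
# Minimal sketch of authenticating IoT telemetry: each device signs its payload
# with an HMAC using a per-device key, and the backend verifies the tag before
# trusting the reading. Keys and payloads here are placeholders.
import hashlib
import hmac

DEVICE_KEYS = {"sensor-42": b"per-device-secret-provisioned-at-manufacture"}

def sign(device_id: str, payload: bytes) -> str:
    return hmac.new(DEVICE_KEYS[device_id], payload, hashlib.sha256).hexdigest()

def verify(device_id: str, payload: bytes, tag: str) -> bool:
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

if __name__ == "__main__":
    msg = b'{"device":"sensor-42","temp_c":21.7}'
    tag = sign("sensor-42", msg)
    print("genuine reading accepted:", verify("sensor-42", msg, tag))
    print("tampered reading accepted:", verify("sensor-42", msg.replace(b"21.7", b"99.9"), tag))
```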


Managing bank IT spending: Five questions for tech leaders

The demand for development resources and the need to manage tech debt are only expected to increase. Tech talent has never been cheap, and inflation is pushing up salaries. Cybersecurity threats are also becoming more urgent, demanding greater funds to address them. And figuring out how to integrate generative AI takes time, personnel, and money. Despite these competing priorities and challenges, bank IT leaders have an opportunity to make their mark on their organizations and position themselves as central to their success—if they can address some key problems. ... In our experience, IT leaders should never underestimate the importance of controlling and reducing tech debt whenever possible. Actions to correct course could include conducting frequent assessments to determine which areas are accumulating tech debt and developing plans to reduce it as much as possible. More than many other industries, banking is a hotbed of new app development. Leaders who address these key questions can ensure they are directing their talent and resources to game-changing app development that directly contributes to their bank’s bottom line.



Quote for the day:

“It's failure that gives you the proper perspective on success.” -- Ellen DeGeneres

Daily Tech Digest - October 18, 2024

Breaking Barriers: The Power of Cross-Departmental Collaboration in Modern Business

In an era of rapid change and increasing complexity, cross-departmental collaboration is no longer a luxury but a necessity. By dismantling silos, fostering trust, and leveraging technology, organizations can unlock their full potential, drive innovation, and enhance customer satisfaction. While industry leaders have shown the way, the journey to a truly collaborative culture requires sustained effort and adaptation. To embark on this collaborative journey, organizations must prioritize collaboration as a core value, invest in leadership development, empower employees, leverage technology, and measure progress. Creating a collaborative culture is like building a bridge between departments: it requires strong foundations, continuous maintenance, and a shared vision. By doing so, they can create a culture where innovation thrives, employees are engaged, and customers benefit from improved products and services. Looking ahead, successful organizations will not only embrace collaboration but also anticipate its evolution in response to emerging trends like remote work, artificial intelligence, and data privacy. By proactively addressing these challenges and opportunities, businesses can position themselves as leaders in the collaborative economy.


Singapore releases guidelines for securing AI systems and prohibiting deepfakes in elections

"AI systems can be vulnerable to adversarial attacks, where malicious actors intentionally manipulate or deceive the AI system," said Singapore's Cyber Security Agency (CSA). "The adoption of AI can also exacerbate existing cybersecurity risks to enterprise systems, [which] can lead to risks such as data breaches or result in harmful, or otherwise undesired model outcomes." "As such, AI should be secure by design and secure by default, as with all software systems," the government agency said. ... "The Bill is scoped to address the most harmful types of content in the context of elections, which is content that misleads or deceives the public about a candidate, through a false representation of his speech or actions, that is realistic enough to be reasonably believed by some members of the public," Teo said. "The condition of being realistic will be objectively assessed. There is no one-size-fits-all set of criteria, but some general points can be made." These encompass content that "closely match[es]" the candidates' known features, expressions, and mannerisms, she explained. The content also may use actual persons, events, and places, so it appears more believable, she added.


2025 and Beyond: CIOs' Guide to Stay Ahead of Challenges

As enterprises move beyond the "experiment" or the "proof of concept" stage, it is time to design and formalize a well-thought-out AI strategy that is tailored to their unique business needs. According to Gartner, while 92% of CIOs anticipate AI will be integrated into their organizations by 2025 - broadly driven by increasing pressure from CEOs and boards - 49% of leaders admit their organizations struggle to assess and showcase AI's value. That's where the strategy kicks in. ... Forward-looking CIOs are focused on using data for decision-making while tackling challenges related to its quality and availability. Data governance is a crucial aspect to deal with. As data systems become more interconnected, managing complexity is crucial. Going forward, CIOs will have to focus on optimizing current systems, raising data literacy, managing complexity and establishing strong governance. The importance of shifting IT from a cost center to a profit driver lies in focusing on data-driven revenue generation, said Eric Johnson ... CIOs should be able to communicate the strategic use of IT investment and present it as a core enabler for competitiveness.


5 Ways to Reduce SaaS Security Risks

It's important to understand what corporate assets are visible to attackers externally and, therefore, could be a target. Arguably, the SaaS attack surface extends to every SaaS, IaaS and PaaS application, account, user credential, OAuth grant, API, and SaaS supplier used in your organization—managed or unmanaged. Monitoring this attack surface can feel like a Sisyphean task, given that any user with a credit card, or even just a corporate email address, has the power to expand the organization's attack surface in just a few clicks. ... Single sign-on (SSO) provides a centralized place to manage employees' access to enterprise SaaS applications, which makes it an integral part of any modern SaaS identity and access governance program. Most organizations strive to ensure that all business-critical applications (i.e., those that handle customer data, financial data, source code, etc.) are enrolled in SSO. However, when new SaaS applications are introduced outside of IT governance processes, this makes it difficult to truly assess SSO coverage. ... Multi-factor authentication adds an extra layer of security to protect user accounts from unauthorized access. By requiring multiple factors for verification, such as a password and a unique code sent to a mobile device, it significantly decreases the chances of hackers gaining access to sensitive information. 
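
For readers unfamiliar with how that extra factor is typically generated, here is a minimal Python sketch of the standard TOTP construction (RFC 6238): both the authenticator app and the server derive a short-lived code from a shared secret and the current 30-second window. The base32 secret below is a placeholder, not a real credential.

```python
# Minimal sketch of a time-based one-time password (TOTP, RFC 6238): derive a
# short-lived code from a shared secret and the current 30-second time window,
# using HMAC-SHA1 with dynamic truncation (RFC 4226).
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation offset
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

if __name__ == "__main__":
    demo_secret = "JBSWY3DPEHPK3PXP"  # placeholder secret, not a real credential
    print("current one-time code:", totp(demo_secret))
```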


World’s smallest quantum computer unveiled, solves problems with just 1 photon

In the new study, the researchers successfully implemented Shor’s algorithm using a single photon by encoding and manipulating 32 time-bin modes within its wave packet. This achievement highlights the strong information-processing capabilities of a single photon in high dimensions. According to the team, with commercially available electro-optic modulators capable of 40 GHz bandwidth, it is feasible to encode over 5,000 time-bin modes on long single photons. While managing high-dimensional states can be more challenging than working with qubits, this work demonstrates that these time-bin states can be prepared and manipulated efficiently using a compact programmable fiber loop. Additionally, high-dimensional quantum gates can enhance manipulation, using multiple photons for scalability. Reducing the number of single-photon sources and detectors can improve the efficiency of counting coincidences over accidental counts. Research indicates that high-dimensional states are more resistant to noise in quantum channels, making time-bin-encoded states of long single photons promising for future high-dimensional quantum computing.
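
For context on what the algorithm actually computes, here is a classical Python sketch of the number theory behind Shor's algorithm for the small case N = 15: find the order r of a base a modulo N, then use gcd(a^(r/2) ± 1, N) to recover the factors. The quantum step realized photonically in the study replaces only the brute-force period search below; the rest is classical post-processing.

```python
# Classical sketch of the number theory behind Shor's algorithm for N = 15:
# find the order r of a modulo N, then recover factors from gcd(a^(r/2) ± 1, N).
# The quantum part of Shor's algorithm replaces only the brute-force
# period-finding loop below.
from math import gcd

def find_order(a: int, n: int) -> int:
    """Smallest r > 0 with a^r ≡ 1 (mod n), found by brute force (a coprime to n)."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_classical(n: int, a: int):
    common = gcd(a, n)
    if common != 1:
        return common, n // common        # lucky guess already shares a factor
    r = find_order(a, n)
    if r % 2 == 1:
        return None                        # odd period: pick another base
    x = pow(a, r // 2, n)
    if x == n - 1:
        return None                        # trivial root: pick another base
    p, q = gcd(x - 1, n), gcd(x + 1, n)
    return (p, q) if p * q == n else None

if __name__ == "__main__":
    print("order of 7 mod 15:", find_order(7, 15))   # 4
    print("factors of 15:", shor_classical(15, 7))   # (3, 5)
```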


Google creates the Mother of all Computers: One trillion operations per second and a mystery

The capability of exascale computing to handle massive amounts of data and run through simulations has created new avenues for scientific modeling. From mimicking black holes and the birth of galaxies to introducing newer and evolved treatments and diagnoses through customized genome mapping across the globe, this technology has the potential to burst open new frontiers of knowledge about the cosmos. While current supercomputers would otherwise spend years solving computations, exascale machines will pave the way to areas of knowledge that were previously uncharted. For instance, the exascale solution in astrophysics holds the prospect of modeling many phenomena, such as star and galaxy formation, with higher accuracy. For example, these simulations could reveal new detections of the fundamental laws of physics and be used to answer questions about the universe’s formation. In addition, in fields like particle physics, researchers could analyze data from high-energy experiments far more efficiently and perhaps discover more about the nature of matter in the universe. AI is another area set to benefit from exascale computing’s supercharge in performance. Present models of AI are very efficient, but current computing machines constrain them.


Taming the Perimeter-less Nature of Global Area Networks

The availability of data and intelligence from across the global span of the network is highly effective in helping ITOps teams understand all the component services and providers their business has exposure to or reliance on. It means being able to pinpoint an impending problem or the root cause of a developing issue within their global area network and to pursue remediation with the right third-party provider ... Certain traffic engineering actions taken on owned infrastructure can alter connectivity and performance by altering the path that traffic takes in the unowned infrastructure portion of the global area network. Consider these actions as adjustments to a network segment that is within your control, such as a network prefix adjustment or a BGP route change to bypass a route hijack happening downstream in the unowned Internet-based segment. These traffic engineering actions are manageable tasks that ITOps teams or their automated systems can execute within a global area network setup. While they are implemented in the parts of the network directly controlled by ITOps, their impact is designed to span the entire service delivery chain and its performance.
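
A small Python sketch of why prefix-level traffic engineering has the described reach: routers forward on the longest matching prefix, so adding or preferring a more-specific route changes the path traffic takes, even across segments you do not own. The table entries and next-hop names are hypothetical.

```python
# Minimal sketch of longest-prefix-match forwarding: a more-specific route
# announced during an incident overrides the aggregate for the addresses it
# covers. Routing table entries and next-hop names are hypothetical.
import ipaddress

ROUTING_TABLE = {
    "203.0.113.0/24": "transit-provider-A",    # original aggregate route
    "203.0.113.128/25": "scrubbing-center-B",  # more-specific route added during an incident
}

def select_next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    best = None
    for prefix, hop in ROUTING_TABLE.items():
        network = ipaddress.ip_network(prefix)
        # Longest matching prefix (largest prefixlen) wins.
        if dest in network and (best is None or network.prefixlen > best[0].prefixlen):
            best = (network, hop)
    return best[1] if best else "default-route"

if __name__ == "__main__":
    print(select_next_hop("203.0.113.10"))    # only the /24 matches -> transit-provider-A
    print(select_next_hop("203.0.113.200"))   # the /25 is more specific -> scrubbing-center-B
```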


Firms use AI to keep reality from unreeling amid ‘global deepfake pandemic’

Seattle-based Nametag has announced the launch of its Nametag Deepfake Defense product. A release quotes security technologist and cryptography expert Bruce Schneier, who says “Nametag’s Deepfake Defense engine is the first scalable solution for remote identity verification that’s capable of blocking the AI deepfake attacks plaguing enterprises.” And make no mistake, says Nametag CEO Aaron Painter: “we’re facing a global deepfake pandemic that’s spreading ransomware and disinformation.” The company cites numbers from Deloitte showing that over 50 percent of C-suite executives expect an increase in the number and size of deepfake attacks over the next 12 months. Deepfake Defense consists of three core proprietary technologies: Cryptographic Attestation, Adaptive Document Verification and Spatial Selfie. The first “blocks digital injection attacks and ensures data integrity using hardware-backed keystore assurance and secure enclave technology from Apple and Google.” The second “prevents ID presentation attacks using proprietary AI models and device telemetry that detect even the most sophisticated digital manipulation or forgery.” 


Evolving Data Governance in the Age of AI: Insights from Industry Experts

While evolving existing data governance to meet AI needs is crucial, many organizations need to advance their DG first, before delving into AI governance. Existing data quality does not cover AI requirements. As mentioned in the previous section, current DG programs enforce roles, procedures and tools for some structured data throughout the company. Yet AI models learn from and use very large data sets, containing structured and unstructured data. All this data needs to be of good quality too, so that the AI model can respond accurately, completely, consistently, and relevantly. Companies frequently struggle to determine if their unstructured data, including videos and PowerPoint slides, is of sufficient quality for AI training and implementation. If organizations don’t address this issue, Haskell said, they “throw dollars at AI and AI tools,” because the bad data quality inputted gets outputted. For this reason, the fundamentals of data quality and clean-up take precedence over the drive to implement AI. O’Neal likened AI and its governance to an iceberg. The CEO and senior management see only the visible tip, with all of AI’s promise and reward.


On the Road to 2035, Banking Will Walk One of These Three Paths

Economist Impact’s latest report walks through three different potential scenarios that the banking sector will zero in on by 2035. Each paints a vivid picture of how technological advancements, shifting consumer expectations and evolving global dynamics could reshape the financial world as we know it. ... Digital transformation will be central to banking’s future, regardless of which scenario unfolds. Banks that fail to innovate and adapt to new technologies risk becoming obsolete. Trust will be a critical currency in the banking sector of 2035. Whether it’s through enhanced data protection, ethical AI use, or commitment to sustainability, banks must find ways to build and maintain customer trust in an increasingly complex world. The role of banks is likely to expand beyond traditional financial services. In all scenarios, we see banks taking on new responsibilities, whether it’s driving sustainable development, bridging geopolitical divides, or serving as the backbone for broader digital ecosystems. Flexibility and adaptability will be crucial for success. The future is uncertain and potentially fragmented, requiring banks to be agile in their strategies and operations to thrive in various possible environments.



Quote for the day:

"The secret of my success is a two word answer: Know people." -- Harvey S. Firestone

Daily Tech Digest - October 17, 2024

Digital addiction detox: Streamline tech to maximize impact, minimize risks

While digital addiction has been extensively studied at the individual level, organizational digital addiction is a relatively new area of concern. This addiction manifests as a tendency for the organization to throw technology mindlessly at any problem, often accumulating useless or misused technologies that generate ongoing costs without delivering proportional value. ... CIOs must simultaneously implement controls to prevent their organizations from reaching a tipping point where healthy exploration transforms into digital addiction. Striking this balance is delicate and requires careful management. Many innovative technology companies have found success by implementing “runways” for new products or technologies. These runways come with specific criteria for either “takeoff” or “takedown”. ... Unchecked technology adoption poses significant risks to organizations, often leading to vulnerabilities in their IT ecosystems. When companies rush to implement technologies without proper planning and safeguards, they lack the resilience to bounce back from adverse conditions because of insufficient redundancy and flexibility within systems, leaving organizations exposed to single points of failure.


Why are we still confused about cloud security?

A prevalent issue is publicly exposed storage, which often includes sensitive data due to excessive permissions, making it a prime target for ransomware attacks. Additionally, the improper use of access keys remains a significant threat, with a staggering 84% of organizations retaining unused highly privileged keys. Such security oversights have historically facilitated breaches, as evidenced by incidents like the MGM Resorts data breach in September 2023. ... Kubernetes environments present another layer of risk. The study notes that 78% of organizations have publicly accessible Kubernetes API servers, with significant portions allowing inbound internet access and unrestricted user control. This lax security posture exacerbates potential vulnerabilities. Addressing these vulnerabilities demands a comprehensive approach. Organizations should adopt a context-driven security ethos by integrating identity, vulnerability, misconfiguration, and data risk information. This unified strategy allows for precise risk assessment and prioritization. Managing Kubernetes access through adherence to Pod Security Standards and limiting privileged containers is essential, as is the regular audit of credentials and permissions to enforce the principle of least privilege.
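
As a concrete (and hedged) illustration of the unused-key finding, the sketch below flags highly privileged access keys that have not been used within a 90-day threshold so they can be rotated or revoked. The key inventory is hypothetical; in practice the data would come from the cloud provider's IAM reporting.

```python
# Minimal sketch of a least-privilege hygiene check: flag highly privileged
# access keys that have not been used within a threshold so they can be
# rotated or revoked. The key inventory is hypothetical placeholder data.
from datetime import datetime, timedelta, timezone

KEYS = [
    {"key_id": "key-alpha",   "privileged": True,  "last_used": datetime(2024, 3, 2, tzinfo=timezone.utc)},
    {"key_id": "key-bravo",   "privileged": True,  "last_used": datetime(2024, 10, 15, tzinfo=timezone.utc)},
    {"key_id": "key-charlie", "privileged": False, "last_used": None},
]

def stale_privileged_keys(keys, max_idle_days=90, now=None):
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_idle_days)
    return [
        k["key_id"]
        for k in keys
        if k["privileged"] and (k["last_used"] is None or k["last_used"] < cutoff)
    ]

if __name__ == "__main__":
    for key_id in stale_privileged_keys(KEYS, now=datetime(2024, 10, 17, tzinfo=timezone.utc)):
        print(f"{key_id}: privileged key unused for 90+ days -> rotate or revoke")
```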


The Architect’s Guide to Interoperability in the AI Data Stack

At the heart of an AI-driven world is data — lots of it. The choices you make today for storing, processing and analyzing data will directly affect your agility tomorrow. Architecting for interoperability means selecting tools that play nicely across environments, reducing reliance on any single vendor, and allowing your organization to shop for the best pricing or feature set at any given moment. ... Interoperability extends to query engines as well. Clickhouse, Dremio and Trino are great examples of tools that let you query data from multiple sources without needing to migrate it. These tools allow users to connect to a wide range of sources, from cloud data warehouses like Snowflake to traditional databases such as MySQL, PostgreSQL and Microsoft SQL Server. With modern query engines, you can run complex queries on data wherever it resides, helping avoid costly and time-consuming migrations. ... Architecting for interoperability is not just about avoiding vendor lock-in; it’s about building an AI data stack that’s resilient, flexible and cost-effective. By selecting tools that prioritize open standards, you ensure that your organization can evolve and adapt to new technologies without being constrained by legacy decisions. 


The role of compromised cyber-physical devices in modern cyberattacks

A cyber-physical device is a device that connects the physical world and computer networks. Many people may associate the term “cyber-physical device” with Supervisory Control and Data Acquisition (SCADA) systems and OT network segments, but there’s more to it. Devices that interconnect with the physical world give attackers a unique perspective: they allow them to perform on-the-ground observation of events, to monitor and observe the impact of their attacks, and sometimes even to make an impact on the physical world ... Many devices are compromised for the simple purpose of creating points of presence at new locations, so attackers can bypass geofencing restrictions. These devices are often joined and used as part of overlay networks. Many of these devices are not traditional routers but could be anything from temperature sensors to cameras. We have even seen compromised museum Android display boards in some countries. ... Realistically, I don’t believe there is a way to decrease the number of compromised devices. We are moving towards networks where IoT devices will be one of the predominant types of connected devices, with things like a dishwasher or fridge having an IP address.


Security at the Edge Needs More Attention

CISOs should verify that the tools they acquire and use do what they claim to do, or they may be in for surprises. Meanwhile, data and IP are at risk because it’s so easy to sign up for and use third-party cloud services and SaaS that the average user may not associate their data usage with organizational risk. “Users submitting spreadsheet formula problems to online help forums may inadvertently be sharing corporate data. People running grammar checking tools on emails or documents may be doing the same,” says Roger Grimes, data-driven defense evangelist at security awareness training and simulated phishing platform KnowBe4, in an email interview. “It's far too easy for someone using an AI-enabled tool to not realize they are inadvertently leaking confidential information outside their organizational environment.” ... It’s important for CISOs to have knowledge of and visibility into every asset in their company’s tech stack, though some CISOs see room for improvement. “You spend a lot of time and money on people, processes and technology to develop a layered security approach and defense in depth, and that doesn't work if you don't know you have something to defend there,” says Fowler.
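
The article does not prescribe a fix, but one lightweight way to picture the leak Grimes describes is a pre-submission check that flags likely-confidential strings before text is pasted into an external AI or SaaS tool. The patterns and the placeholder internal domain below are assumptions for illustration; a production setup would rely on a proper DLP or egress-control service rather than a handful of regexes.

```python
# Illustrative sketch (not from the article): flag likely-sensitive strings
# before a draft is sent to an external tool.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key": re.compile(r"\b(?:AKIA|sk-|ghp_)[A-Za-z0-9]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_host": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),  # placeholder domain
}

def leak_risk(text: str) -> list[str]:
    """Return the names of sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    draft = "Can you debug this? conn = connect('db.corp.example.com', key='sk-abc123DEF456ghi789')"
    hits = leak_risk(draft)
    if hits:
        print("Blocked: possible confidential data ->", hits)
    else:
        print("No obvious sensitive data found.")
```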


CIOs must also serve as chief AI officers, according to Salesforce survey

CIOs are now in the business of manufacturing intelligence and autonomous work. CIOs are now responsible for creating a work environment where humans and AI agents can collaborate and co-create value for stakeholders -- employees, customers, partners, and communities. CIOs must design, own, and deliver the roadmap to the autonomous enterprise, where autonomous work is maturing at light speed. ... CIOs are under pressure to quickly learn about, and implement, effective AI solutions in their businesses. While more than three in five CIOs think stakeholder expectations for their AI expertise are unrealistic, only 9% believe their peers are more knowledgeable. CIOs are also partnering with analyst firms (Gartner, Forrester, IDC, etc.) and technology vendors to learn more about AI. ... Sixty-one percent of CIOs feel they're expected to know more about AI than they do, and their peers at other companies are their top sources of information. CIOs must become better AI storytellers. In 1994, Steve Jobs said: "The most powerful person in the world is the storyteller. The storyteller sets the vision, values, and agenda of an entire generation that is to come." There is no better time than now for CIOs to lead the business transformation towards becoming AI-led companies.


Policing and facial recognition: What’s stopping them?

The question contains two “ifs” and a presumption; all are carrying a lot of weight. The first “if” is the legal basis for using FRT. Do the police have the power to use it? In England and Wales the police certainly have statutory powers to take and retain images of people, along with common law powers to obtain and store information about the citizen’s behavior in public. The government’s own Surveillance Camera Code of Practice (currently on policy’s death row) provides guidance to chief officers on how to do this and on operating overt surveillance systems in public places generally. The Court of Appeal found a “sufficient legal framework” covered police use of FRT, one that was capable of supporting its lawful deployment. ... The second “if” relates to the technology, i.e. “if FRT works, what’s stopping the police from using it?” Since a shaky introduction around 2015, when it didn’t work as hoped (or required), police facial recognition technology has come on significantly. The accuracy of the technology is much better, but is it accurate to say it now “works”? Each technology partner and purchasing police force must answer that for themselves – as for any other operational capability. That’s accountability.


How AI is becoming a powerful tool for offensive cybersecurity practitioners

What makes offensive security all the more important is that it addresses a potential blind spot for developers. “As builders of software, we tend to think about using whatever we’ve developed in the ways that it’s intended to be used,” says Caroline Wong, chief strategy officer at Cobalt Labs, a penetration testing company. In other words, Wong says, there can be a bias towards overemphasizing the good ways in which software can be used, while overlooking misuse and abuse cases or disregarding potentially harmful uses. “One of the best ways to identify where and how an organization or a piece of software might be susceptible to attack is by taking on the perspective of a malicious person: the attacker’s mindset,” Wong says. ... In addition to addressing manpower issues, AI can assist practitioners in scaling up their operations. “AI’s ability to process vast datasets and simulate large-scale attacks without human intervention allows for testing more frequently and on a broader scale,” says Augusto Barros, a cyber evangelist at Securonix, a security analytics and operations management platform provider. “In large or complex environments, human operators would struggle to perform consistent and exhaustive tests across all systems,” Barros says. 


While Cyberattacks Are Inevitable, Resilience Is Vital

Cybersecurity is all about understanding risk, applying the basic controls, and sprinkling in new technologies to keep the bad guys out and keep systems up and running with as little unplanned downtime as possible. “Cybersecurity is a risk game—as long as computers are required to deliver critical products and services, they will have some vulnerability to an attack,” Carrigan said. “Risk is a simple equation: Risk = Likelihood x Consequence. Most of our investments have been in reducing the ‘likelihood’ side of the equation. The future of OT cybersecurity will be in reducing the consequences of cyberattacks—specifically, how to minimize the impact of infiltration and restore operations within an acceptable period.” Manufacturers must understand their risk appetite and know what their organization’s crown jewels are, where they are, and how to protect them. “Applying the same security practices to all OT assets is not practical—some are more important than others, even within the same company and the same OT network,” Carrigan said. Remaining resilient to a cyber incident—any kind of incident—means manufacturers must apply the basics, sprinkle in some new technologies, and then plan, test, revise, and start that process all over again.
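
To make Carrigan's Risk = Likelihood x Consequence framing concrete, here is a minimal sketch in Python. The asset names and scores are invented for illustration; the point is simply that reducing consequence (for example, by cutting restoration time on a crown-jewel system) lowers risk even when likelihood stays the same.

```python
# Sketch of risk-based prioritization: Risk = Likelihood x Consequence.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    likelihood: float   # chance of compromise over the planning period, 0..1
    consequence: float  # business impact if compromised, e.g. downtime cost in $k

    @property
    def risk(self) -> float:
        return self.likelihood * self.consequence

assets = [
    Asset("packaging-line PLC", likelihood=0.30, consequence=50),
    Asset("batch-control server (crown jewel)", likelihood=0.10, consequence=2000),
    Asset("warehouse badge reader", likelihood=0.40, consequence=5),
]

# Spend first where risk, not just likelihood, is highest.
for a in sorted(assets, key=lambda a: a.risk, reverse=True):
    print(f"{a.name:40s} risk = {a.risk:8.1f}")

# Halving restoration impact on the crown jewel halves its consequence,
# and therefore its risk, without touching likelihood at all.
crown = assets[1]
crown.consequence /= 2
print(f"after resilience investment: {crown.name} risk = {crown.risk:.1f}")
```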


AI-Powered DevOps: Best Practices for Business Adoption

In security, AI tools are proving highly effective at proactively identifying and addressing vulnerabilities, boosting threat detection capabilities, and automating responses to emerging risks. Nonetheless, significant potential for AI remains in phases such as release management, deployment, platform engineering, and planning. These stages, which are crucial for ensuring software stability and scalability, could greatly benefit from AI's predictive abilities, resource optimization, and the streamlining of operational and maintenance processes. ... While generative AI and AI copilots have been instrumental in driving adoption of this technology, there remains a major shortage of AI expertise within DevOps. This gap is significant, especially given that humans remain deeply involved in the process, with over two-thirds of our respondents indicating they manually review AI-generated outputs at least half the time. To address these challenges, organizations should devise specialized training courses to properly equip their DevOps teams with the skills to leverage AI tools. Whether through industry-recognized courses or internal programs, encouraging certification can enhance technical expertise significantly.


Quote for the day:

"All progress takes place outside the comfort zone." -- Michael John Bobak