Daily Tech Digest - November 10, 2024

Technical Debt: An enterprise’s self-inflicted cyber risk

Technical debt issues vary in risk level depending on the scope and blast radius of the issue. Unaddressed high-risk technical debt issues create inefficiency and security exposure while diminishing network reliability and performance. There’s the obvious financial risk that comes from wasted time, inefficiencies, and maintenance costs. Adding tools potentially introduces new vulnerabilities, increasing the attack surface for cyber threats. A lot of the literature around technical debt focuses on obsolete technology on desktops. While this does present some risk, desktops have a limited blast radius when compromised. Outdated hardware and unattended software vulnerabilities within network infrastructure pose a much more imminent and severe risk as they serve as a convenient entry point for malicious actors with a much wider potential reach. An unpatched or end-of-life router, switch, or firewall, riddled with documented vulnerabilities, creates a clear path to infiltrating the network. By methodically addressing technical debt, enterprises can significantly mitigate cyber risks, enhance operational preparedness, and minimize unforeseen infrastructure disruptions. 


Why Your AI Will Never Take Off Without Better Data Accessibility

Data management and security challenges cast a long shadow over efforts to modernize infrastructures in support of AI and cloud strategies. The survey results reveal that while CIOs prioritize streamlining business processes through cloud infrastructures, improving data security and business resilience is a close second. Security is a persistent challenge for companies managing large volumes of file data and it continues to complicate efforts to enhance data accessibility. Nasuni’s research highlights that 49% of firms (rising to 54% in the UK) view security as their biggest problem when managing file data infrastructures. This issue ranks ahead of concerns such as rapid recovery from cyberattacks and ensuring data compliance. As companies attempt to move their file data to the cloud, security is again the primary obstacle, with 45% of all respondents—and 55% in the DACH region—citing it as the leading barrier, far outstripping concerns over cost control, upskilling employees and data migration challenges. These security concerns are not just theoretical. Over half of the companies surveyed admitted that they had experienced a cyber incident from which they struggled to recover. Alarmingly, only one in five said they managed to recover from such incidents easily. 


Exploring DORA: How to manage ICT incidents and minimize cyber threat risks

The SOC must be able to quickly detect and manage ICT incidents. This involves proactive, around-the-clock monitoring of IT infrastructure to identify anomalies and potential threats early on. Security teams can employ advanced tools such as security orchestration, automation and response (SOAR), extended detection and response (XDR), and security information and event management (SIEM) systems, as well as threat analysis platforms, to accomplish this. Through this monitoring, incidents can be identified before they escalate and cause greater damage. ... DORA introduces a harmonized reporting system for serious ICT incidents and significant cyber threats. The aim of this reporting system is to ensure that relevant information is quickly communicated to all responsible authorities, enabling them to assess the impact of an incident on the company and the financial market in a timely manner and respond accordingly. ... One of the tasks of SOC analysts is to ensure effective communication with relevant stakeholders, such as senior management, specialized departments and responsible authorities. This also includes the creation and submission of the necessary DORA reports.


What is Cyber Resilience? Insurance, Recovery, and Layered Defenses

While cyber insurance can provide financial protection against the fallout of ransomware, it’s important to understand that it’s not a silver bullet. Insurance alone won’t save your business from downtime, data loss, or reputation damage. As we’ve seen with other types of insurance, such as property or health insurance, simply holding a policy doesn’t mean you’re immune to risks. While cyber insurance is designed to mitigate financial risks, insurers are becoming increasingly discerning, often requiring businesses to demonstrate adequate cybersecurity controls before providing coverage. Gone are the days when businesses could simply “purchase” cyber insurance without robust cyber hygiene in place. Today’s insurers require businesses to have key controls such as multi-factor authentication (MFA), incident response plans, and regular vulnerability assessments. Moreover, insurance alone doesn’t address the critical issue of data recovery. While an insurance payout can help with financial recovery, it can’t restore lost data or rebuild your reputation. This is where a comprehensive cybersecurity strategy comes in — one that encompasses both proactive and reactive measures, involving components like third-party data recovery software.


Integrating Legacy Systems with Modern Data Solutions

Many legacy systems were not designed to share data across platforms or departments, leading to the creation of data silos. Critical information gets trapped in isolated systems, preventing a holistic view of the organization’s data and hindering comprehensive analysis and decision-making. ... Modern solutions are designed to scale dynamically, whether it’s accommodating more users, handling larger datasets, or managing more complex computations. In contrast, legacy systems are often constrained by outdated infrastructure, making it difficult to scale operations efficiently. Addressing this requires refactoring old code and updating the system architecture to manage accumulated technical debt. ... Older systems typically lack the robust security features of modern solutions, making them more vulnerable to cyber-attacks. Integrating these systems without upgrading security protocols can expose sensitive data to threats. Ensuring robust security measures during integration is critical to protect data integrity and privacy. ... Maintaining legacy systems can be costly due to outdated hardware, limited vendor support, and the need for specialized expertise. Integrating them with modern solutions can add to this complexity and expense. 


The challenges of hybrid IT in the age of cloud repatriation

The story of cloud repatriation is often one of regaining operational control. A recent report found that 25% of organizations surveyed are already moving some cloud workloads back on-premises. Repatriation offers an opportunity to address issues such as rising costs, data privacy concerns, and security gaps. Depending on their circumstances, managing IT resources internally can allow some organizations to customize their infrastructure to meet these specific needs while providing direct oversight over performance and security. With rising regulations surrounding data privacy and protection, enhanced control over on-prem data storage and management provides significant advantages by simplifying compliance efforts. ... However, cloud repatriation can often create challenges of its own. The costs associated with moving services back on-prem can be significant: new hardware, increased maintenance, and energy expenses should all be factored in. Yet, for some, the financial trade-off for repatriation is worth it, especially if cloud expenses become unsustainable or if significant savings can be achieved by managing resources partially on-prem. Cloud repatriation is a calculated risk that, if done for the right reasons and executed successfully, can lead to efficiency and peace of mind for many companies.


IT Cost Reduction Strategies: 3 Unexpected Ways Enterprise Architecture Can Help

This is easier said than done with the traditional process of manual follow-ups, hampered by inconsistent documentation that is often scattered across many teams. These documentation issues also often mean that maintenance efforts are duplicated, wasting resources that could have been better deployed elsewhere. The result is the equivalent of around 3 hours of a dedicated employee’s focus per application per year spent on documentation, governance, and maintenance. Not so for the organization that has a digital-native EA platform that leverages your data to enable scalability and automation in workflows and messaging so you can reach out to the most relevant people in your organization when it's most needed. Features like these can save an immense amount of time otherwise spent identifying the right people to talk to and when to reach out to them, making a company's Enterprise Architecture the single source of truth and a solid foundation for effective governance. The result is a reduction of approximately a third of the time usually needed to achieve this. That valuable time can then be reallocated toward other, more strategic work within the organization. We have seen that a mid-sized company can save approximately $70,000 annually by reducing its documentation and governance time.


How Rules Can Foster Creativity: The Design System of Reykjavík

Design systems have already gained significant traction, but many are still in their early stages, lacking atomic design structures. While this approach may seem daunting at first, as more designers and developers grow accustomed to working systematically, I believe atomic design will become the norm. Today, most teams create their own design systems, but I foresee a shift toward subscription-based or open-source systems that can be customized at the atomic level. We already see this with systems like Google’s Material Design, IBM’s Carbon, Shopify’s Polaris, and Atlassian’s design system. Adopting a pre-built, well-supported design system makes sense for many organizations. Custom systems are expensive and time-consuming to build, and maintaining them requires ongoing resources, as we learned in Reykjavík. By leveraging a tried-and-tested design system, teams can focus on customization rather than starting from scratch. Contrary to popular belief, this shift won’t stifle creativity. For public services, there is little need for extreme creativity regarding core functionality - these products simply need to work as expected. AI will also play a significant role in evolving design systems.


Eyes on Data: A Data Governance Study Bridging Industry and Academia

The researcher, Tony Mazzarella, is a seasoned data management professional with extensive experience in data governance within large organizations. His professional and research observations have identified key motivations for this work: Data Governance has a knowledge problem. Existing literature and publications are overly theoretical and lack empirical guidance on practical implementation. The conceptual and practical entanglement of governance and management concepts and activities exacerbates this issue, leading to divergent definitions and perceptions that data governance is overly theoretical. The “people” challenges in data management are often overlooked. Culture is core to data governance, but its institutionalization as a business function first took hold in the financial services industry, alongside a shift toward regulatory compliance in response to the 2008 financial crisis. “Data culture” has re-emerged in all industries, but it implies the governance function is tasked with fostering culture change rather than emphasizing that data governance requires a culture change, which is a management challenge. Data Management’s industry-driven nature and reactive ethos result in unnecessary change as the macroenvironment changes, undermining process resilience and sustainability.


The future of data center maintenance

Condition-based maintenance and advanced monitoring services provide operators with more information about the condition and behavior of assets within the system, including insights into how environmental factors, controls, and usage drive service needs. The ability to recommend actions for preventing downtime and extending asset life allows a focus on high-impact items instead of tasks that don't immediately affect asset reliability or lifespan. These items include lifecycle parts replacement, optimizing preventive maintenance schedules, managing parts inventories, and optimizing control logic. The effectiveness of a service visit can subsequently be validated as the actions taken are reflected in asset health analyses. ... Condition-based maintenance and advanced monitoring services include a customer portal for efficient equipment health reporting. Detailed dashboards display site health scores, critical events, and degradation patterns. ... The future of data center maintenance is here – smarter, more efficient, and more reliable than ever. With condition-based maintenance and advanced monitoring services, data centers can anticipate risks and benchmark assets, leading to improved risk management and enhanced availability.



Quote for the day:

"It's not about how smart you are--it's about capturing minds." -- Richie Norton

Daily Tech Digest - November 09, 2024

How the infrastructure industry is leveraging AI and digital twins

The challenges in scaling up the adoption of AI-powered digital twins across the infrastructure sector are multifaceted. First, engineering firms often struggle to obtain clear requirements from owner-operators. While these firms manage design and sometimes construction, they rely on owner-operators to request a digital twin as part of the final infrastructure asset. However, this willingness to adopt digital twins is still lacking in some regions and sectors. Second, many engineering firms need more support due to the high demand for infrastructure. Cumins emphasizes, “This resource constraint makes it more difficult for firms to invest in and effectively implement AI-powered digital twins.” The increasing backlog of projects leaves little time for firms to adopt new technologies and change their workflows. The third and more fundamental challenge is access to historical data, which is crucial for training AI models. “For instance,” Cumins explains, “we train our AI agents using Bentley’s software, which teaches the rules of various engineering disciplines, such as structural and geotechnical engineering. Engineering firms can then fine-tune these AI agents using their historical data and project conditions.”


Serverless computing’s second act

Despite its early hurdles, serverless computing has bounced back, driven by a confluence of evolving developer needs and technological advancements. Major cloud providers such as AWS, Microsoft Azure, and Google Cloud have poured substantial resources into serverless technologies to provide enhancements that address earlier criticisms. For instance, improvements in debugging tools, better handling of cold starts, and new monitoring capabilities are now part of the serverless ecosystem. Additionally, integrating artificial intelligence and machine learning promises to expand the possibilities of serverless applications, making them seem more innovative and responsive. ... One crucial question remains: Is this resurgence enough to secure the future of serverless computing, or is it simply an attempt by cloud providers to recoup their significant investments? At issue is the number of enterprises that have invested in serverless application development. As you know, this investment goes beyond just paying for the serverless technology: once applications are built around it, moving them to other platforms is costly. A temporary fix might not suffice in the long run. While the current trends and forecasts are promising, the final verdict will largely depend on how serverless can overcome past weaknesses and adapt to emerging technological landscapes and enterprise needs.


GenAI’s Impact on Cybersecurity

GenAI is both a blessing and a curse when it comes to cybersecurity. “On the one hand, the incorporation of AI into security tools and technologies has greatly enhanced vendor tooling to provide better threat detection and response through AI-driven features that can analyze vast amounts of data, far quicker than ever before, to identify patterns and anomalies that signal cyber threats,” says Erik Avakian, technical counselor at Info-Tech Research Group. “These new features can help predict new attack vectors, detect malware, vulnerabilities, phishing patterns and other attacks in real-time, including automating the response to certain cyber incidents. This greatly enhances our incident response processes by reducing response times and allowing our security analysts to focus on other and more complex tasks.” ... Meanwhile, hackers and hacking groups have already incorporated AI and large language model (LLM) capabilities to carry out incredibly sophisticated attacks, such as next-generation phishing and social engineering attacks using deepfakes. “The incorporation of voice impersonation and personalized content through ‘deepfake’ attacks via AI-generated videos, voices or images make these attacks particularly harder to detect and defend against,” says Avakian.


Has the Cybersecurity Workforce Peaked?

Jobseekers are likely also running afoul of the trend in ghost-job posting. Nearly half of hiring managers have admitted to keeping job postings open, even when they are not looking to fill a specific position. That's being used as a way to keep employees motivated, give the impression the company is growing, or to placate overworked employees, according to a survey conducted by Clarify Capital. These ghost jobs are a significant problem for cybersecurity job seekers in particular, with one resume site estimating that 46% of listings for a cybersecurity analyst in the United Kingdom were positions that would never be filled--compared with about a third for all roles. ... Those economic pressures are another reason that purported jobs are not materializing, says Jon Brandt, director of professional practices and innovation at ISACA, an information-technology certification organization. "People can respond to any survey and say, hey, we have a need for 20 more people," he says. "But at the end of the day, unless an organization is taking active steps to hire, then that's not a data point we should be looking at right now." For entry-level workers without significant experience, the picture is especially grim. Cyberseek's career pathway data shows that demand for workers resembles a reverse pyramid. 


You Have an SBOM — What Are the Next Steps?

To maximize SBOM benefits, integrate them into your SDLC and automate the process whenever possible. This ensures real-time updates, maintaining accuracy as your software evolves. Regular updates reduce the risk of outdated data, enhancing transparency and security. Automating SBOM creation by integrating them into CI/CD pipelines ensures an SBOM with each build, providing a reliable record of software components. By setting up quality gates in your CI/CD workflows, you can scan SBOMs for security vulnerabilities and licensing issues, stopping noncompliant components from moving forward in deployment. During quality assurance (QA), SBOMs are vital for ensuring compliance and security before release. They ensure each release meets industry standards and best practices. By integrating SBOMs into CI/CD and QA processes, development teams establish a robust framework for transparency and compliance, boosting software supply chain security at all stages. ... Effective SBOM management extends beyond the development phase. Once in production, SBOMs need to be continuously monitored to ensure ongoing security and compliance, especially as new vulnerabilities emerge.
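
To make the quality-gate idea concrete, here is a minimal Python sketch, assuming a CycloneDX-format SBOM file (e.g., sbom.json) and a hypothetical hard-coded blocklist of known-vulnerable component versions; a real pipeline would run a vulnerability scanner against the SBOM instead of a static list, but the gating logic is the same: fail the build on any violation.

```python
import json
import sys

# Hypothetical blocklist of (component, version) pairs; a real quality gate
# would query a scanner or vulnerability database rather than hard-code this.
BLOCKED_COMPONENTS = {
    ("log4j-core", "2.14.1"),
    ("openssl", "1.0.2"),
}

def load_sbom_components(path):
    """Read (name, version) pairs from a CycloneDX JSON SBOM."""
    with open(path) as f:
        sbom = json.load(f)
    return [(c.get("name", ""), c.get("version", "")) for c in sbom.get("components", [])]

def quality_gate(path):
    violations = [c for c in load_sbom_components(path) if c in BLOCKED_COMPONENTS]
    if violations:
        for name, version in violations:
            print(f"BLOCKED: {name}@{version} is on the known-vulnerable list")
        return 1  # non-zero exit code stops the pipeline stage
    print("SBOM quality gate passed")
    return 0

if __name__ == "__main__":
    sys.exit(quality_gate(sys.argv[1] if len(sys.argv) > 1 else "sbom.json"))
```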


AI-Powered Enterprise Architecture: A Strategic Imperative

AI can significantly enhance EA reusable knowledge repositories, architecture diagrams, and visualizations by analyzing real-time and historical projects, programs, solution designs, and other relevant data to identify anomalies, bottlenecks, and optimization opportunities in designing robust technology solutions. AI-powered solution design monitoring systems could detect a sudden increase in website traffic and automatically scale server resources to handle the increased load; such measures could have cost and end-user experience issues that could potentially impact business. Technical experts can apply the insights learned to ensure the future design of robust solutions by considering aspects of application behavior that may not have been considered. AI can streamline the architecture design process by generating multiple design options, simulating different scenarios, and optimizing designs based on performance and cost. Using generative design techniques, AI can create innovative and efficient solution design patterns that would be difficult or non-pragmatic to achieve through traditional methods. For example, an AI-powered design tool could generate multiple network designs, each with different topologies and configurations, and then evaluate the performance and cost of each design to identify the optimal solution.


How enterprises can identify and control AI sprawl

AI sprawl refers to the uncontrolled proliferation of AI tools across an organization. Just like with cybersecurity, there are too many tools solving too many things without centralized oversight. This leads to inefficiencies, redundancies, and significant security risks. For instance, various departments, like sales and marketing, might independently adopt different AI solutions for similar problems, but these solutions don’t integrate or align with each other. This increases costs and operational inefficiencies. AI sprawl also raises governance challenges, making it difficult to ensure data quality, consistency, and security. ... CIOs are in a unique position because they oversee multiple functions while CTOs tend to focus more on the engineering side of the product. At Nutanix, we’re adopting a centralized AI governance approach. We’ve established a cross-functional committee to take inventory of all existing AI tools and develop a unified strategy. This includes creating policies, frameworks, and best practices that align with the company’s overall objectives. ... With AI tools spread across an organization it’s difficult to ensure data quality and security. Each tool might store or process data in different ways, potentially exposing sensitive information and increasing the risk of compliance violations, such as GDPR breaches. 


Strengthening OT Cybersecurity in the Age of Industry 4.0

Historically, OT systems were not considered significant threats due to their perceived isolation from the Internet. Organizations relied on physical security measures, such as door locks, passcodes, and badge readers, to protect against hands-on access and disruption to physical operational processes. However, the advent of the 4th Industrial Revolution, or Industry 4.0, has introduced smart technologies and advanced software to optimize efficiency through automation and data analysis. This digital transformation has interconnected OT and IT systems, creating new attack vectors for adversaries to exploit and access sensitive data. ... First, security leaders should isolate OT networks from IT networks and the Internet to limit the attack surface and verify that the networks are segmented. This should be monitored 24/7 to ensure network segmentation effectiveness and proper functioning of security controls. This containment strategy helps prevent lateral movement within the network during a breach. Real-time network monitoring and the appropriate alert escalation (often notifying the plant supervisor or controls engineer who are in the best position to verify if access or a configuration change is appropriate and planned, not the IT SOC) aids in the rapid detection and response to threats. 


Tips for making sure your AI-powered FP&A efforts are successful

One of the biggest problems with AI is the issue of security. Many finance teams hesitate to embrace AI solutions out of concerns that they could undermine data privacy or weaken data security. Data security is important, as handling large amounts of sensitive information requires robust protection measures. These concerns are well-founded, too – last year, Samsung banned employees from using third-party GenAI tools after ChatGPT leaked sensitive data. International regulations are also catching up with AI and establishing requirements around data privacy and security. It’s important to build clear policies around data use, set up and regularly review access permissions, and establish logging and monitoring to track unauthorised use or data access. Consult international best practices for AI-related data privacy and put their recommendations into practice, because they are likely to strongly inform evolving compliance regulations. ... The best AI tools in the world won’t be much use if your finance teams avoid actually using them. Many employees are nervous that AI could take over their jobs and/or distrust the tech, which leads them to ignore AI-powered insights. Using AI tools effectively also requires digital literacy and technical skills that may be lacking among your employees.


Mind the Gap: Migration Projects – Gaining Traction or Spinning Your Wheels

Think about your migration project like running a marathon in rented shoes. (I know, I know. It’s not a photo-realistic example, but stick with me. You’ll get the point.) You start out with some good shoes, but they’re very expensive. Comfortable and well-fitting, but expensive. At, say, the 10-mile marker you have the opportunity to swap out your shoes. The ones you have are expensive and you don’t want to keep spending the money. Besides, you’re doing fine. So, you stop, select a less expensive pair, and put them on. All the while, the clock is ticking and you’re not making any progress toward the finish line. You’re betting on the expectation that you’ll make up the lost time by running the remainder of the race faster. The shoes are cheaper, but they don’t fit as well, and after a few miles your feet begin to hurt. Your pace slows considerably. You finish the race. Eventually. Far short of your goal, blood soaking through your socks, and far slower than had you not migrated. As you hobble back home with your disappointing result, you can console yourself with the money you saved as you try to convince yourself that it was worth it.



Quote for the day:

“Identify your problems but give your power and energy to solutions.” -- Tony Robbins

Daily Tech Digest - November 08, 2024

Improve Microservices With These New Load Balancing Strategies

Load balancing in a microservices setup is tricky yet crucial because it directly influences the system availability and performance level. To ensure that no single instance gets overloaded with user requests and to maintain operation even when one instance experiences issues, it is vital to distribute end-user requests among various service instances. This involves utilizing service discovery to pinpoint available instances, dynamic load balancing to adjust to load changes, and fault-tolerant health checks that monitor and redirect traffic away from malfunctioning instances to maintain system stability. These tactics work together to guarantee a solid and efficient microservices setup. ... With distributed caching, intelligent load balancing, and event-driven system designs, microservices outperform today’s monolithic architectures in performance, scalability, and resilience qualities. Microservices are much more efficient in resource utilization and response times since individual components can be scaled as needed. However, one must remember that these performance improvements come with higher complexity. Implementing them is a complex process that needs to be monitored and optimized repeatedly.
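
As a minimal sketch of these tactics, the Python below implements a client-side round-robin balancer that periodically health-checks instances and routes requests only to those currently marked healthy; the instance URLs and the /health endpoint are illustrative assumptions, and in practice the instance list would come from service discovery.

```python
import itertools
import urllib.request

# Illustrative instance list; in practice this comes from service discovery.
INSTANCES = [
    "http://10.0.0.11:8080",
    "http://10.0.0.12:8080",
    "http://10.0.0.13:8080",
]

class RoundRobinBalancer:
    def __init__(self, instances):
        self.instances = list(instances)
        self.healthy = set(self.instances)
        self._cycle = itertools.cycle(self.instances)

    def health_check(self, timeout=1.0):
        """Mark each instance healthy or unhealthy based on a /health probe."""
        for instance in self.instances:
            try:
                with urllib.request.urlopen(f"{instance}/health", timeout=timeout) as resp:
                    ok = resp.status == 200
            except OSError:  # connection refused, timeout, DNS failure, etc.
                ok = False
            (self.healthy.add if ok else self.healthy.discard)(instance)

    def next_instance(self):
        """Return the next healthy instance in round-robin order."""
        for _ in range(len(self.instances)):
            candidate = next(self._cycle)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy instances available")

balancer = RoundRobinBalancer(INSTANCES)
balancer.health_check()                # run periodically in a real system
if balancer.healthy:
    target = balancer.next_instance()  # forward the user request to this instance
```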


Achieving Net Zero: The Role Of Sustainable Design In Tech Sector

With an increasing focus on radical climate actions, environmentally responsible product design emerges as a vital tactic for achieving net zero. According to the latest research, more than two-thirds of organisations have reduced their carbon emissions as a result of the implementation of sustainable product design strategies. ... For businesses seeking to enhance sustainability it is essential to adopt a holistic approach. This means not only focusing on specific products but also examining the entire life cycle from design and packaging to end of life. It is crucial for all tech businesses to consider how sustainability can be maintained even after products and services have been purchased. Thus, enhancing product repairability is another key tactic to boost sustainability. Given that electronic waste contributes to 70% of all toxic waste and only about 12% of all e-waste is recycled properly right now, any action individual consumers can take to repair or recycle their old tech responsibly is a step toward a cleaner future. By integrating design features such as keyboard-free battery connectors and providing instructional repair videos, companies can make it easier for customers to repair their products, extending their lifespan and ultimately reducing waste.


How to Maximize DevOps Efficiency with Platform Engineering

Platform engineering can also go awry when the solutions an organization offers are difficult to deploy. In theory, deploying a solution should be as simple as clicking a button or deploying a script. But buggy deployment tools, as well as issues related to inconsistent software environments, might mean that DevOps engineers have to spend time debugging and fixing flawed platform engineering offerings — or ask the IT team to do it. In that case, a solution that was supposed to save time and simplify collaboration ends up doing the opposite. Along similar lines, platform engineering delivers little value when the solutions don't consistently align with the organization's governance and security policies. This tends to be an issue in cases where different teams implement different solutions and each team follows its own policies, instead of adhering to organization-wide rules. (It can also happen because the organization simply lacks clear and consistent security policies.) If the environments and toolchains that DevOps teams launch through platform engineering are insecure or inconsistently configured, they hamper collaboration and fail to streamline software delivery processes.


How banks can supercharge technology speed and productivity

Banks that want to increase technology productivity typically must change how engineering and business teams work together. Getting from an idea for a new customer feature to the start of coding has historically taken three to six months. First, business and product teams write a business case, secure funding, get leadership buy-in, and write requirements. Most engineers are fast at producing code once the requirements are clear, but when they must wait six months before they even write the first line, productivity stalls. Taking a page from digital-native companies, a number of top-performing banks have created joint teams of product managers and engineers. Each integrated team operates as a mini-business, with product managers functioning as mini-CEOs who help their teams work together toward quarterly objectives and key results (OKRs). With everyone collaborating in this manner, there is less need for time-consuming handoff tasks such as creating formal requirements and change requests. This way of working also unlocks greater product development speed and enables much greater responsiveness to customer needs. While most financial institutions already manage their digital and mobile teams in this product-centric way, many still use a traditional project-centric approach for the majority of their teams.


Choosing AI: the 7 categories cybersecurity decision-makers need to understand

As cybersecurity professionals, we want to avoid the missteps of the last era of digital innovation, in which large companies developed web architecture and product stacks that dramatically centralized the apparatus of function across most sectors of the global economy. The era of online platforms underwritten by just a few interlinked developer and technology infrastructure firms showed us that centralized innovation often restricts the potential for personalization for end users, which limits the benefits. ... It’s true that a CISO might want AI systems that reduce options and make their practice easier, so long as the outputs being used are trustworthy. But if the current state of development is sufficient that we should be wary of analytic products, it’s also enough for us to be downright distrustful of products that generate, extrapolate preferences, or find consensus. At present, these product styles are promising but entirely insufficient to mitigate the risks involved in adopting such unproven technology. By contrast, CISOs should think seriously about adopting AI systems that facilitate information exchange and understanding, and even about those that play a direct role in executing decisions. 


How GraphRAG Enhances LLM Accuracy and Powers Better Decision-Making

GraphRAG’s key benefit is its remarkable ability to improve LLMs’ accuracy and long-term reasoning capabilities. This is crucial because more accurate LLMs can automate increasingly complex and nuanced tasks and provide insights that fuel better decision-making. Additionally, higher-performing LLMs can be applied to a broader range of use cases, including those within sensitive industries that require a very high level of accuracy, such as healthcare and finance. That being said, human oversight is necessary as GraphRAG progresses. It’s vital that each answer or piece of information the technology produces is verifiable, and its reasoning can be traced back manually through the graph if necessary. In today’s world, success hinges on an enterprise’s ability to understand and properly leverage its data. But most organizations are swimming in hundreds of thousands of tables of data with little insight into what’s actually going on. This can lead to poor decision-making and technical debt if not addressed. Knowledge graphs are critical for helping enterprises make sense of their data, and when combined with RAG, the possibilities are endless. GraphRAG is propelling the next wave of generative AI, and organizations who understand this will be at the forefront of innovation.
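
To make the retrieval step tangible, here is a toy Python sketch, not a real GraphRAG implementation: a small knowledge graph is stored as an adjacency map, facts connected to the entities in a question are collected hop by hop, and the retrieved facts are assembled into a context block that would be prepended to the LLM prompt. The graph contents are invented, and the final LLM call is deliberately left out; the point is that every retrieved statement remains traceable to a specific graph edge.

```python
# Toy knowledge graph: entity -> list of (relation, target) facts.
GRAPH = {
    "AcmeCorp": [("acquired", "WidgetCo"), ("headquartered_in", "Berlin")],
    "WidgetCo": [("founded_in", "2011"), ("produces", "industrial sensors")],
}

def retrieve_facts(entities, max_hops=2):
    """Walk outward from the query entities, recording each traversed edge
    so every retrieved fact stays traceable back through the graph."""
    facts, frontier = [], list(entities)
    for _ in range(max_hops):
        next_frontier = []
        for entity in frontier:
            for relation, target in GRAPH.get(entity, []):
                facts.append(f"{entity} --{relation}--> {target}")
                next_frontier.append(target)
        frontier = next_frontier
    return facts

def build_prompt(question, entities):
    context = "\n".join(retrieve_facts(entities))
    return f"Use only these verifiable facts:\n{context}\n\nQuestion: {question}"

print(build_prompt("What does the company AcmeCorp acquired produce?", ["AcmeCorp"]))
# The prompt would then be sent to the LLM of choice; every statement in the
# answer can be checked against the listed graph edges.
```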


Why Banks Should Rethink ‘Every Company is a Software Company’

Refocusing on core strengths can yield substantial benefits. For example, by enhancing customer experience through personalized financial advice, banks can deepen customer loyalty and foster long-term relationships. Improving risk assessment processes can lead to more accurate lending decisions and better management of financial exposures. Ensuring rigorous regulatory compliance is not only crucial for avoiding costly penalties but also for preserving a strong reputation in the market. Outsourcing software and AI development to specialized providers is a strategic opportunity that can offer significant benefits. By partnering with technology firms, banks can tap into cutting-edge advancements without bearing the heavy burden of developing and maintaining them themselves. ... AI is a powerful ally, enabling financial institutions to streamline operations, innovate faster, and stay ahead in an ever-evolving market. To achieve sustainable success, however, these institutions need to rethink their approach to software and AI investments. By focusing on core competencies and leveraging specialized providers for technological needs, these institutions can optimize their operations and achieve the results they’re looking for.


Steps Organizations Can Take to Improve Cyber Resilience

Protecting endpoints will become increasingly important as more internet-enabled devices – like laptops, smartphones, IoT hardware, tablets, etc. – hit the market. Endpoint protection is also essential for companies that embrace remote or hybrid work. By securing every possible endpoint, organizations address a common attack plane for cyberattackers. One of the fastest paths to endpoint protection is to invest in purpose-built solutions that go beyond basic antivirus software. To get ahead of cybersecurity threats, teams need real-time monitoring and threat detection capabilities. ... Cybersecurity teams should implement DNS filtering to prevent users from accessing websites that are known for hosting malicious activity. Technology solutions specifically designed for DNS filtering can also evaluate requests in real time between devices and websites before determining whether to allow the connection. Additionally, they can evaluate overall traffic patterns and user behaviors, helping IT leaders make more informed decisions about how to boost web security practices across the organization. ... Achieving cyber resilience is an ongoing process. The digital landscape changes constantly, and the best way to keep up is to make cybersecurity a focal point of everyday operations. 
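
As a simplified illustration of the DNS-filtering step, the Python sketch below refuses to resolve a domain if it, or any parent domain, appears on a blocklist; the blocklist entries here are made up, and production deployments sit in the resolution path itself and pull domains from curated threat-intelligence feeds rather than application code.

```python
import socket

# Illustrative blocklist; real deployments pull these from threat-intel feeds.
BLOCKED_DOMAINS = {"malware-example.test", "phishing-example.test"}

def is_blocked(domain):
    """Block a domain if it, or any parent domain, is on the blocklist."""
    labels = domain.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in BLOCKED_DOMAINS for i in range(len(labels)))

def filtered_resolve(domain):
    if is_blocked(domain):
        raise PermissionError(f"DNS request for {domain} blocked by policy")
    return socket.gethostbyname(domain)  # normal resolution for allowed domains

print(filtered_resolve("example.com"))         # resolves normally
# filtered_resolve("ads.malware-example.test") # would raise PermissionError
```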


The future of super apps: Decentralisation and security in a new digital ecosystem

Decentralised super apps could redefine public utility by providing essential services without private platform fees, making them accessible and affordable. This approach would serve the public interest by enabling fairer, community-driven access to essential services. For example, a decentralised grocery delivery service might allow local vendors to reach consumers without relying on platforms like Blinkit or Zepto, potentially lowering costs and supporting local businesses. As blockchain technology progresses, decentralised finance (DeFi) can also be integrated into super apps, allowing users to manage transactions securely and privately. ... Despite the potential, the path to decentralised super apps comes with challenges. Building a secure, decentralised platform requires sophisticated blockchain infrastructure, a high level of trust, and user education. Blockchain technology is still evolving, and decentralised applications (dApps) often face issues with scalability, user adoption, and regulatory scrutiny. For instance, certain countries have strict data privacy laws that could either facilitate or hinder the adoption of decentralised super apps depending on the regulatory stance towards blockchain.


Digital Transformation in Banking: Don't Let Technology Steal Your Brand

A clear, purpose-driven brand that communicates empathy, reliability, and transparency is essential to winning and retaining customer trust. Banks that invest in branding as part of their digital transformation connect with customers on a deeper level, creating bonds that withstand market fluctuations and competitive pressures. ... The focus on digital transformation has intensified competition among banks to adopt the latest technologies. While technology is essential for operational efficiency and customer convenience, it’s not the core of a bank’s identity. A bank’s brand is built on values like trust, reliability, and customer service—values that technology should reinforce, not replace. Banks need to keep a clear sight of their purpose: to serve customers’ financial well-being, empower their dreams, and create trust in every interaction. ... It’s tempting to jump on the latest tech trends to stay competitive, but each technological investment should reflect the bank’s brand values and serve customer needs. For instance, mobile banking apps, digital wallets, and AI-based financial planning tools all present opportunities to deepen brand connections.



Quote for the day:

“The final test of a leader is that he leaves behind him in other men the conviction and the will to carry on.” -- Walter Lippmann

Daily Tech Digest - November 07, 2024

Keep Learning or Keep Losing: There's No Finish Line

Traditional training and certifications are a starting point, but they're often not enough to prepare professionals for real-world challenges. Current research supports a need for cybersecurity education to be interactive, with practical approaches that deepen both engagement and understanding. ... For cybersecurity professionals, a commitment to lifelong learning is a career advantage. Those who prioritize continuous education stand out, not only because they keep pace with industry advancements but also because they demonstrate a proactive mindset valued by employers. Embracing lifelong learning positions professionals for growth, higher responsibility and leadership opportunities within their organizations. Organizations that foster a culture of continuous learning create an environment in which employees feel empowered and supported in their growth. These organizations often find they retain talent longer and perform better in crisis situations because their teams are both knowledgeable and resilient. By prioritizing ongoing education, companies can cultivate a workforce that's agile, engaged and better prepared to face cyberthreats head-on. In cybersecurity, the question isn't whether you'll keep learning - it's how you'll keep learning. 


Top 5 security mistakes software developers make

“A very common practice is the lack of or incorrect input validation,” Tanya Janca, who is writing her second book on application security and has consulted for many years on the topic, tells CSO. Snyk also has blogged about this, saying that developers need to “ensure accurate input validation and that the data is syntactically and semantically correct.” Stackhawk wrote, “always make sure that the backend input is validated and sanitized properly.” ... One aspect of lax authentication has to do with what is called “secrets sprawl,” the mistake of using hard-coded credentials in the code, including API and encryption keys and login passwords. Git Guardian tracks this issue and found that almost every breach exposing such secrets remained active for at least five days after the software’s author was notified. They found that a tenth of open-source authors leaked a secret, which amounts to bad behavior of about 1.7 million developers. ... But there is a second issue that goes to understanding security culture so you can make the right choices of tools that will actually get deployed by your developers. Jeevan Singh blogs about this issue, mentioning that you have to start small and not just go shopping for everything all at once, “so as not to overwhelm your engineering organization with huge lists of vulnerabilities. ..."
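
To ground the input-validation point, here is a small Python sketch, with illustrative field names and rules, that checks both syntactic correctness (shape and format) and semantic correctness (allowed ranges) of user-supplied data before it reaches backend logic, and then uses a parameterized query rather than string concatenation to avoid injection.

```python
import re
import sqlite3

USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,32}")     # syntactic rule

def validate_order(payload):
    """Validate syntax (format) and semantics (allowed ranges) of the input."""
    username = str(payload.get("username", ""))
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("username must be 3-32 alphanumeric/underscore characters")
    try:
        quantity = int(payload.get("quantity"))
    except (TypeError, ValueError):
        raise ValueError("quantity must be an integer")
    if not 1 <= quantity <= 100:                     # semantic rule
        raise ValueError("quantity must be between 1 and 100")
    return {"username": username, "quantity": quantity}

def insert_order(conn, payload):
    clean = validate_order(payload)
    # Parameterized query: the driver binds values safely, preventing SQL injection.
    conn.execute(
        "INSERT INTO orders (username, quantity) VALUES (?, ?)",
        (clean["username"], clean["quantity"]),
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (username TEXT, quantity INTEGER)")
insert_order(conn, {"username": "alice_01", "quantity": 3})
```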


There is No Autonomous Network Without Observability

One of the best things about observability is how it strengthens network resilience. Downtime not only damages your reputation and frustrates your customers; it is also flat-out expensive. Observability helps you spot vulnerabilities before they become major issues. With real-time insights, you can jump in and make fixes before they lead to downtime or degraded performance. Plus, observability works hand-in-hand with AI-driven assurance systems. By constantly monitoring performance, these systems diligently look for patterns that might hint at future problems. They can make proactive adjustments, which cut down on the need for manual intervention. The result? A network that is more self-reliant, adaptive, and able to keep running smoothly. Observability doesn’t just stop there—it also steps up your security game. With threat detection built into every layer of the network, observability helps your network identify and deal with security issues in real time, making it not just self-healing but self-securing. ... Today’s networks are not confined to one domain anymore. We are working with multi-domain networks that tie together radio, transport, and cloud technologies. That creates a massive amount of data, and managing that data in real time is a challenge.


Building a better future: The enterprise architect’s role in leading organizational transformation

Architects bring unique capabilities that make them well-suited for leadership roles in an evolving business landscape. Their core strength lies in aligning technology with business goals. This keeps innovation and growth interconnected. Unlike traditional executives, architects have a holistic view of both domains, allowing them to see the big picture and drive meaningful change. With deep technical expertise, architects can navigate complex systems, platforms, and infrastructures. But their strategic thinking sets them apart—they don’t just focus on technology in isolation. They understand how it drives business value, enabling them to make informed decisions that benefit both the organization and its customers. Moreover, architects are natural collaborators. They excel at bridging gaps between different business units, fostering cross-functional teams, and ensuring integrated solutions that work for the entire organization. This ability to collaborate across departments makes them ideal for leadership in a world that values adaptability, inclusivity, and alignment over rigid command structures. The shift from a ‘command and control’ leadership mode to one of ‘align and collaborate’ is transforming how organizations are managed. 


How ‘Cheap Fakes’ Exploit Our Psychological Vulnerabilities

Cheap fakes exploit a range of psychological vulnerabilities, like fear, greed, and curiosity. These vulnerabilities make social engineering attacks prevalent across the board -- over two-thirds of data breaches involve a human element -- but cheap fakes are particularly effective at leveraging them. This is because many people are unable to identify manipulated media, particularly when it aligns with their preconceptions and existing biases. According to a study published in Science, false news spreads much faster than accurate information on social media. Researchers found several explanations for this phenomenon: false news tends to be more novel than the truth, and the stories elicited “fear, disgust, and surprise in replies.” Cheap fakes rely on these emotions to spread quickly and capture victims’ attention -- they create inflammatory imagery, aim to increase political and social division, and often present fragments of authentic content to produce the illusion of legitimacy. At a time when cheap fakes and deepfakes are rapidly proliferating, IT teams must emphasize a core principle of cybersecurity: Verify before you trust. Employees should be taught to doubt their initial reactions to digital content, particularly when that content is sensational, coercive, or divisive.... 


Cloud vs. On-Prem: Comparing Long-Term Costs

You’ve seen many reports of companies saving millions of dollars by moving a portion or majority of their workloads out of the cloud. When leaving the cloud becomes financially viable, the price point will depend on your workload, business requirements, and other factors, but here are some basic guidelines to consider. Big cloud providers have historically made moving all your data out of their cloud cost-prohibitive. Saving millions of dollars on computing will not make sense if it costs millions to move your data. ... You would have to reduce your cloud spend by 90-96% to save as much money as buying hardware. Reserved instances and spot instances may save money, but never that much. Budgeting hardware and colocation space will be easier to engineer and more predictable for your long-term projected spending. Spending this much money also means you are likely continuously upgrading based on your cloud provider’s upgrade requirements. You will frequently upgrade operating systems, database versions, Kubernetes clusters, and serverless runtimes. And you have no agency to delay them until it works best for your business. But saving on personnel costs isn’t the only benefit. A frequent phrase when using the cloud is “opportunity cost.”
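
A back-of-the-envelope way to run this comparison yourself is sketched below in Python; every figure is an invented assumption, not the article's data, and the point is simply to amortize hardware, colocation, and extra staffing over the same horizon you use for projected cloud spend before deciding whether repatriation pays off.

```python
# All figures are illustrative assumptions for a three-year horizon.
YEARS = 3
cloud_monthly_spend = 500_000      # current cloud bill per month

hardware_capex = 1_000_000         # servers, storage, network bought up front
colocation_monthly = 20_000        # rack space, power, cooling
extra_ops_staff_annual = 300_000   # additional engineers to run the estate

cloud_total = cloud_monthly_spend * 12 * YEARS
onprem_total = (hardware_capex
                + colocation_monthly * 12 * YEARS
                + extra_ops_staff_annual * YEARS)

print(f"Cloud over {YEARS} years:   ${cloud_total:,.0f}")
print(f"On-prem over {YEARS} years: ${onprem_total:,.0f}")
print(f"Cloud discount needed just to break even: {1 - onprem_total / cloud_total:.0%}")
```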


Data Center Regulation Trends to Watch in 2025

Governments are increasingly focused on creating new or updated regulations to strengthen digital resiliency and cybersecurity because of the growing importance of IT in critical services, rising geopolitical tensions, explosion of cyberattacks and increased outsourcing to cloud, according to the Uptime Institute. EU’s DORA requires the finance industry to establish a risk management framework, which includes business continuity and disaster recovery plans that include data backup and recovery; incident reporting; digital operational resilience testing; information sharing of cyber threats with other financial institutions; and managing the risk of their third-party information and communications technology (ICT) providers, such as cloud providers. “You’ve got to make sure your data center is robust, resilient, and that it doesn’t go down. And if it does go down, you’re responsible for it,” said Rahiel Nasir, IDC’s associate research director of European Cloud and lead analyst of worldwide digital sovereignty. Financial businesses will have to ensure their third-party providers meet regulatory requirements by negotiating it into their contracts. As a result, both the finance sector and their service providers will need to implement the tools and procedures necessary to comply with DORA, an IDC report said.


How AI will shape the next generation of cyber threats

In essence, AI turns advanced attack strategies into point-and-click operations, removing the need for deep technical knowledge. Attackers won’t need to write custom code or conduct in-depth research to exploit vulnerabilities. Instead, AI systems will analyze target environments, find weaknesses and even adapt attack patterns in real time without requiring much input from the user. This shift greatly widens the pool of potential attackers. Organizations that have traditionally focused on defending against nation-state actors and professional hacker groups will now have to contend with a much broader range of threats. Eventually, AI will empower individuals with limited tech knowledge to execute attacks rivaling those of today’s most advanced adversaries. To stay ahead, defenders must match this acceleration with AI-powered defenses that can predict, detect and neutralize threats before they escalate. In this new environment, success will depend not just on reacting to attacks but on anticipating them. Organizations will need to adopt predictive AI capabilities that can evolve alongside the rapidly shifting threat landscape, staying one step ahead of attackers who now have unprecedented power at their fingertips.


Navigating Privacy and Ethics in the Military use of AI

The report articulates the importance of integrating data governance into the development and deployment of military AI systems, and stresses that as military AI becomes increasingly central to national defense, so too does the need for clear, ethical, and transparent practices surrounding the data used to train these systems. “Data plays a critical role in the training, testing, and use of artificial intelligence, including in the military domain,” the report says, emphasizing that “research and development for AI-enabled military solutions is proceeding at breakneck speed” and therefore “the important role data plays in shaping these technologies have implications and, at times, raises concerns.” The report says “these issues are increasingly subject to scrutiny and range from difficulty in finding or creating training and testing data relevant to the military domain, to (harmful) biases in training data sets, as well as their susceptibility to cyberattacks and interference (for example, data poisoning),” and points out that “pathways and governance solutions to address these issues remain scarce and very much underexplored.” Afina and Sarah Grand-Clément said the risk of data breaches or unauthorized access to military data also is a critical concern. 


AI in Cybersecurity: Balancing Innovation with Risk

Generative AI has advanced to a point where it can produce unique, grammatically sound, and contextually relevant content. Cybercriminals utilise this technology to create convincing phishing emails, text messages, and other forms of communication that mimic legitimate interactions. Unlike traditional phishing attempts, which often exhibit suspicious language or grammatical errors, AI-generated content can evade detection and manipulate targets more effectively. Furthermore, AI can produce deepfake videos or audio recordings that convincingly impersonate trusted individuals, increasing the likelihood of successful scams. ... AI, particularly Machine Learning (ML) and deep learning, can be instrumental in detecting suspicious activities and identifying abnormal patterns in network traffic. AI can establish a baseline of normal behavior by analysing vast datasets, including traffic trends, application usage, browsing habits, and other network activity. This baseline can serve as a guide for spotting anomalies and potential threats. AI’s ability to process large volumes of data in real-time means it can flag suspicious activities faster and more accurately, enabling immediate remediation and minimising the chances of a successful cyberattack. 
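
One simple way to picture the "baseline of normal behavior" idea is sketched below in Python: the mean and spread of a traffic metric are learned from historical samples, and new observations that deviate by more than a few standard deviations are flagged. Real deployments use far richer features and ML models; the numbers here are invented purely for illustration.

```python
from statistics import mean, stdev

# Invented historical samples: requests per minute observed during normal operation.
baseline_samples = [410, 395, 430, 402, 388, 415, 420, 405, 398, 412]

baseline_mean = mean(baseline_samples)
baseline_std = stdev(baseline_samples)

def is_anomalous(observation, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from the baseline."""
    z_score = abs(observation - baseline_mean) / baseline_std
    return z_score > threshold

for rpm in (418, 640, 35):
    status = "ANOMALY" if is_anomalous(rpm) else "normal"
    print(f"{rpm:>4} req/min -> {status}")
```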



Quote for the day:

“It’s better to look ahead and prepare, than to look back and regret.” -- Jackie Joyner Kersee

Daily Tech Digest - November 06, 2024

Enter the ‘Whisperverse’: How AI voice agents will guide us through our days

Within the next few years, an AI-powered voice will burrow into your ears and take up residence inside your head. It will do this by whispering guidance to you throughout your day, reminding you to pick up your dry cleaning as you walk down the street, helping you find your parked car in a stadium lot and prompting you with the name of a coworker you pass in the hall. It may even coach you as you hold conversations with friends and coworkers, or when out on dates, give you interesting things to say that make you seem smarter, funnier and more charming than you really are. ... Most of these devices will be deployed as AI-powered glasses because that form-factor gives the best vantage point for cameras to monitor our field of view, although camera-enabled earbuds will be available too. The other benefit of glasses is that they can be enhanced to display visual content, enabling the AI to provide silent assistance as text, images, and realistic immersive elements that are integrated spatially into our world. Also, sensored glasses and earbuds will allow us to respond silently to our AI assistants with simple head nod gestures of agreement or rejection, as we naturally do with other people. ... On the other hand, deploying intelligent systems that whisper in your ears as you go about your life could easily be abused as a dangerous form of targeted influence.


How to Optimize Last-Mile Delivery in the Age of AI

Technology is at the heart of all advancements in last-mile delivery. For instance, a typical map application gives the longitude and latitude of a building — its location — and a central access point. That isn't enough data when it comes to deliveries. In addition to how much time it takes to drive or walk from point A to point B, it's also essential for a driver to understand what to do at point B. At an apartment complex, for example, they need to know what units are in each building and on which level, whether to use a front, back, or side entrance, how to navigate restricted or gated areas, and how to access parking and loading docks or package lockers. Before GenAI, third-party vendors usually acquired this data, sold it to companies, and applied it to map applications and routing algorithms to provide delivery estimates and instructions. Now, companies can use GenAI in-house to optimize routes and create solutions to delivery obstacles. Suppose the data surrounding an apartment complex is ambiguous or unclear. For instance, there may be conflicting delivery instructions — one transporter used a drop-off area, and another used a front door. Or perhaps one customer was satisfied with their delivery, but another parcel delivered to the same location was damaged or stolen. 


Cloud providers make bank with genAI while projects fail

Poor data quality is a central factor contributing to project failures. As companies venture into more complex AI applications, the demand for tailored, high-quality data sets has exposed deficiencies in existing enterprise data. Although most enterprises understood that their data could have been better, they haven’t known how bad it was. For years, enterprises have been kicking the data can down the road, unwilling to fix it, while technical debt gathered. AI requires excellent, accurate data that many enterprises don’t have—at least, not without putting in a great deal of work. This is why many enterprises are giving up on generative AI. The data problems are too expensive to fix, and many CIOs who know what’s good for their careers don’t want to take it on. The intricacies in labeling, cleaning, and updating data to maintain its relevance for training models have become increasingly challenging, underscoring another layer of complexity that organizations must navigate. ... The disparity between the potential and practicality of generative AI projects is leading to cautious optimism and reevaluations of AI strategies. This pushes organizations to carefully assess the foundational elements necessary for AI success, including robust data governance and strategic planning—all things that enterprises are considering too expensive and too risky to deploy just to make AI work.


Why cybersecurity needs a better model for handling OSS vulnerabilities

Identifying vulnerabilities and navigating vulnerability databases is of course only part of the dependency problem; the real work lies in remediating identified vulnerabilities impacting systems and software. Aside from general bandwidth challenges and competing priorities among development teams, vulnerability management also suffers from remediation challenges, such as the real potential that implementing changes and updates will impact functionality or cause business disruptions. ... Reachability analysis “offers a significant reduction in remediation costs because it lowers the number of remediation activities by an average of 90.5% (with a range of approximately 76–94%), making it by far the most valuable single noise-reduction strategy available,” according to the Endor report. While the security industry can beat the secure-by-design drum until it is blue in the face and try to shame organizations into sufficiently prioritizing security, the reality is that our best bet is having organizations focus on risks that actually matter. ... In a world of competing interests, with organizations rightfully focused on business priorities such as speed to market, feature velocity, revenue and more, having developers stop wasting time and focus on the 2% of vulnerabilities that truly present risks to their organizations would be monumental.
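
Reachability analysis itself is tool-specific, but the core idea can be shown in a few lines: only flag a vulnerable dependency function if it is actually reachable from the application's entry points in the call graph. The call-graph structure, the entry points, and the findings below are invented for illustration and do not reflect Endor Labs' implementation or any particular scanner.

from collections import deque

def reachable_functions(call_graph: dict[str, list[str]], entry_points: list[str]) -> set[str]:
    # Breadth-first walk of the call graph starting from the app's entry points.
    seen = set(entry_points)
    queue = deque(entry_points)
    while queue:
        fn = queue.popleft()
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

def prioritize(findings: dict[str, str], call_graph: dict, entry_points: list[str]) -> list[str]:
    # Keep only CVEs whose vulnerable function the application can actually reach.
    reachable = reachable_functions(call_graph, entry_points)
    return [cve for cve, vuln_fn in findings.items() if vuln_fn in reachable]

if __name__ == "__main__":
    graph = {
        "app.main": ["libA.parse"],
        "libA.parse": ["libB.decode"],
        # libC.eval_template ships in a dependency but is never called by the app.
    }
    findings = {"CVE-2024-0001": "libB.decode", "CVE-2024-0002": "libC.eval_template"}
    print(prioritize(findings, graph, ["app.main"]))  # only CVE-2024-0001 survives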


The new calling of CIOs: Be the moral arbiter of change

Unfortunately, establishing a strategy for democratizing innovation through gen AI is far from straightforward. Many factors, including governance, security, ethics, and funding, are important, and it’s hard to establish ground rules. ... What’s clear is that tech-led innovation is no longer the sole preserve of the IT department. Fifteen years ago, IT was often a solution searching for a problem. CIOs bought technology systems, and the rest of the business was expected to put them to good use. Today, CIOs and their teams speak with their peers about their key challenges and suggest potential solutions. But gen AI, like cloud computing before it, has also made it much easier for users to source digital solutions independently of the IT team. That high level of democratization doesn’t come without risks, and that’s where CIOs, as the guardians of enterprise technology, play a crucial role. IT leaders understand the pain points around governance, implementation, and security. Their awareness means responsibility for AI and other emerging technologies has become part of a digital leader’s ever-widening role, says Rahul Todkar, head of data and AI at travel specialist Tripadvisor.


5 Strategies For Becoming A Purpose-Driven Leader

Purpose-driven leaders are fueled by more than sheer ambition; they are driven by a commitment to make a meaningful impact. They inspire those around them to pursue a shared purpose each day. This approach is especially powerful in today’s workforce, where 70% of employees say their sense of purpose is closely tied to their work, according to a recent report by McKinsey. Becoming a purpose-driven leader requires clarity, strategic foresight, and a commitment to values that go beyond the bottom line. ... Aligning your values with your leadership style and organizational goals is essential for authentic leadership. “Once you have a firm grasp of your personal values, you can align them with your leadership style and organizational goals. This alignment is crucial for maintaining authenticity and ensuring that your decisions reflect your deeper sense of purpose,” Blackburn explains. ... Purpose-driven leaders embody the values and behaviors they wish to see reflected in their teams. Whether through ethical decision-making, transparency, or resilience in the face of challenges, purpose-driven leaders set the tone for how others in the organization should act. By aligning words with actions, leaders build credibility and trust, which are the foundations of sustainable success.


Chaos Engineering: The key to building resilient systems for seamless operations

The underlying philosophy of Chaos Engineering is to encourage building systems that are resilient to failures. This means incorporating redundancy into system pathways so that the failure of one path does not disrupt the entire service. Additionally, self-healing mechanisms can be developed, such as automated systems that detect and respond to failures without the need for human intervention. These measures help ensure that systems can recover quickly from failures, reducing the likelihood of long-lasting disruptions. To effectively implement Chaos Engineering and avoid incidents like the payments outage, organisations can start by formulating hypotheses about potential system weaknesses and failure points. They can then design chaos experiments that safely simulate these failures in controlled environments. Tools such as Chaos Monkey, Gremlin, or Litmus can automate the process of failure injection and monitoring, enabling engineers to observe system behaviour in response to simulated disruptions. By collecting and analysing data from these experiments, organisations can learn from the failures and use these insights to improve system resilience. This process should be iterative, and organisations should continuously run new experiments and refine their systems based on the results.
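
Real fault injection is normally delegated to tools such as Chaos Monkey, Gremlin, or Litmus, but the hypothesis-experiment loop the article describes can be sketched in a few lines. Everything here (the checkout_service function, the injected failure rate, and the success threshold) is an invented stand-in, not the API of any of those tools.

import random

def inject_faults(func, failure_rate: float):
    # Wrap a service call so that a fraction of invocations fail, simulating
    # an unreliable downstream dependency during a controlled experiment.
    def wrapped(*args, **kwargs):
        if random.random() < failure_rate:
            raise ConnectionError("injected fault")
        return func(*args, **kwargs)
    return wrapped

def checkout_service(order_id: int) -> str:
    return f"order {order_id} confirmed"

def resilient_checkout(order_id: int, call, retries: int = 2) -> str:
    # A simple self-healing measure: retry, then fall back to a queued response.
    for _ in range(retries + 1):
        try:
            return call(order_id)
        except ConnectionError:
            continue
    return f"order {order_id} queued for later processing"

if __name__ == "__main__":
    # Hypothesis: with a 30% injected failure rate and two retries,
    # at least 95% of checkouts still complete successfully.
    flaky = inject_faults(checkout_service, failure_rate=0.3)
    results = [resilient_checkout(i, flaky) for i in range(1000)]
    confirmed = sum("confirmed" in r for r in results)
    print(f"confirmed: {confirmed / len(results):.1%} (hypothesis: >= 95%)")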


Shifting left with telemetry pipelines: The future of data tiering at petabyte scale

In the context of observability and security, shifting left means performing the analysis, transformation, and routing of logs, metrics, traces, and events far upstream, early in their lifecycle — a very different approach from the traditional “centralize then analyze” method. By integrating these processes earlier, teams can not only drastically reduce costs for otherwise prohibitive data volumes, but can also detect anomalies, performance issues, and potential security threats much sooner, before they become major problems in production. The rise of microservices and Kubernetes architectures has specifically accelerated this need, as the complexity and distributed nature of cloud-native applications demand more granular and real-time insights, and data that was once concentrated in a monolith is now spread across many localized data sets. ... As telemetry data continues to grow at an exponential rate, enterprises face the challenge of managing costs without compromising on the insights they need in real time, or on the data retention required for audit, compliance, or forensic security investigations. This is where data tiering comes in. Data tiering is a strategy that segments data into different levels based on its value and use case, enabling organizations to optimize both cost and performance.
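
As a rough illustration of the tiering idea, the routing rules below send high-value telemetry to a hot, searchable tier and bulk diagnostics to cheap object storage. The tier names, event fields, and thresholds are assumptions made up for this sketch rather than any vendor's pipeline configuration.

def assign_tier(event: dict) -> str:
    # Route each telemetry event to a storage tier based on its value and use case.
    if event.get("type") == "security" or event.get("severity") in ("error", "critical"):
        return "hot"        # real-time alerting and investigation
    if event.get("retention_reason") in ("audit", "compliance"):
        return "warm"       # queryable, but not on the real-time path
    return "cold"           # bulk debug/trace data, kept cheaply in object storage

if __name__ == "__main__":
    events = [
        {"type": "security", "severity": "critical", "msg": "failed login burst"},
        {"type": "app", "severity": "info", "retention_reason": "audit", "msg": "config change"},
        {"type": "app", "severity": "debug", "msg": "cache miss"},
    ]
    for e in events:
        print(assign_tier(e), "-", e["msg"])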


A Transformative Journey: Powering the Future with Data, AI, and Collaboration

The advancements in industrial data platforms and contextualization have been nothing short of remarkable. By making sense of data from different systems—whether through 3D models, images, or engineering diagrams—Cognite is enabling companies to build a powerful industrial knowledge graph, which can be used by AI to solve complex problems faster and more effectively than ever before. This new era of human-centric AI is not about replacing humans but about enhancing their capabilities, giving them the tools to make better decisions, faster. Without buy-in from the people who will be affected by any new innovation or technology, success is unlikely. Engaging these individuals early in the process to solve the issues they find challenging, mundane, or highly repetitive is critical to driving adoption and creating internal champions who further catalyze it. In a fascinating case study shared by one of Cognite’s partners, we learned about the transformative potential of data and AI in the chemical manufacturing sector. A plant operator described how the implementation of mobile devices powered by Cognite’s platform has drastically improved operational efficiency.


Four Steps to Balance Agility and Security in DevSecOps

Tools like OWASP ZAP and Burp Suite can be integrated into continuous integration/continuous delivery (CI/CD) pipelines to automate security testing. For example, LinkedIn uses Ansible to automate its infrastructure provisioning, which reduces deployment times by 75%. By automating security checks, LinkedIn ensures that its rapid delivery processes remain secure. Automating security not only enhances speed but also improves the overall quality of software by catching issues before they reach production. Automated tools can perform static code analysis, vulnerability scanning and penetration testing without disrupting the development cycle, helping teams deploy secure software faster. ... As organizations look to the future, artificial intelligence (AI) and machine learning (ML) will play a crucial role in enhancing both security and agility. AI-driven security tools can predict potential vulnerabilities, automate incident response and even self-heal systems without human intervention. This not only improves security but also reduces the time spent on manual security reviews. AI-powered tools can analyze massive amounts of data, identifying patterns and potential threats that human teams may overlook. This can reduce downtime and the risk of cyberattacks, ultimately allowing organizations to deploy faster and more securely.
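
A minimal sketch of wiring an automated scan into a pipeline gate might look like the following. It shells out to OWASP ZAP's baseline scan and fails the build on a nonzero exit code; the target URL, the choice of the ZAP Docker image, and the exit-code handling are assumptions to illustrate the pattern, not a drop-in configuration, so check your ZAP version's documentation before relying on them.

import subprocess
import sys

def run_baseline_scan(target_url: str) -> int:
    # Run OWASP ZAP's packaged baseline scan in Docker against a staging deployment
    # and return the scanner's exit code to the CI job.
    cmd = [
        "docker", "run", "--rm", "-t",
        "ghcr.io/zaproxy/zaproxy:stable",
        "zap-baseline.py", "-t", target_url,
    ]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    # Gate the pipeline: a nonzero exit code (warnings or failures found) blocks the release.
    sys.exit(run_baseline_scan("https://staging.example.com"))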


Quote for the day:

"If you are truly a leader, you will help others to not just see themselves as they are, but also what they can become." -- David P. Schloss