Daily Tech Digest - August 28, 2024

Improving healthcare fraud prevention and patient trust with digital ID

Digital trust involves the use of secure and transparent technologies to protect patient data while enhancing communication and engagement. For example, digital consent forms and secure messaging platforms allow patients to communicate with their healthcare providers conveniently while ensuring that their data remains protected. Furthermore, integrating digital trust technology into healthcare systems can streamline administrative processes, reduce paperwork, and minimize the chances of errors, according to a blog post by Five Faces. This not only enhances operational efficiency but also improves the overall patient experience by reducing wait times and simplifying access to medical services. ... These smart cards, embedded with secure microchips, store vital patient information and health insurance details, enabling healthcare providers to access accurate and up-to-date information during consultations. The use of chip-based ID cards reduces the risk of identity theft and fraud, as these cards are difficult to duplicate and require secure authentication methods. This technology ensures that only authorized individuals can access patient information, thereby protecting sensitive data from unauthorized access.


A CEO's Take on AI in the Workforce

Those ignoring the AI transformation and not upskilling their staff are missing out on untapped data that can provide insights into other areas of opportunity for their business. Making minimal-to-no investments in emerging technology merely delays the inevitable and puts companies at a disadvantage relative to their competitors. Conversely, being too aggressive with AI can lead to security vulnerabilities or critical talent loss. While AI integration is critical to accelerating business outputs, doing so without moderators, data safeguards, and regulators to keep organizations in line with data governance and compliance exposes companies to security issues. ... AI should not replace people, but rather present an opportunity to better utilize them. AI can help solve time-management and efficiency issues across organizations, allowing skilled people to focus on creative and strategic roles or projects that drive better business value. The role of AI should be to automate time-consuming, repetitive, administrative tasks, leaving individuals to be more calculated and intentional with their time.


The promise of open banking: How data sharing is changing financial services

The benefits of open banking are multifaceted. Customers gain greater control over their financial data, allowing them to securely share it with authorized providers. This empowers them to explore a wider range of customized financial products and services, ultimately promoting financial stability and well-being. Additionally, open banking fosters innovation within the industry, as Fintech companies leverage customer-consented data to develop cutting-edge solutions. The Account Aggregator (AA) framework, regulated by the Reserve Bank of India (RBI), is a cornerstone of open banking in India. AAs act as trusted intermediaries, allowing users to consolidate their financial data from various sources, including banks, mutual funds, and insurance companies, into a single platform. ... APIs empower platforms to aggregate FD offerings from a multitude of banks across India. This provides investors with a comprehensive view of available options, allowing them to compare interest rates, tenures, minimum deposit requirements, and other features within a single platform. This transparency empowers informed decision-making, enabling investors to select the FD that best aligns with their risk appetite and financial goals.
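The comparison step described above can be sketched in a few lines. This is a minimal illustration, not any real aggregator's API: the offer fields (`rate_pct`, `tenure_months`, `min_deposit`) and the bank names are hypothetical stand-ins for data an aggregator would pull from bank APIs.

```python
# Sketch of ranking fixed-deposit (FD) offers aggregated from multiple banks.
# All field names and values here are illustrative.

def best_fd_offers(offers, budget, max_tenure_months):
    """Return offers affordable within budget and tenure, highest rate first."""
    eligible = [
        o for o in offers
        if o["min_deposit"] <= budget and o["tenure_months"] <= max_tenure_months
    ]
    return sorted(eligible, key=lambda o: o["rate_pct"], reverse=True)

offers = [
    {"bank": "Bank A", "rate_pct": 7.1, "tenure_months": 12, "min_deposit": 10000},
    {"bank": "Bank B", "rate_pct": 7.4, "tenure_months": 24, "min_deposit": 25000},
    {"bank": "Bank C", "rate_pct": 6.8, "tenure_months": 6, "min_deposit": 5000},
]

ranked = best_fd_offers(offers, budget=20000, max_tenure_months=12)
```

With these sample constraints, Bank B is filtered out by its minimum deposit, and the remaining offers are sorted by rate so the investor sees the best eligible option first.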


What are the realistic prospects for grid-independent AI data centers in the UK?

Already, colo companies looking to develop in the UK are evaluating on-site gas engine power generation and CHP (combined heat and power). To date, UK CHP projects have been hampered by a lack of grid capacity, and microgrid developments are viewed as a solution to this. CHP and microgrids should also make data center developments more appealing to local government planning departments. ... Data center developments have hit front-line politics, with Rachel Reeves, the new UK Labour government’s Chancellor of the Exchequer (Finance Minister), citing data center infrastructure and reform of planning law as critical to growing the country’s economy. Some projects that were denied planning permission already look likely to be reconsidered, with reports that Deputy Prime Minister Angela Rayner had “recovered two planning appeals for data centers in Buckinghamshire and Hertfordshire.” It seems clear that meeting data center capacity demand for AI, cloud, and other digital services will require on-site power generation in some form or other.


Why Every IT Leader Needs a Team of Trusted Advisors

When seeking advisors, look for individuals with the time and willingness to join your kitchen cabinet, Kelley says. "Be mindful of their schedules and obligations, since they are doing you a favor," he notes. Additionally, if you're offering any perks, such as paid meals, travel reimbursement, or direct monetary payments, let them know upfront. Such bonuses are relatively rare, however. "More than likely, you’re talking about individual or small group phone calls or meetings." Above all, be honest and open with your team members. "Let them know what kind of help you need and the time frame you are working under," Kelley says. "If you've heard different or contradictory advice from other sources, bring it up and get their reaction," he recommends. Keep in mind that an advisory team is a two-way relationship. Kelley recommends personalizing each connection with an occasional handwritten note, book, lunch, or ticket to a concert or sporting event. On the other hand, if you decide to ignore their input or advice, you need to explain why, he suggests. Otherwise, they might conclude that being a team participant is a waste of time. Also be sure to help your team members whenever they need advice or support. 


Why CI and CD Need to Go Their Separate Ways

Continuous promotion is a concept designed to bridge the gap between CI and CD, addressing the limitations of traditional CI/CD pipelines when used with modern technologies like Kubernetes and GitOps. The idea is to insert an intermediary step that focuses on promotion of artifacts based on predefined rules and conditions. This approach allows more granular control over the deployment process, ensuring that artifacts are promoted only when they meet specific criteria, such as passing certain tests or receiving necessary approvals. By doing so, continuous promotion decouples the CI and CD processes, allowing each to focus on its core responsibilities without overextension. ... Introducing a systematic step between CI and CD ensures that only qualified artifacts progress through the pipeline, reducing the risk of faulty deployments. This approach allows the implementation of detailed rule sets, which can include criteria such as successful test completions, manual approvals or compliance checks. As a result, continuous promotion provides greater control over the deployment process, enabling teams to automate complex decision-making processes that would otherwise require manual intervention.
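The rule-driven promotion step described above can be sketched as a simple gate: an artifact advances from CI toward CD only when every predefined condition holds. The rule names and artifact fields below are illustrative, not any particular tool's schema.

```python
# Minimal sketch of a continuous-promotion gate between CI and CD.
# Rules and artifact fields are hypothetical examples.

PROMOTION_RULES = {
    "unit_tests_passed": lambda a: a["tests"]["unit"] == "passed",
    "integration_tests_passed": lambda a: a["tests"]["integration"] == "passed",
    "security_scan_clean": lambda a: a["vulnerabilities"] == 0,
    "manually_approved": lambda a: a["approved_by"] is not None,
}

def evaluate_promotion(artifact):
    """Return (promote?, list of failed rule names)."""
    failed = [name for name, rule in PROMOTION_RULES.items() if not rule(artifact)]
    return (len(failed) == 0, failed)

artifact = {
    "image": "registry.example.com/app:1.4.2",
    "tests": {"unit": "passed", "integration": "passed"},
    "vulnerabilities": 0,
    "approved_by": "release-manager",
}
ok, failures = evaluate_promotion(artifact)
```

Because the rules live in one place between the two pipelines, CI stays focused on building and testing artifacts while CD only ever sees artifacts that have already qualified.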


CIOs listen up: either plan to manage fast-changing certificates, or fade away

Even when organizations finally decide to set policies and standardize security for new deployments, remediating the existing deployments is a huge effort, and in the modern stack, there’s no dedicated operations team, he says. That makes it more important for CIOs to take ownership of the problem, Cairns points out. “Especially in larger, more complex and global organizations, the magnitude of trying to push these things through the organization is often underestimated,” he says. “Some of that is having a good handle on the culture and how to address these things in terms of messaging, communications, enforcement of the right policies and practices, and making sure you’ve got the proper stakeholder buy-in at the various points in this process — a lot of governance aspects.” ... Many large organizations will soon need to revoke and reprovision TLS certificates at scale. One in five Fortune 1000 companies use Entrust as their certificate authority, and from November 1, 2024, Chrome will follow Firefox in no longer trusting TLS certificates from Entrust because of a pattern of compliance failures, which the CA argues were, ironically, sometimes caused by enterprise customers asking for more time to deal with revocation.
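A CA-distrust event like this typically turns into an inventory triage exercise: certificates signed by the distrusted CA on or after the browser cutoff are already untrusted, while older ones keep working until expiry but must be migrated to another CA before renewal. The sketch below illustrates that triage over a hypothetical inventory; the hostnames and dates are invented, and the cutoff should be confirmed against the relevant browser policy.

```python
# Sketch of triaging a TLS certificate inventory ahead of a CA distrust.
# Hosts, issuers, and dates are illustrative.
from datetime import date

CUTOFF = date(2024, 11, 1)  # cutoff cited in the article; verify per browser policy

def triage(cert, distrusted_issuer="Entrust"):
    if cert["issuer"] != distrusted_issuer:
        return "ok"
    if cert["issued"] >= CUTOFF:
        return "untrusted"       # browsers will reject it outright
    return "migrate-at-renewal"  # trusted for now; renew with another CA

inventory = [
    {"host": "www.example.com", "issuer": "Entrust", "issued": date(2024, 12, 2)},
    {"host": "api.example.com", "issuer": "Entrust", "issued": date(2024, 3, 5)},
    {"host": "cdn.example.com", "issuer": "Other CA", "issued": date(2024, 12, 2)},
]
report = {c["host"]: triage(c) for c in inventory}
```

Even a toy report like this makes the governance point concrete: the hard part is not the check itself but knowing where every certificate lives, which is exactly the ownership problem Cairns describes.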


Effortless Concurrency: Leveraging the Actor Model in Financial Transaction Systems

In a financial transaction system, the data flow for handling inbound payments involves multiple steps and checks to ensure compliance, security, and accuracy. However, potential failure points exist throughout this process, particularly when external systems impose restrictions or when the system must dynamically decide on the course of action based on real-time data. ... Implementing distributed locks is inherently more complex, often requiring external systems like ZooKeeper, Consul, Hazelcast, or Redis to manage the lock state across multiple nodes. These systems need to be highly available and consistent to prevent the distributed lock mechanism from becoming a single point of failure or a bottleneck. ... In this messaging-based model, communication between different parts of the system occurs through messages. This approach enables asynchronous communication, decoupling components and enhancing flexibility and scalability. Messages are managed through queues and message brokers, which ensure orderly transmission and reception of messages. ... Ensuring message durability is crucial in financial transaction systems because it allows the system to replay a message if the processor fails to handle the command due to issues like external payment failures, storage failures, or network problems.
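The actor idea described above can be sketched without any framework: an actor owns its state, receives messages asynchronously into a mailbox, and processes them strictly one at a time, which is why no locks are needed. The sketch below is deliberately single-threaded and minimal; the message shapes and account fields are illustrative, and a real system would add a durable broker so failed messages can be re-queued and replayed.

```python
# Minimal single-threaded sketch of the actor pattern for account handling.
# Message and state fields are hypothetical.
from collections import deque

class AccountActor:
    def __init__(self, account_id, balance=0):
        self.account_id = account_id
        self.balance = balance
        self.mailbox = deque()

    def tell(self, message):
        self.mailbox.append(message)   # asynchronous send: just enqueue

    def run(self):
        # Drain the mailbox one message at a time; state is touched only here,
        # so concurrent senders never race on the balance.
        while self.mailbox:
            msg = self.mailbox.popleft()
            if msg["type"] == "credit":
                self.balance += msg["amount"]
            elif msg["type"] == "debit" and self.balance >= msg["amount"]:
                self.balance -= msg["amount"]

actor = AccountActor("acct-42", balance=100)
actor.tell({"type": "credit", "amount": 50})
actor.tell({"type": "debit", "amount": 120})
actor.run()
```

The sequential mailbox is the whole trick: ordering and exclusivity come from the queue rather than from a distributed lock, which is what makes the model attractive compared with ZooKeeper- or Redis-backed locking.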


Hundreds of LLM Servers Expose Corporate, Health & Other Online Data

Flowise is a low-code tool for building all kinds of LLM applications. It's backed by Y Combinator, and sports tens of thousands of stars on GitHub. Whether it be a customer support bot or a tool for generating and extracting data for downstream programming and other tasks, the programs that developers build with Flowise tend to access and manage large quantities of data. It's no wonder, then, that the majority of Flowise servers are password-protected. ... Leaky vector databases are even more dangerous than leaky LLM builders, as they can be tampered with in such a way that does not alert the users of AI tools that rely on them. For example, instead of just stealing information from an exposed vector database, a hacker can delete or corrupt its data to manipulate its results. One could also plant malware within a vector database such that when an LLM program queries it, it ends up ingesting the malware. ... To mitigate the risk of exposed AI tooling, Deutsch recommends that organizations restrict access to the AI services they rely on, monitor and log the activity associated with those services, protect sensitive data trafficked by LLM apps, and always apply software updates where possible.


Generative AI vs. Traditional AI

Traditional AI, often referred to as “symbolic AI” or “rule-based AI,” emerged in the mid-20th century. It relies on predefined rules and logical reasoning to solve specific problems. These systems operate within a rigid framework of human-defined guidelines and are adept at tasks like data classification, anomaly detection, and decision-making processes based on historical data. In sharp contrast, generative AI is a more recent development that leverages advanced ML techniques to create new content. This form of AI does not follow predefined rules but learns patterns from vast datasets to generate novel outputs such as text, images, music, and even code. ... Traditional AI relies heavily on rule-based systems and predefined models to perform specific tasks. These systems operate within narrowly defined parameters, focusing on pattern recognition, classification, and regression through supervised learning techniques. Data fed into these models is typically structured and labeled, allowing for precise predictions or decisions based on historical patterns. In contrast, generative AI uses neural networks and advanced ML models to produce human-like content. This approach leverages unsupervised or semi-supervised learning techniques to understand underlying data distributions.



Quote for the day:

"Opportunities don't happen. You create them." -- Chris Grosser

Daily Tech Digest - August 27, 2024

Quantum computing attacks on cloud-based systems

Enterprises should indeed be concerned about the advancements in quantum computing. Quantum computers have the potential to break widely used encryption protocols, posing risks to financial data, intellectual property, personal information, and even national security. However, the response must go well beyond NIST releasing quantum-resistant algorithms; it’s also crucial for enterprises to start transitioning today to new forms of encryption to future-proof their data security. As other technology advancements arise and enterprises run from one protection to another, the work will begin to resemble Whac-A-Mole. I suspect many enterprises will be unable to whack that mole in time, will lose the battle, and be forced to absorb a breach. ... Although quantum computing represents a groundbreaking shift in computational capabilities, the way we address its challenges transcends this singular technology. We clearly need a multidisciplinary approach to managing and leveraging all new advancements. Organizations must be able to anticipate technological disruptions like quantum computing and also become adaptable enough to implement solutions rapidly.


QA and Testing: The New Cybersecurity Frontline

The convergence of Security, QA, and DevOps is pivotal in the evolution of software security. These teams, often interdependent, share the common objective of minimizing software defects. While security teams may not possess deep QA expertise and QA professionals might lack cybersecurity specialization, their collaborative efforts are essential for an airtight security approach. ... Automated testing tools can quickly identify common vulnerabilities and ensure that security measures are consistently applied across all code changes. Meanwhile, manual testing allows for more nuanced assessments, particularly in identifying complex issues that automated tools might miss. The best QA processes rely on both methods working in concert to ensure consistent and comprehensive testing coverage for all releases. While QA focuses on identifying and rectifying functional bugs, cybersecurity experts concentrate on vulnerabilities and weaknesses that could be exploited. By incorporating security testing, such as Mobile Application Security Testing (MAST), into the QA process, teams can proactively address security risks, recognize the importance of security, and prioritize threat prevention alongside quality improvements, enhancing the overall quality and reliability of the software.


Bridging the Divide: How to Foster the CIO-CFO Partnership

Considering today’s evolving business and regulatory landscape, such as the SEC Cybersecurity Ruling and internal focus on finance transformation, a strong CIO-CFO relationship is especially critical. For cybersecurity, the CIO historically focused on managing the organization's technological infrastructure and developing robust security measures, while the CFO concentrated on financial oversight and regulatory compliance. However, the SEC's ruling mandates the timely disclosure of material cybersecurity incidents, requiring a bridge between roles and the need for closer collaboration. The new regulation demands a seamless integration of the CIO’s expertise in identifying and assessing cyber threats with the CFO’s experience in understanding financial implications and regulatory requirements. This means cybersecurity is no longer seen as solely a technology issue but as a critical part of financial risk management and corporate governance. By working closely together, the CIO and CFO can create clear communication channels, shared responsibilities, and joint accountability for incident response and disclosure processes. 


Rising cloud costs leave CIOs seeking ways to cope

Cloud costs have risen for many of CGI’s customers in the past year. Sunrise Banks, which operates community banks and a fintech service, has also seen cloud costs increase recently, says CIO Jon Sandoval. The company is a recent convert to cloud computing; it replaced its own data centers with the cloud just over a year ago, he says. Cloud providers aren’t the only culprits, he says. “I’ve seen increases from all of our applications and services that we procure, and a lot of that’s just dealing with the high levels of inflation that we’ve experienced over the past couple years,” he adds. “Labor, cost of goods — everything has gotten more expensive.” ... Cloud cost containment requires “assertive and sometimes aggressive” measures, adds Trude Van Horn, CIO and executive vice president at Rimini Street, an IT and security strategy consulting firm. Van Horn recommends that organizations name a cloud controller, whose job is to contain cloud costs. “The notion of a cloud controller requires a savvy and assertive individual — one who knows a lot about cloud usage and your particular cloud landscape and is responsible to monitor trends, look for overages, manage against the budget,” she says.


Zero-Touch Provisioning for Power Management Deployments

At the heart of ZTP lies the Dynamic Host Configuration Protocol (DHCP), a foundational network protocol that dynamically assigns IP addresses and other network configuration parameters to devices (clients) on a network, facilitating their communication within the network and with external systems while simplifying network administration. DHCP's capabilities extend beyond basic IP address assignment by providing various configuration details to devices via DHCP options. These options are instrumental in ZTP, allowing devices to automatically receive critical configuration information, including network settings, server addresses, and paths to configuration files. By utilizing DHCP options, devices can self-configure and integrate into the network seamlessly with "zero touch." With these DHCP functionalities, ZTP can automate the commissioning and configuration of critical power devices such as uninterruptible power systems (UPSs) and power distribution units (PDUs). Network interfaces can be leveraged in conjunction with ZTP for advanced connectivity and management features.
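On the wire, each DHCP option is a simple type-length-value triple: a one-byte option code, a one-byte length, then the value (per RFC 2132). Options 66 and 67 are the classic ZTP pair, carrying a provisioning server name and a boot/config file path. The sketch below packs and unpacks these two options; the server name and file name are invented examples.

```python
# Sketch of DHCP option TLV encoding (RFC 2132) as used in ZTP.
# Option 66 = server name, option 67 = bootfile/config path.
# The example values are hypothetical.

def encode_option(code, value):
    data = value.encode("ascii")
    return bytes([code, len(data)]) + data

def decode_options(blob):
    options, i = {}, 0
    while i < len(blob):
        code, length = blob[i], blob[i + 1]
        options[code] = blob[i + 2:i + 2 + length].decode("ascii")
        i += 2 + length
    return options

wire = encode_option(66, "ztp.example.net") + encode_option(67, "pdu-config.cfg")
opts = decode_options(wire)
```

A UPS or PDU that understands these options can read the server and file path from its very first DHCP exchange and fetch its configuration with no manual setup, which is the "zero touch" in ZTP.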


Exclusive: Gigamon CEO highlights importance of deep observability

The importance of deep observability is heightened as companies undergo digital transformation, often moving workloads into virtualized environments or public clouds. This shift can increase risks related to compliance and security. Gigamon's deep observability helps CIOs move application workloads without compromising security. "You can maintain your security posture regardless of where the workload moves," Buckley said. "That's a really powerful capability for organizations today." Overall, the deep observability market grew 61 percent in 2023 and continued to expand as organizations increasingly embrace hybrid cloud infrastructure, with a forecasted CAGR of 40 percent and projected revenue of nearly $2B in 2028, according to research firm 650 Group. "CIOs are moving workloads to wherever it makes the organization more effective and efficient, whether that's public cloud, on-premises, or a hybrid approach," Buckley explained. "The key is to ensure there's no increased risk to the organization, and the security profile remains constant."


Prioritizing your developer experience roadmap

Identifying the biggest points of friction will be an ongoing process, but, as he said, “A lot of times, engineers have been at a place for long enough where they’ve developed workarounds or become used to problems. It’s become a known experience. So we have to look at their workflow to see what the pebbles are and then remove them.” Successful platform teams pair program with their customers regularly. It’s an effective way to build empathy. Another thing to prioritize is asking: Is this affecting just one or two really vocal teams, or is it something systemic across the organization? ... Another way that platform engineering differs from the behemoth legacy platforms is that it’s not a giant one-off implementation. In fact, Team Topologies has the concept of Thinnest Viable Platform. You start with something small but sturdy that you can build your platform strategy on top of. For most companies, the biggest time-waster is finding things. Your first TVP is often either a directory of who owns what or better documentation. But don’t trust that instinct — ask first. Running a developer productivity survey will let you know what the biggest frustrations are for your developers. Ask targeted questions, not open-ended ones.


How to prioritize data privacy in core customer-facing systems

Before creating a data-sharing agreement with a third party, review the organization’s data storage, collection and transfer safeguards. Verify that the organization’s data protection policies are as robust as yours. Further, when drafting an eventual agreement, ensure that contract terms dictate a superior level of protection, delineating the responsibilities and expectations of each party in terms of compliance and cybersecurity. Due diligence at the outset of a relationship is necessary. However, it’s also essential to maintain an open line of communication after the partnership commences. Organizations should regularly reassess their partners’ commitments to data privacy by inquiring about their ongoing data protection policies, including data storage timelines and the intent of using said data. ... Most customers can opt out of data collection and tracking at any time. This preference is known as “consent” — and enabling its collection is only half the journey. Organizations must also proactively enforce consent to ensure that downstream data routing doesn’t jeopardize or invalidate a customer’s express preferences.
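Enforcing consent downstream, as opposed to merely recording it, can be as simple as filtering every outbound batch per destination purpose. The sketch below illustrates the idea; the purpose names and record fields are hypothetical, and a production system would pull consent state from a dedicated consent-management service rather than from the records themselves.

```python
# Sketch of purpose-based consent enforcement before routing data downstream.
# Purpose names ("billing", "analytics") and record shapes are illustrative.

def route(records, destination_purpose):
    """Forward only records whose owner consented to this purpose."""
    return [r for r in records if destination_purpose in r["consents"]]

customers = [
    {"id": 1, "consents": {"billing", "analytics"}},
    {"id": 2, "consents": {"billing"}},  # opted out of analytics
]

analytics_batch = route(customers, "analytics")
billing_batch = route(customers, "billing")
```

The important property is that the filter runs at every routing point, so an opt-out recorded once is honored in every downstream system rather than only at the point of collection.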


Choosing a Data Quality Tool: What, Why, How

Data quality describes a characteristic or attribute of the data itself, but equally important for achieving and maintaining the quality of data is the ability to monitor and troubleshoot the systems and processes that affect data quality. Data observability is most important in complex, distributed data systems such as data lakes, data warehouses, and cloud data platforms. It allows companies to monitor and respond in real time to problems related to data flows and the data elements themselves. Data observability tools provide visibility into data as it traverses a network by tracking data lineages, dependencies, and transformations. The products send alerts when anomalies are detected, and apply metadata about data sources, schemas, and other attributes to provide a clearer understanding and more efficient management of data resources. ... A company’s data quality efforts are designed to achieve three core goals: Promote collaboration between IT and business departments; Allow IT staff to manage and troubleshoot all data pipelines and data systems, whether they’re completely internal or extend outside the organization; Help business managers manipulate the data in support of their work toward achieving business goals.
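One of the simplest observability checks described above is a volume anomaly alert: compare a feed's current row count against its recent baseline and flag large deviations. The sketch below is a minimal version; the counts and the 50% tolerance are illustrative choices, and real tools use richer statistics and metadata.

```python
# Sketch of a data-observability volume check: alert when today's row count
# deviates too far from the recent baseline. Counts and threshold are illustrative.
from statistics import mean

def volume_anomaly(history, today, tolerance=0.5):
    """True if today's count deviates from the baseline mean by > tolerance."""
    baseline = mean(history)
    deviation = abs(today - baseline) / baseline
    return deviation > tolerance

history = [10_200, 9_800, 10_050, 10_400, 9_950]  # recent daily row counts
alert = volume_anomaly(history, today=3_100)       # feed suddenly dropped
```

A check like this catches upstream breakages (a silently failing extract, a truncated file) before the bad data reaches dashboards, which is exactly the real-time response capability the observability tools are sold on.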


Businesses increasingly turn to biometrics for physical access and visitor management

Experts suggest that to address these concerns, employers need to be more transparent about their use of biometric technologies and implement robust safeguards to protect employees’ data. This includes informing employees about how their data will be used, stored, and protected from potential breaches. Employers should also offer alternatives for those who are uncomfortable with biometric systems to ensure no employee feels coerced. Companies that prioritize transparency, consent, and data protection are more likely to gain employee trust and avoid backlash. However, without clear guidelines and protections, resistance to workplace biometrics is likely to grow. “Education needs to be laid out very clearly and regularly that, ‘Look, biometrics is not an invasion of privacy,” adds Murad. “It’s providing an envelope of security for your privacy, it’s protecting it.’ I think that message is getting there, but it’s taking time.” Several companies have recently introduced new physical access security technologies. Nabla Works has launched advanced facial recognition tools with anti-spoofing features for secure access across various applications



Quote for the day:

"It is not the strongest of the species that survive, nor the most intelligent, but the one most responsive to change." -- Charles Darwin

Daily Tech Digest - August 26, 2024

The definitive guide to data pipelines

A key data pipeline capability is to track data lineage, including methodologies and tools that expose data’s life cycle and help answer questions about who, when, where, why, and how data changes. Data pipelines transform data, which is part of the data lineage’s scope, and tracking data changes is crucial in regulated industries or when human safety is a consideration. ... Other data catalog, data governance, and AI governance platforms may also have data lineage capabilities. “Business and technical stakeholders must equally understand how data flows, transforms, and is used across sources with end-to-end lineage for deeper impact analysis, improved regulatory compliance, and more trusted analytics,” says Felix Van de Maele, CEO of Collibra. ... As for the data ops behind data pipelines: when you deploy them, how do you know whether they receive, transform, and send data accurately? Are data errors captured, and do single-record data issues halt the pipeline? Are the pipelines performing consistently, especially under heavy load? Are transformations idempotent, or are they streaming duplicate records when data sources have transmission errors?
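The idempotency question at the end is worth making concrete: if each record carries a stable ID and the sink upserts by that ID, a replayed or duplicated batch leaves the store unchanged instead of double-writing. The sketch below shows that property; the record shapes are illustrative.

```python
# Sketch of an idempotent pipeline sink: upsert by record ID so that
# replays and transmission-retry duplicates don't double-write.
# Record shapes are illustrative.

def load(records, store):
    """Upsert by record ID; reprocessing the same batch leaves store unchanged."""
    for r in records:
        store[r["id"]] = r["value"]
    return store

batch = [{"id": "r1", "value": 10}, {"id": "r2", "value": 20},
         {"id": "r1", "value": 10}]  # duplicate from a transmission retry

store = {}
load(batch, store)
load(batch, store)  # replaying the whole batch is a no-op
```

Contrast this with an append-only sink, where the same replay would triple-count `r1`; designing sinks this way is what makes safe pipeline retries possible.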


Living with trust issues: The human side of zero trust architecture

As we’ve become more dependent on technology, IT environments have become more complex. This has made threats both more intense and more dangerous. To tackle these growing security challenges — which needed a stronger and more flexible approach — industry experts, security practitioners, and tech providers came together to develop the zero trust architecture (ZTA) framework. This development led to a growing recognition of the importance of prioritizing verification over trust, which made ZTA a cornerstone of modern cybersecurity strategies. The main idea behind ZTA is to “never trust, always verify.” ... Implementing the ZTA framework means that every action the IT and security teams handle is filtered through a security-first lens. However, the over-repeated mantra of “never trust, always verify” may affect the psychological well-being of those implementing it. Imagine spending hours monitoring every network activity while constantly questioning if the information is genuine and if people’s motives are pure. This suspicious climate not only affects the work environment but also spills over into personal interactions, affecting trust with others.
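"Never trust, always verify" translates operationally into re-checking every request, regardless of where it comes from on the network. The sketch below shows the shape of such a per-request gate; the check names and request fields are invented illustrations, not any real ZTA product's policy model.

```python
# Sketch of a zero-trust request gate: identity, device posture, and
# entitlement are verified on every call. Fields are hypothetical.

def authorize(request):
    checks = [
        request.get("token_valid"),        # identity verified each time
        request.get("device_compliant"),   # device posture re-checked each time
        request.get("resource") in request.get("entitlements", ()),
    ]
    return all(checks)

req = {"token_valid": True, "device_compliant": True,
       "resource": "payroll-db", "entitlements": {"payroll-db"}}
allowed = authorize(req)
```

The point of the pattern is that there is no "inside the perimeter" shortcut: a request from the corporate LAN goes through exactly the same checks as one from the public internet.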


Top technologies that will disrupt business in 2025

Chaplin finds ML useful for identifying customer-related trends and predicting outcomes. That sort of forecasting can help allocate resources more effectively, he says, and engage customers better — for example when recommending products. “While gen AI undoubtedly has its allure, it’s important for business leaders to appreciate the broader and more versatile applications of traditional ML,” he says. ... What Skillington touches on is the often-overlooked facet of any successful digital transformation: It all starts with data. By breaking down data silos, establishing holistic data governance strategies, developing the right data architecture for the business, and developing data literacy across disciplines, organizations can not only gain better access to their data but also better understand how ... Edge computing and 5G are two complementary technologies that are maturing, getting smaller, and delivering tangible business results securely, says Rogers Jeffrey Leo John, CTO and co-founder of DataChat. “Edge devices such as mobile phones can now run intensive tasks like AI and ML, which were once only possible in data centers,” he says.


Meta presents Transfusion: A Recipe for Training a Multi-Modal Model Over Discrete and Continuous Data

Transfusion is trained on a balanced mixture of text and image data, with each modality being processed through its specific objective: next-token prediction for text and diffusion for images. The model’s architecture consists of a transformer with modality-specific components, where text is tokenized into discrete sequences and images are encoded as latent patches using a variational autoencoder (VAE). The model employs causal attention for text tokens and bidirectional attention for image patches, ensuring that both modalities are processed effectively. Training is conducted on a large-scale dataset consisting of 2 trillion tokens, including 1 trillion text tokens and 692 million images, each represented by a sequence of patch vectors. The use of U-Net down and up blocks for image encoding and decoding further enhances the model’s efficiency, particularly when compressing images into patches. Transfusion demonstrates superior performance across several benchmarks, particularly in tasks involving text-to-image and image-to-text generation. 
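The mixed attention scheme described above — causal over the sequence, but bidirectional among patches of the same image — can be illustrated with a toy attention mask. The sketch below is a pure-Python illustration of that pattern only, not the paper's implementation; sequence length and span positions are arbitrary.

```python
# Toy sketch of Transfusion-style mixed attention: causal over the whole
# sequence, bidirectional within each image's patch span.
# Sizes and span positions are illustrative.

def build_mask(n_tokens, image_spans):
    """mask[i][j] == 1 means position i may attend to position j."""
    # Start with a standard causal (lower-triangular) mask.
    mask = [[1 if j <= i else 0 for j in range(n_tokens)] for i in range(n_tokens)]
    for start, end in image_spans:  # end exclusive
        for i in range(start, end):
            for j in range(start, end):
                mask[i][j] = 1      # patches of one image see each other fully
    return mask

# 3 text tokens followed by 3 latent patches of a single image
mask = build_mask(6, image_spans=[(3, 6)])
```

Text positions keep strict causality (no attending forward), while every patch in the image span can attend to every other patch, matching the causal/bidirectional split the model uses for its two modalities.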


AI Assistants: Picking the Right Copilot

The best assistant operates as an agent that understands what context the underlying AI can assume from its known environment. IDE assistants such as GitHub Copilot know that they are responding with programming projects in mind: Copilot examines the comments and syntax in a given script before crafting a suggestion, weighing them against its trained datasets, consisting of GPT training and the codebase of GitHub's public repositories. Because it was trained on GitHub's public repositories, Copilot has a slightly different "perspective" on syntax than that of ChatGPT ADA. Thus, the choice of corpus for an AI model can influence what answer an AI assistant yields to users. A good AI assistant should offer a responsive chat feature to indicate its understanding of its environment. Jupyter, Tabnine, and Copilot all offer a native chat UI for the user. The chat experience influences how well a professional feels the AI assistant is working. How well it interprets prompts and how accurate its suggestions are all start with the conversational assistant experience, so technical professionals should note their experiences to see which assistant works best for their projects.


Is the vulnerability disclosure process glitched? How CISOs are being left in the dark

The elephant in the room regarding misaligned motives and communications between researchers and software vendors is that vendors frequently try to hide or downplay the bugs that researchers feel obligated to make public. “The root cause is a deep-seated fear and prioritizing reputation over security of users and customers,” Rapid7’s Condon says. “What it comes down to many times is that organizations are afraid to publish vulnerability information because of what it might mean for them legally, reputationally, and financially if their customers leave. Without a concerted effort to normalize vulnerability disclosure to reward and incentivize well-coordinated vulnerability disclosure, we can pick at communication all we want. Still, the root cause is this fear and the conflict that it engenders between researchers and vendors.” Condon is, however, sympathetic to the vendors’ fears. “They don’t want any information out there because they are understandably concerned about reputational damage. They’re seeing major cyberattacks in the news, CISOs and CEOs dragged in front of Congress or the Senate here in the US, and lawsuits are coming out against them. ...”


Level Up Your Software Quality With Static Code Analysis

Behind high-quality software is high-quality code. The same core coding principles remain true regardless of how the code was written, whether by humans or AI coding assistants. Code must be easy to read, maintain, understand and change. Code structure and consistency should be robust and secure to ensure the application performs well. Code devoid of issues helps you attain the most value from your software. ... While static analysis focuses on code quality and reduces the number of problems to be found later in the testing stage, application testing ensures that your software actually runs as it was designed. By incorporating both automated testing and static analysis, developers can manage code quality through every stage of the development process, quickly find and fix issues and improve the overall reliability of their software. A combination of both is vital to software development. In fact, a good static analysis tool can even be integrated into your testing tools to track and report the percentage of code covered by your unit tests. Sonar recommends a minimum test coverage of 80%; code below that threshold fails to meet the recommended standard.
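The kind of rule a static analyzer applies can be sketched in a few lines. The toy checker below is purely illustrative (it is not how Sonar or any commercial tool is implemented); it uses Python's ast module to flag bare except: handlers, a classic maintainability issue, without ever running the code:

```python
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Return line numbers of bare `except:` handlers, a common code smell."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]

sample = """
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(sample))  # -> [4]: the bare except on line 4 is flagged
```

Real analyzers apply hundreds of such rules over the syntax tree and control flow, which is why they catch issues long before the testing stage.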


Two strategies to protect your business from the next large-scale tech failure

The key to mitigating another large-scale system failure is to plan for catastrophic events and practice your response. Make dealing with failure part of normal business practices. When failure is unexpected and rare, the processes to deal with it are untested and may even result in actions which make the failure worse. Build a network and a team that can adapt and react to failures. Remember when insurance companies ran their own data centres and disaster recovery tests were conducted twice a year? ... The second strategy for minimizing large-scale failures is to avoid the software monoculture created by the concentration of digital tech suppliers. It’s more complex but worth it. Some corporations have a policy of buying their core networking equipment from three or four different vendors. Yes, it makes day-to-day management a little more difficult, but they have the assurance that if one vendor has a failure, their entire network is not toast. Whether it’s tech or biology, a monoculture is extremely vulnerable to epidemics which can destroy the entire system. In the CrowdStrike scenario, if corporate networks had been a mix of Windows, Linux and other operating systems, the damage would not have been as widespread.


India's Critical Infrastructure Suffers Spike in Cyberattacks

The adoption of emerging technologies such as AI and cloud and the focus on innovation and remote working have driven digital transformations, thus boosting companies' need for more security defenses, according to Manu Dwivedi, partner and leader for cybersecurity at consultancy PwC India. "AI-enabled phishing and aggressive social engineering have elevated ransomware to the top concern," he says. "While cloud-related threats are concerning, greater interconnectivity between IT and OT environments and increased usage of open-source components in software are increasing the available threat surface for attackers to exploit." Indian organizations also need to harden their systems against insider threats, which requires a combination of business strategy, culture, training, and governance processes, Dwivedi says. ... The growing demand for AI has also shaped the threat landscape in the country and threat actors have already started experimenting with different AI models and techniques, says PwC India's Dwivedi. "Threat actors are expected to use AI to generate customized and polymorphic malware based on system exploits, which escapes detection from signature-based and traditional detection methods," he says.


Architectural Patterns for Enterprise Generative AI Apps

In the RAG pattern, we integrate a vector database that can store and index embeddings (numerical representations of digital content). We use various search algorithms like HNSW or IVF to retrieve the top k results, which are then used as the input context. The search is performed by converting the user's query into embeddings. The top k results are added to a well-constructed prompt, which guides the LLM on what to generate and the steps it should follow, as well as what context or data it should consider. ... GraphRAG is an advanced RAG approach that uses a graph database to retrieve information for specific tasks. Unlike traditional relational databases that store structured data in tables with rows and columns, graph databases use nodes, edges, and properties to represent and store data. This method provides a more intuitive and efficient way to model, view, and query complex systems. ... Like the basic RAG system, GraphRAG also uses a specialized database to store the knowledge data it generates with the help of an LLM. However, generating the knowledge graph is more costly compared to generating embeddings and storing them in a vector database. 
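The retrieve-then-prompt loop described above can be sketched in a few lines. Everything here is illustrative: the hand-made three-dimensional "embeddings" and in-memory index stand in for a real embedding model and a vector database with HNSW or IVF indexes:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, index, k=2):
    """index: list of (chunk_text, embedding); return the k most similar chunks."""
    scored = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in scored[:k]]

def build_prompt(question, context_chunks):
    """Assemble the retrieved context and the question into one LLM prompt."""
    context = "\n".join(f"- {c}" for c in context_chunks)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

# Toy index with hand-made 3-dim "embeddings".
index = [
    ("Invoices are due in 30 days.", [0.9, 0.1, 0.0]),
    ("The API rate limit is 100 req/s.", [0.1, 0.9, 0.1]),
    ("Refunds take 5 business days.", [0.8, 0.2, 0.1]),
]
query_vec = [0.85, 0.15, 0.05]  # pretend embedding of "When are invoices due?"
chunks = top_k(query_vec, index, k=2)
prompt = build_prompt("When are invoices due?", chunks)
```

The GraphRAG variant swaps the vector lookup in top_k for a graph-database traversal, but the prompt-assembly step stays the same.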



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry

Daily Tech Digest - August 25, 2024

Never summon a power you can’t control

Traditionally, the term “AI” has been used as an acronym for artificial intelligence. But it is perhaps better to think of it as an acronym for alien intelligence. As AI evolves, it becomes less artificial (in the sense of depending on human designs) and more alien. Many people try to measure and even define AI using the metric of “human-level intelligence”, and there is a lively debate about when we can expect AI to reach it. This metric is deeply misleading. It is like defining and evaluating planes through the metric of “bird-level flight”. AI isn’t progressing towards human-level intelligence. It is evolving an alien type of intelligence. Even at the present moment, in the embryonic stage of the AI revolution, computers already make decisions about us – whether to give us a mortgage, to hire us for a job, to send us to prison. Meanwhile, generative AIs like GPT-4 already create new poems, stories and images. This trend will only increase and accelerate, making it more difficult to understand our own lives. Can we trust computer algorithms to make wise decisions and create a better world? That’s a much bigger gamble than trusting an enchanted broom to fetch water.


Artificial Intelligence: To regulate or not is no longer the question

First, existing laws have been amended to support the use of AI, thereby enabling the economy to benefit from broader AI adoption. The Copyright Act 2021, for example, has been amended to clarify that copyrighted material may be used for machine learning provided that the model developer had lawful access to the data. Amendments to the Personal Data Protection Act (PDPA) 2012 enabled the re-use of personal data to support research and business improvement, after model development using anonymised data proved to be inadequate. Detecting fraud, preserving the integrity of systems and ensuring physical security of premises are also recognised as legitimate interests for using personal data in AI systems. Second, regulatory guidance has been issued on how existing regulations that protect consumers will also apply to AI systems. The Personal Data Protection Commission has issued a set of advisory guidelines on how the PDPA 2012 will apply at different stages of model development and deployment whenever personal data is used. It also clarifies the level of transparency expected from organisations deploying AI systems and how they may disclose relevant information to boost consumer trust and confidence. 


When You're Building The Future The Past Is No Longer A Guide

Artificial Intelligence (AI) definitely has its place. But when it comes to these specific industrial and manufacturing challenges, it tends to be fundamental engineering and physics that generate the answers – number crunching and data processing in the extreme. That, in turn, means that the engineers working to deliver more detailed test results, more realistic prototypes, and run ever more fine-grained simulations turn to some of the most powerful high-performance computing systems to power their workloads. What might have counted as a system capable of High Performance Computing (HPC) a decade, or even a few years ago, can quickly run out of steam. Computational fluid dynamics (CFD) applications often use thousands of CPU cores, points out Gardinalli. But it’s not purely a question of throwing raw power – and dollars – at the issue. The real conundrum is how to map to a wide range of different domains which all require different underlying infrastructure. Finite element analysis (FEA), for example, focuses on working out how materials and structures will act under stress. It’s therefore critical to public infrastructure as well as to vehicle design and crash simulation. 


Top companies ground Microsoft Copilot over data governance concerns

Asked how many had grounded a Copilot implementation, Berkowitz said it was about half of them. Companies, he said, were turning off Copilot software or severely restricting its use. "Now, it's not an unsolvable problem," he added. "But you've got to have clean data and you've got to have clean security in order to get these systems to really work the way you anticipate. It's more than just flipping the switch." While AI software also has specific security concerns, Berkowitz said the issues he was hearing about had more to do with internal employee access to information that shouldn't be available to them.  Asked whether the situation is similar to the IT security challenge 15 years ago when Google introduced its Search Appliance to index corporate documents and make them available to employees, Berkowitz said: "It's exactly that." Companies like Fast and Attivio, where Berkowitz once worked, were among those that solved the enterprise search security problem by tying file authorization rights to search results. So how can companies make Copilots and related AI software work? "The biggest thing is observability and not from a data quality viewpoint, but from a realization viewpoint," said Berkowitz. 
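The search-security fix Berkowitz alludes to, tying file authorization rights to results, comes down to filtering every hit against the caller's permissions before the AI layer ever sees it. A minimal sketch, with invented document names, groups, and ACL table:

```python
# Hypothetical ACL table: document -> groups permitted to read it.
DOC_ACLS = {
    "q3-board-deck.pptx": {"executives"},
    "employee-handbook.pdf": {"executives", "engineering", "sales"},
    "salary-bands.xlsx": {"hr", "executives"},
}

def trim_by_acl(results, user_groups):
    """Drop any hit the user's groups are not authorized to read,
    so the AI layer can only ground answers in permitted documents."""
    return [doc for doc in results if DOC_ACLS.get(doc, set()) & user_groups]

raw_hits = ["q3-board-deck.pptx", "employee-handbook.pdf", "salary-bands.xlsx"]
print(trim_by_acl(raw_hits, {"engineering"}))  # -> ['employee-handbook.pdf']
```

The hard part in practice is not this filter but keeping the ACL table accurate, which is why "clean security" is a precondition for grounding a Copilot safely.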


Five incorrect assumptions about ISO 27001

We wish there were such a thing as an impenetrable cyber barrier. Unfortunately, there isn’t—not even at the highest levels. For any IT system to be effective, information must be sent and received from external sources. These days, vast amounts of data get copied and transferred every second, moving around the world at lightspeed. As a result, there are always multiple potential access points for criminals to get in. ISO 27001 – and any good cybersecurity strategy – can’t offer 100% protection against cyber threats. However, they can significantly mitigate the risks associated with these attacks. A correctly applied ISMS will make you more likely to keep any malware or bad actors out. ... ISO 27001 isn’t a one-time thing. Unfortunately, nothing is in information security – or business in general. The initial implementation is the most time-consuming aspect and may require the most significant financial investment. But once it’s in place, there’s no time to sit back and relax. Your staff will immediately switch focus to using pre-agreed KPIs to analyse your ISMS’s effectiveness, suggesting and making strategic adjustments as relevant.


How we’re using ‘chaos engineering’ to make cloud computing less vulnerable to cyber attacks

Chaos engineering involves deliberately introducing faults into a system and then measuring the results. This technique helps to identify and address potential vulnerabilities and weaknesses in a system’s design, architecture, and operational practices. Methods can include shutting down a service, injecting latency (a time lag in the way a system responds to a command) and errors, simulating cyberattacks, terminating processes or tasks, or simulating a change in the environment in which the system is working and in the way it’s configured. In recent experiments, we introduced faults into live cloud-based systems to understand how they behave under stressful scenarios, such as attacks or faults. By gradually increasing the intensity of these “fault injections”, we determined the system’s maximum stress point. ... Chaos engineering is a great tool for enhancing the performance of software systems. However, to achieve what we describe as “antifragility” – systems that could get stronger rather than weaker under stress and chaos – we need to integrate chaos testing with other tools that transform systems to become stronger under attack.
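A fault-injection harness of the kind described can be sketched simply. The wrapper, the error rates, and the 90% availability threshold below are all illustrative choices, not the researchers' actual setup:

```python
import random
import time

def flaky(service, latency_s=0.0, error_rate=0.0):
    """Wrap a service call with injected latency and random failures."""
    def wrapped(*args, **kwargs):
        time.sleep(latency_s)             # injected latency
        if random.random() < error_rate:  # injected fault
            raise RuntimeError("injected fault")
        return service(*args, **kwargs)
    return wrapped

def measure(service, calls=200):
    """Availability: fraction of calls that succeed."""
    ok = 0
    for _ in range(calls):
        try:
            service()
            ok += 1
        except RuntimeError:
            pass
    return ok / calls

def healthy():
    return "ok"

# Ramp up fault intensity until availability drops below a 90% SLO --
# the "maximum stress point" in the article's terms.
random.seed(42)
for error_rate in (0.0, 0.1, 0.2, 0.4):
    availability = measure(flaky(healthy, error_rate=error_rate))
    if availability < 0.90:
        print(f"stress point near error_rate={error_rate}")
        break
```

Production chaos tools apply the same pattern at the infrastructure level (killing processes, dropping packets) rather than wrapping functions in-process.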


Six pillars for AI success: how the C-suite can drive results

Many AI and GenAI solutions have common patterns and benefit from reusable assets that can accelerate time to value and reduce costs. Without a control tower, different groups across an enterprise are at risk of building very similar things from scratch for various use cases. The control tower effectively has authority over where an organization will make its investments and create value by identifying patterns across the various use cases that align with business needs and prioritizing the development of GenAI solutions, for example. ... The truly transformative impact would be to entirely reimagine what you do in the front office, not just streamline the back office. GenAI unlocks new products, services and business models that are easy to overlook if you approach the technology with a robotic process automation mindset. That can include creating new products and features enabled through GenAI, equipping them with connectivity under pay-as-you-go service subscription models, selling them directly to consumers instead of through intermediaries, and leveraging the consumer data for insights and perhaps selling it as a separate revenue stream. 


Cyber Hygiene: The Constant Defense Against Evolving B2B Threats

By partnering with companies that provide early warnings about threats and scams when they see them independently, such as domain spoofing attempts, businesses can stay ahead of potential threats. “That’s an important control, and I strongly recommend it for any company,” Kenneally said, stressing the benefits of collaborative working partnerships. “It’s about ensuring that the controls are in place and that we are partnering with our customers to mitigate risks,” he added. This is particularly relevant given the increasing sophistication of phishing attempts, some of which may be assisted by artificial intelligence. Another aspect of Boost’s strategy is fostering a culture of resilience and agility within the organization. This involves continuous training and education, not just for the IT team but across the entire company. “Training is critical,” Kenneally said. ... As the cybersecurity landscape continues to evolve, the need for companies to protect their digital perimeter becomes more pressing. But while the threats may change, the fundamental principles of good cybersecurity — vigilance, education and proactive planning — remain constant.


I’ve got the genAI blues

Why is this happening? I’m not an AI developer, but I pay close attention to the field and see at least two major reasons they’re beginning to fail. The first is the quality of the content used to create the major LLMs has never been that good. Many include material from such “quality” websites as Twitter, Reddit, and 4Chan. As Google’s AI Overview showed earlier this year, the results can be dreadful. As MIT Technology Review noted, it came up with such poor quality answers as “users [should] add glue to pizza or eat at least one small rock a day, and that former US president Andrew Johnson earned university degrees between 1947 and 2012, despite dying in 1875.” Unless you glue rocks into your pizza, those are silly, harmless examples, but if you need the right answer, it’s another matter entirely. Take, for example, the lawyer whose legal paperwork included information from AI-made-up cases. The judges were not amused. If you want to sex chat with genAI tools, which appears to be one of the most popular uses for ChatGPT, accuracy probably doesn’t matter that much to you. Getting the right answers, though, is what matters to me and should matter to anyone who wants to use AI for business.

AI technology brings significant benefits to the Financial Services sector, including enhanced efficiency through automation, improved accuracy in risk assessments, personalised customer experiences via AI-driven insights and faster, more secure fraud detection. It also enables predictive analytics for better decision-making in areas like investment and lending. ... AI is there to support the employee – to elevate the human potential by delivering insights, knowledge and expedite results. However, challenges include the complexity of implementing AI systems, concerns around data privacy and security, regulatory compliance, and potential biases in AI models that can lead to unfair outcomes. Ensuring transparency and trust in AI decisions is also crucial for its broader acceptance in the sector. ... Trustworthy AI also ensures that compliance with regulations is maintained, risks are properly managed and ethical standards are upheld. In a sector where customer relationships are built on trust, any misstep could lead to reputational damage, financial loss, or regulatory penalties. 



Quote for the day:

“A dream doesn't become reality through magic; it takes sweat, determination, and hard work.” -- Colin Powell

Daily Tech Digest - August 24, 2024

India Nears Its Quantum Moment — Completion Of First Quantum Computer Expected Soon

Despite the progress, significant scientific challenges remain. Qubits are inherently unstable and susceptible to disturbances, leading to ‘decoherence’. Researchers worldwide are striving to overcome this through error-corrected qubits. “You have to show that by using such a system, you are actually solving some problem which is of relevance to industry or science or society and show that it is better, faster and cheaper,” Dr. Vijayaraghavan told India Today. “That of course will be the first holy grail of useful quantum computers. We are not there yet.” In Bengaluru, startup QpiAI is also venturing into quantum computing. Led by CEO and chairman Dr. Nagendra Nagaraja, the company is constructing a 25-qubit quantum computer, with plans to unveil it by the end of the year, according to the news service. With $6 million in funding, QpiAI intends to offer the platform to customers via cloud services and supply systems to top institutes and research groups across India. “Our vision is to integrate AI and quantum computing in enterprises,” Dr. Nagaraja told India Today.


How Seeing Is Believing With Your Leadership Abilities

One of the standout points in my discussion with Cherches was his approach to communicating complex ideas across different functions within an organization. He stresses the importance of translating concepts into the "language" of the audience. Whether through analogies, stories, or visual diagrams, the goal is to make the abstract tangible. Cherches illustrates this by introducing an example. "We need to communicate in the language of our stakeholders. For example, I teach in the HR master's program at NYU, and I always emphasize that if you need funding for an HR initiative, you have to translate that into the language of money for the CFO. It's about finding the right visual and verbal tools to resonate with different audiences." This is where visual leadership shines—bridging gaps between different departments and creating a common language everyone can understand. In today's business environment, where cross-functional and asynchronous collaboration is critical, leaders who can translate their vision into visual terms are more likely to gain buy-in and drive initiatives forward.


5 things I wish I knew as a CFO before starting a digital transformation

One of our biggest missteps was not thoroughly defining what we intended to achieve from different perspectives — IT, employees, customers and the executive team. We knew having to use something new would have pain points, but we didn’t understand the impact of going from a customized environment to a more standard platform. The business didn’t understand the advantages either — their work might be slightly less efficient or different, but the processes would now be scalable, more stable and completely standardized across the different business units. ... In hindsight, we greatly underestimated the effort to cleanse and prepare our data for migration. Now that the project is well on its way, I always hear about the importance of data cleansing and preparation. But I never heard it from anyone upfront. We could have spent a year restructuring data hierarchies to align with the new system before even starting implementation. ... Not every part of the project will be a success or an upgrade. But there will be incredible success stories, efficiencies, new capabilities or insights. Often, they’re unexpected, like the impact that pricing changes had on our business, even though they weren’t in our original scope. 


Linus Torvalds talks AI, Rust adoption, and why the Linux kernel is 'the only thing that matters'

Torvalds said, "There is some stability with old kernels, and we do backport for patches and fixes to them, but some fixes get missed because people don't think they're important enough, and then it turns out they were important enough." Besides, if you stick with an old kernel for too long when you finally need to update to a newer one, it can be a massive pain to do so. So, "to all the Chinese embedded Linux vendors who are still using the Linux 4.9 kernel," Torvalds said, wagging his finger, "Stop." In addition, Hohndel said that when patching truly ancient kernels, the Linux kernel team can only say, "Sorry, we can't help you with that. It was so long ago that we don't even remember how to fix it." Switching to a more modern topic, the introduction of the Rust language into Linux, Torvalds is disappointed that its adoption isn't going faster. "I was expecting updates to be faster, but part of the problem is that old-time kernel developers are used to C and don't know Rust. They're not exactly excited about having to learn a new language that is, in some respects, very different. So there's been some pushback on Rust."


EU AI Act Tightens Grip on High-Risk AI Systems: Five Critical Questions for U.S. Companies

The EU AI Act applies to U.S. companies across the entire AI value chain that develop, use, import, or distribute AI Systems in the EU market. Further, a U.S. company is subject to the EU AI Act where it operates AI Systems that produce output used in the EU market. In other words, even if a U.S. company develops or uses a “High-Risk” AI System for job screening or online proctoring purposes, the EU AI Act still governs if outputs produced by such AI System are used in the EU for recruiting or admissions purposes. In another use case, if a U.S. auto OEM incorporates an AI system to support self-driving functionalities and distributes the vehicle under its own brand in the EU, such OEM is subject to the EU AI Act. ... In addition, for those AI systems classified as “High-Risk” under the “Specific Use Cases” in Annex III, they must also complete a conformity assessment to certify that such AI systems comply with the EU AI Act. Where AI Systems are themselves “Regulated Products or related safety components,” the EU AI Act seeks to harmonize and streamline the processes to reduce market entrance costs and timelines. 


ServiceOps: Balancing Speed and Risk in DevOps

The integration between ITSM and AIOps tools automates identification of risky changes by analyzing risk information from the service history and operational data in a single pane of glass. AI models correlate past changes and determine their impact on operational variables such as service availability and health. This information decreases time spent on change requests by helping teams quickly understand the risk factors and the scope of impact by using powerful service dependency maps from AIOps tools. This AI-driven assessment also provides great feedback to DevOps and SRE teams, enabling them to deploy faster and with greater confidence. ... A conversational interface for change risk assessment can make risk insights understandable and actionable for teams tasked with delivering high-quality software rapidly. Imagine giving teams tasked with approving software changes access to a chat-based interface for asking questions and getting answers tailored to the specific environments where their software will be deployed. They could get answers to questions like, “What are the risky changes?” and “Can I look at change collisions?” The pace of change driven by DevOps presents significant challenges to IT service and IT operations teams. Both need to accelerate change without risking downtime. 
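The historical-correlation idea can be illustrated with a toy risk score. The component names, failure counts, and 10% threshold below are invented; a real ServiceOps pipeline would derive them from ITSM change records and AIOps telemetry:

```python
# Illustrative history: component -> (past changes, changes that caused incidents).
HISTORY = {
    "payments-api": (40, 10),
    "auth-service": (25, 1),
    "frontend": (60, 3),
}

def risk_score(component):
    """Fraction of past changes to this component that caused an incident."""
    changes, incidents = HISTORY[component]
    return incidents / changes

def risky_changes(proposed, threshold=0.1):
    """Flag proposed changes whose components historically break often."""
    return [c for c in proposed if risk_score(c) > threshold]

print(risky_changes(["payments-api", "auth-service", "frontend"]))
# -> ['payments-api']: 10 incidents in 40 changes exceeds the 10% threshold
```

A chat interface over such scores would simply answer questions like "What are the risky changes?" by calling this kind of function against live data.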


AI Assistants: Picking the Right Copilot

Not all assistants are meant for tech professionals. Others with a focus on consumer benefits are emerging. ... A good AI assistant should offer a responsive chat feature to indicate its understanding of its environment. Jupyter, Tabnine, and Copilot all offer a native chat UI for the user. The chat experience influences how well a professional feels the AI assistant is working. How well it interprets prompts and how accurate the suggestions are all start with the conversational assistant experience, so technical professionals should note their experiences to see which assistant works best for their projects. Professionals should also consider the frequency of the work in which the AI assistant is being applied. The frequency can indicate the degree of value being created — more frequency gives an AI assistant an opportunity to learn user preferences and past account history, which plays into its recommendations. The result is better productivity with AI, learning quickly where to best explore and experiment with crafting applications. Considering solution frequency can also reveal the cost of the technology against the value received. 


Researchers propose a smaller, more noise-tolerant quantum factoring circuit for cryptography

The MIT researchers found a clever way to compute exponents using a series of Fibonacci numbers that requires simple multiplication, which is reversible, rather than squaring. Their method needs just two quantum memory units to compute any exponent. "It is kind of like a ping-pong game, where we start with a number and then bounce back and forth, multiplying between two quantum memory registers," Vaikuntanathan adds. They also tackled the challenge of error correction. The circuits proposed by Shor and Regev require every quantum operation to be correct for their algorithm to work, Vaikuntanathan says. But error-free quantum gates would be infeasible on a real machine. They overcame this problem using a technique to filter out corrupt results and only process the right ones. The end result is a circuit that is significantly more memory-efficient. Plus, their error correction technique would make the algorithm more practical to deploy. "The authors resolve the two most important bottlenecks in the earlier quantum factoring algorithm. Although still not immediately practical, their work brings quantum factoring algorithms closer to reality," adds Regev.
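The "ping-pong" multiplication pattern can be mimicked classically. This sketch assumes F(n) is the standard Fibonacci sequence (F(0)=0, F(1)=1) and uses two ordinary variables as the "registers"; the actual circuit operates reversibly on quantum memory in superposition:

```python
def fib_exponent(a, n, N):
    """Return a**F(n) mod N using only two 'registers' and multiplication.

    Each step multiplies one register into the other, advancing the
    Fibonacci index: the registers hold a**F(i-1) and a**F(i).
    """
    r_prev, r_curr = 1, a % N  # a**F(0) = 1, a**F(1) = a
    for _ in range(n - 1):
        r_prev, r_curr = r_curr, (r_prev * r_curr) % N  # ping-pong step
    return r_curr

# F(5) = 5, so this is 3**5 mod 1000 = 243
print(fib_exponent(3, 5, 1000))
```

The appeal of the scheme is visible even here: only multiplications between the two registers are needed, never squaring of a single register, and multiplication by a fixed value is a reversible operation on a quantum computer.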


Power of communication in leadership transition

When change is on the horizon, the worst thing a leader can do is ignore or suppress employees' natural reactions. Uncertainty leads to rumours and speculation. Instead, leaders should create an environment of open communication, where teams feel comfortable voicing their concerns, asking questions, and sharing their thoughts on the new leader’s vision and the upcoming changes. Being honest and transparent is key to building trust. Open communication can help ease fears, address worries, and empower employees to embrace changes and contribute to the organisation’s success. It’s important to clearly explain what is happening, why it’s happening, and how it may affect different roles. Avoiding the temptation to sugar-coat negative news is also crucial. Listening is just as important as speaking. Leaders should avoid getting defensive or dismissive when employees share their concerns. ... To effectively reassure employees, leaders need to understand the root causes of their anxiety. Whether concerns are about job security, changes in responsibilities, or shifts in the company’s culture, employees need to know that their concerns are being heard and taken seriously.


What are the most in-demand skills in tech right now?

Martyn said that while there are many approaches to gain new skills, she advises learners to understand the areas where they have a natural aptitude and explore their preferred learning style. “With the right attitude and an understanding of their natural aptitude, I recommend reaching out for support to a leader or coach to support in the creation of a formal learning and development plan starting with some small learning objectives and building over time,” she said. “The technical, business and cognitive skills required for success will evolve over time but putting the right routines in place to consistently retrospect on your skill level, generate new ideas, identify opportunities for learning and execute a learning plan is a fundamental skill that will support continuous growth in the long term.” Pareek said that mastery of digital technologies such as AI and data analytics is becoming increasingly important both in specialist roles and more generally, so adaptability and resilience is key. “Building a robust professional network and engaging in collaboration can unlock new opportunities, while mentorship provides valuable guidance. ...”



Quote for the day:

"One of the sad truths about leadership is that, the higher up the ladder you travel, the less you know." -- Margaret Heffernan

Daily Tech Digest - August 23, 2024

Generative AI is sliding into the ‘trough of disillusionment’

“Even as AI continues to grab the attention, CIOs and other IT executives must also examine other emerging technologies with transformational potential for developers, security, and customer and employee experience and strategize how to exploit these technologies in line with their organizations’ ability to handle unproven technologies,” Chandrasekaran said. ... Autonomous AI software was among four emerging technologies called out in the report because it can operate with minimal human oversight, improve itself, and become effective at decision-making in complex environments. “These advanced AI systems that can perform any task a human can perform are beginning to move slowly from science fiction to reality,” Gartner said in its report. “These technologies include multiagent systems, large action models, machine customers, humanoid working robots, autonomous agents, and reinforcement learning.” Autonomous agents are currently heading up the slope to the peak of inflated expectations. Just ahead of autonomous agents on that slope is artificial general intelligence, currently a hypothetical form of AI where a machine learns and thinks like a human does.


As Fintechs Stumble, A New Breed of ‘TechFins’ Move to the Fore

TechFins have provided many points of value in recent years, but particularly in 2024 and in the near future, they will highly benefit financial institutions in the areas of: Leveraging the power of transaction data cleansing and analysis; Artificial intelligence (AI); Fraud prevention and cost mitigation; Extending the personalized user experience and reliability of the digital banking application; Transforming digital banking platforms into a digital sales and service platform; Increasing revenues and lowering costs for financial institutions. With financial institutions amassing high volumes of transaction data within their ecosystems, processing and analyzing that data is becoming a greater priority. According to the Pragmatic Institute, data practitioners spend 80% of their valuable time finding, cleaning, and organizing the data. This leaves only 20% of their time to actually perform analysis on it. This is the 80/20 rule, also known as the Pareto principle. TechFins can provide vital support to financial institutions’ data teams through transaction cleansing, leaving them more time to build campaigns and take action on the data. 
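Transaction cleansing of the kind described usually starts with descriptor normalization, turning cryptic processor strings into consistent merchant names. The patterns below are invented examples of such rules, not any TechFin's real ruleset:

```python
import re

# Toy normalization rules: (pattern to match in a raw descriptor, replacement).
RULES = [
    (re.compile(r"AMZN\s*MKTP.*", re.I), "Amazon Marketplace"),
    (re.compile(r"SQ \*", re.I), "Square: "),
    (re.compile(r"\s{2,}"), " "),  # collapse runs of whitespace
]

def cleanse(descriptor: str) -> str:
    """Normalize one raw transaction descriptor."""
    out = descriptor.strip()
    for pattern, replacement in RULES:
        out = pattern.sub(replacement, out)
    return out

print(cleanse("AMZN MKTP US*2A345  "))  # -> "Amazon Marketplace"
print(cleanse("SQ *COFFEE  SHOP"))      # -> "Square: COFFEE SHOP"
```

Automating this step is exactly what shifts the 80/20 split: less of the data team's time on cleaning, more on analysis and campaigns.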


The Developer Crisis: Mental Health, Burnout, and Retention

Seven out of 10 developers state that job satisfaction is the most important factor. Unplanned extra tasks and excessive overtime will have developers looking for the door. Businesses need to make it clear to both existing and new hires that they will do everything they can to respect these boundaries. Developers encounter constant roadblocks in their work, so time is precious. To help devs maintain a “flow state” (total focus on the task), businesses should consider re-evaluating their calendars to reduce unnecessary meetings. Where they are not already in place, software development frameworks could help dev teams better organize their work and progress through projects faster. As with any operational change, feedback is critical. ... By freeing developers from burdensome backend duties, businesses let them stay creative and focus on developing innovative new frontend solutions that improve the customer’s overall experience. This makes brilliant business sense, particularly in e-commerce, where standard feature development, which would otherwise consume substantial developer resources, can be handled much more efficiently by a tech platform.


Vulnerability prioritization is only the beginning

Pressure on cybersecurity processes and performance is ratcheting up due to the dual hammers of increased regulatory scrutiny and the brutal trend of highly damaging attacks. The US Securities and Exchange Commission (SEC), the European Union, the US Department of Defense, the UK government, and the US Cybersecurity and Infrastructure Security Agency (CISA) have all put, or are putting, in place significantly more stringent requirements for CISOs and their teams. Both the SEC and CISA have moved to push accountability to the board of directors and the C-suite. This means that metrics alone are no longer sufficient for CISOs who want to provide full transparency. Process transparency has become just as critical to validate KPIs and allow auditors and the government to peer inside what were formerly security process “bottlenecks”. These bottlenecks are highly variable, human-centric processes, such as opening or closing a Jira ticket, back-and-forth commenting in a Slack thread, opening a pull request on GitHub, or running a CI/CD pipeline to test and redeploy software after a patch. All can have human path dependencies, injecting uncertainty and variability.
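One way to see the difference between a bare KPI and process transparency is to break a remediation ticket's timeline into per-stage dwell times. The stage names and timestamps below are hypothetical, purely to illustrate the idea:

```python
from datetime import datetime

def stage_dwell_times(events):
    """Given ordered (timestamp, stage) events for one vulnerability ticket,
    return hours spent in each stage -- surfacing where the process stalls,
    rather than reporting only a single end-to-end remediation metric."""
    dwell = {}
    for (t0, stage), (t1, _) in zip(events, events[1:]):
        dwell[stage] = dwell.get(stage, 0.0) + (t1 - t0).total_seconds() / 3600
    return dwell

# Hypothetical ticket history: triage -> waiting on a patch -> verification -> closed
events = [
    (datetime(2024, 8, 1, 9), "triage"),
    (datetime(2024, 8, 2, 9), "awaiting-patch"),
    (datetime(2024, 8, 9, 9), "verification"),
    (datetime(2024, 8, 9, 17), "closed"),
]
print(stage_dwell_times(events))
```

A headline "8-day time to remediate" hides that almost all of it was spent waiting on a patch; the per-stage breakdown is the kind of visibility auditors increasingly expect.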


Authentication and Authorization in Red Hat OpenShift and Microservices Architectures

Moving up the layers and looking at the blue layer (that is, interacting with OpenShift or Kubernetes in general) means communicating with the Kubernetes API server. This is true for both human and non-human users, whether they're using a GUI console or a terminal. Ultimately, all interaction with OpenShift or Kubernetes goes through the API server. The OAuth2/OIDC combination makes perfect sense for API authentication and authorization, so OpenShift features a built-in OAuth2 server. As part of the configuration of this OAuth2 server, a supported identity provider must be added. The identity provider helps the OAuth2 server confirm who the user is. Once this part has been configured, OpenShift is ready to authenticate users. For an authenticated user, OpenShift creates an access token and returns that token to the user. This token is called an OAuth access token. ... Users and Service Accounts can be organized into groups in OpenShift. Groups are useful when managing authorization policies to grant permissions to multiple users at once. For example, you can allow a group access to objects within a project instead of granting access to each user individually.
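Configuring the built-in OAuth2 server comes down to adding an identity provider to the cluster's OAuth resource. A minimal sketch using the HTPasswd provider (the provider and Secret names here are placeholders):

```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: local-htpasswd        # provider name shown on the login page
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret     # Secret in openshift-config holding the htpasswd file
```

After logging in, a user can inspect their OAuth access token with `oc whoami -t`. Group-based authorization then follows the pattern described above: create a group with `oc adm groups new dev-team alice bob` and grant it a role in one step with `oc adm policy add-role-to-group view dev-team -n my-project`.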


Bridging the digital divide: driving positive impact where it is needed most

There are definitely pros and cons to building rural fiber networks. On the one hand, the construction process is by nature more complex and expensive, but this typically means that there is little to no competition, leading to higher customer demand and lower overbuild risk. The challenges are even more acute in areas that qualify for Project Gigabit subsidies, with barriers including challenging terrain, geography, and geology, which often increase costs and extend timelines. Due to the distances being covered, rural rollouts often also require more permits and wayleaves from multiple landowners, further increasing complexity. Without subsidies, these projects would not be commercially viable, but with cost cover of between 60 and 80 percent of capex, a defensive position is created for contract winners, which increases returns for investors while also supporting some of the most neglected rural communities. In these cases, network commercialization is also likely to be more achievable, and we are starting to see a growing evidence base of strong customer cohort penetration in these projects, which supports that thesis.


How IT Leaders Can Benefit From Active Listening

Active listening is crucial for effective leadership, says Justice Erolin, CTO at BairesDev, a technology services company. "It strengthens team dynamics, drives innovation, and ensures that all voices are heard," he observes in an email interview. When IT leaders speak, particularly with business stakeholders, they often err by assuming everyone understands the taxonomy and language being used, Chowning observes. "This is frequently the case with technology-related terminology that we understand well, but which business stakeholders might define or understand much differently," she adds. "If we start from unequal or disconnected positions, then we tend to hear something other than what the speaker intended." IT leaders can improve collaboration by understanding team members' perspectives and enhancing problem-solving with deeper insights, Erolin says. It can also help build trust by making team members feel heard. "Ultimately, leaders will be able to make better decisions through diverse viewpoints." Erolin notes that BairesDev incorporates active listening skills into its leadership training program, recognizing the tool's importance in fostering a culture of trust and collaboration.


Embracing Data and Emerging Technologies for Quality Management Excellence

Traditionally, quality management has been seen through a compliance lens – a necessary business cost to meet regulations. To unleash QM’s power as a catalyst for ongoing business growth and customer satisfaction, a fundamental mindset shift toward a more comprehensive, proactive approach is crucial. In the past, quality reporting and data tracking were reactive, addressing issues after they occurred. This fuels a fix-it-later cycle instead of prevention. The needed cultural change is from reactive to proactive QM. Forward-thinking firms use AI and predictive analytics to foresee problems before they arise, emphasizing prevention and continuous improvement. However, some companies remain regulation-focused due to deep-rooted challenges. Breaking this mold requires realigning toward customer-centricity, building robust systems that prioritize satisfaction and ongoing enhancement, with regulatory compliance as a natural outcome. It’s key to see quality and regulatory goals as aligned drivers of commercial growth, rather than as conflicting inhibitors of it.


The reality of AI-centric coding

“Although AI is able to solve many college problem sets and handle small-to-medium snippets of code generation, it still struggles with complex logic, large code bases, and especially novel problems without precedent in the training data. Hallucinations and errors remain significant issues that require expert engineering oversight and correction,” Nag said. “These tools are far better at quick prototypes from scratch rather than iterating large applications, which is the bulk of engineering. Much of the context that drives large applications doesn’t actually exist in the code base at all.” Tom Taulli, who has authored multiple AI programming books, including this year’s AI-Assisted Programming: Better Planning, Coding, Testing, and Deployment, agreed that the move to greater GenAI coding efforts will catch most enterprises off guard. “These tools will mean a change in traditional workflows, approaches, and mindset. Consider that they are pretrained, so they are often not updated for the latest frameworks and libraries. Another issue is the context window. Code bases can be massive. But even the most sophisticated LLMs cannot handle the huge amount of code files in the prompts,” Taulli said.
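The context-window problem Taulli describes is easy to make concrete with a back-of-the-envelope check. The four-characters-per-token ratio and the 128,000-token window below are rough assumptions, not measurements for any specific model:

```python
import os

def estimate_repo_tokens(root, exts=(".py", ".js", ".java"), chars_per_token=4):
    """Rough token estimate for a code base: assumes ~4 characters per token,
    a common rule of thumb, and counts only the listed source-file extensions."""
    total_chars = 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith(exts):
                try:
                    with open(os.path.join(dirpath, name),
                              encoding="utf-8", errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    pass  # skip unreadable files
    return total_chars // chars_per_token

CONTEXT_WINDOW = 128_000  # assumed window size for a large model
tokens = estimate_repo_tokens(".")
print(f"~{tokens:,} tokens; fits in one prompt: {tokens <= CONTEXT_WINDOW}")
```

Even a mid-sized enterprise repository of a few hundred thousand lines lands in the millions of tokens by this estimate, far beyond what can be pasted into a prompt, which is why retrieval and codebase indexing matter more than raw window size.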


Is AI Making Banking Safer or Just More Complicated?

The rise of AI in fraud detection has been a game changer. Through real-time analysis, machine learning, and pattern recognition, AI tools can flag unusual transactions and often catch fraud before it occurs. AI's capabilities in anomaly detection allow financial institutions to be proactive, staying ahead of cybercriminals. But AI has its flaws. One of the most significant issues is the high rate of false positives. John MacInnes, a retired professor from Edinburgh, encountered this new reality firsthand. He tried to send 15,000 euros to a friend in Austria, expecting it to be a quick and routine transaction. The process became an ordeal involving the fraud team at Starling Bank. When MacInnes declined to provide personal messages and tax documents to prove the legitimacy of the payment, the bank took drastic action: it froze his account. It wasn't until the media wrote about his plight that the bank admitted it had gone too far and unfroze the account. This incident sheds light on a growing challenge for banks: while caution is understandable, overly aggressive fraud prevention can alienate the very customers it aims to protect.
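The false-positive tension is visible even in the simplest possible anomaly check. The toy z-score rule below is an illustration only, not how any bank's fraud model actually works, and the payment history is invented:

```python
from statistics import mean, stdev

def flag_unusual(history, new_amount, threshold=3.0):
    """Toy anomaly check: flag a payment whose amount sits more than
    `threshold` standard deviations from the customer's historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

# Invented history of routine monthly payments
routine = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8, 40.1, 58.6]
print(flag_unusual(routine, 15000.0))  # a one-off 15,000 transfer looks anomalous
print(flag_unusual(routine, 95.0))     # but so does a perfectly ordinary larger bill
```

Both payments trip the rule, yet only one might be fraud: the second flag is the MacInnes scenario in miniature, and it is why production systems layer many signals (payee history, device, geography) rather than thresholding a single statistic.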



Quote for the day:

"Difficulties strengthen the mind, as labor does the body." -- Seneca