Daily Tech Digest - November 03, 2024

How AI-Powered Vertical SaaS Is Taking Over Traditional Enterprise SaaS

Enterprise decision-makers no longer care about the underlying technology itself—they care about what it delivers. They care about tangible outcomes like cost savings, operational efficiencies, and improved customer experiences. This shift in focus is causing companies to rethink their approach to enterprise software. ... Unlike traditional SaaS, which is built for broad use cases, vertical SaaS is deeply tailored to specific industries. By using AI, it can offer real-time insights, automation, and optimisations that solve problems unique to each sector. ... This hyper-targeted approach allows vertical SaaS to deliver tangible business outcomes rather than generic efficiencies. AI powers this shift by enabling platforms to adapt to industry-specific challenges, automate routine tasks, and provide insights at a scale and speed that were previously unattainable. Think of traditional SaaS like a Swiss Army knife — versatile, but not always the best tool for a specific task. Vertical SaaS, however, is like a surgeon’s scalpel or a craftsman’s chisel — precisely designed for a specific job, delivering results with pinpoint accuracy and efficiency. What would you rather use for mission-critical work: a multi-tool that does everything adequately or an instrument built to perform one task perfectly?


Ending Microservices Chaos: How Architecture Governance Keeps Your Microservices on Track

With proper software architecture governance, you can reduce microservices complexity, ramp up developers faster, reduce MTTR, and improve the resiliency of your system, all while building a culture of intentionality. ... In addition to controlling the chaos of microservices with governance and observability, maintaining a high standard of security and code quality is essential. When working with distributed systems, the complexity of microservices — if left unchecked — can lead to vulnerabilities and technical debt. ... Tools from SonarSource — such as SonarLint or SonarQube — focus on continuous code quality and security. They help developers identify potential issues such as code smells, duplication, or even security risks like SQL injection. By integrating seamlessly with CI/CD pipelines, they ensure that every deployment follows strict security and code quality standards. The connection between code quality, application security, and architectural observability is clear. Poor code quality and unresolved vulnerabilities can lead to a fragile architecture that is prone to outages and security incidents. By proactively managing your code quality and security using these tools, you reduce the risk of microservices complexity spiraling out of control.
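
The class of issue such scanners catch is easy to picture. Below is a minimal, hypothetical Python sketch (not tied to any particular SonarSource rule) of the SQL injection pattern mentioned above, alongside the parameterized fix that code-quality tools steer developers toward:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # What a scanner flags: user input spliced into SQL, so an input
    # like "x' OR '1'='1" would match every row in the table.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The recommended fix: a parameterized query, where the driver
    # binds the value safely instead of interpolating it into the string.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Wiring a check like this into the CI/CD pipeline, as the article describes, means the unsafe variant never reaches a deployment.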


What is quiet leadership? Examples, traits & benefits

Quiet leadership is a leadership style defined by empathy, creativity, active listening, and attention to detail. It focuses on collaboration and communication instead of control. At its core is quiet confidence, not arrogance. Quiet leaders prefer to solve problems through teamwork and encouragement, not aggression. They are compassionate, understanding, open, and approachable. Most importantly, they earn their team’s respect instead of demanding it. ... Instead of criticizing yourself for not being an extroverted leader, embrace who you are. Don’t try to be someone you’re not. You might wonder if a quiet style can work because of leadership stereotypes. But in reality, it can be comforting to others. Build self-awareness and notice how you positively impact people. By accepting your unique leadership style, you’ll find what works best for you and your team. If you use your strengths, being a quiet leader can be a superpower. For example, quiet leaders are great listeners. Active listening is rare, so be proud if you have that skill. ... As a quiet leader, you’ll need to step outside your comfort zone at times. This can be exhausting, so make time to recharge and regain energy. 


From Code To Conscience: Humanities’ Role In Fintech’s Evolution

Reflecting on the day, it became clear that studying for a career in fintech—or any technology field—is not just about understanding mechanics; it’s about grasping the bigger picture and realizing the power of technology to serve people, not just profit. In a sector as influential as fintech, this balanced approach is crucial. A humanities background fosters exactly the kind of critical, thoughtful perspective that today’s technology fields demand. Combining technical knowledge with grounding in ethics, history, and critical problem-solving will be essential for tomorrow’s leaders, especially as fintech continues to shape societal norms and economic structures. The Pace of Fintech conference underscored how the intersection of AI, fintech, and the humanities is shaping a more thoughtful future for technology. Artificial intelligence, while transformative, requires a balance between innovation and ethics—an understanding of both its potential and its responsibilities. Humanities-trained thinkers bring crucial perspectives to this field, prompting questions about fairness, transparency, and societal impact that purely technical approaches may overlook.


Overcoming data inconsistency with a universal semantic layer

As if the data landscape weren’t complex enough, data architects began implementing semantic layers within data warehouses. Architects might think of the data assets they manage as the single source of truth for all use cases. However, that is not typically the case because millions of denormalized table structures are typically not “business-ready.” When semantic layers are embedded within various warehouses, data engineers must connect analytics use cases to data by designing and maintaining data pipelines with transforms that create “analytics-ready” data. ... What is needed is a universal semantic layer that defines all the metrics and metadata for all possible data experiences: visualization tools, customer-facing analytics, embedded analytics, and AI agents. With a universal semantic layer, everyone across the business agrees on a standard set of definitions for terms like “customer” and “lead,” as well as standard relationships among the data (standard business logic and definitions), so data teams can build one consistent semantic data model. A universal semantic layer sits on top of data warehouses, providing data semantics (context) to various data applications. It works seamlessly with transformation tools, allowing businesses to define metrics, prepare data models, and expose them to different BI and analytics tools.
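
To make the idea concrete, here is a hypothetical Python sketch (names and fields invented for illustration, not drawn from any particular product) of what a universal semantic layer's shared definitions look like: each metric is defined once, and every visualization tool, embedded dashboard, or AI agent resolves it through the same entry.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    """A single shared definition that all downstream tools resolve against."""
    name: str
    description: str
    sql: str    # canonical business logic, written once
    grain: str  # the aggregation level at which the metric is valid

SEMANTIC_LAYER = {
    "customer_count": Metric(
        name="customer_count",
        description="Distinct paying customers, excluding internal test accounts",
        sql="COUNT(DISTINCT customer_id) FILTER (WHERE is_test = FALSE)",
        grain="month",
    ),
}

def resolve(metric_name: str) -> Metric:
    # BI tools, customer-facing analytics, and AI agents all call this
    # instead of each re-deriving what a "customer" is.
    return SEMANTIC_LAYER[metric_name]
```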


Server accelerator architectures for a wide range of applications

The highest-performing architecture for AI performance is a system that allows the accelerators to communicate with each other without having to communicate back to the CPU. This type of system requires that the accelerators be mounted on their own baseboard with a high-speed switch on the baseboard itself. The initial communication that initializes the application that runs on the accelerators is over a PCIe path. When completed, the results are then also sent back to the CPU over PCIe. The CPU-to-accelerator communication should be limited, allowing the accelerators to communicate with each other over high-speed paths. A request from one accelerator is routed, either directly or through one of the (typically four) non-blocking switches, to the appropriate GPU. GPU-to-GPU performance is significantly higher than the PCIe path, which allows an application to use more than one GPU without needing to interact with the CPU over the relatively slow PCIe lanes. ... A common and well-defined interface between CPUs and accelerators is to communicate over PCIe lanes. This architecture allows for various configurations in the server and the number of accelerators.
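
Rough arithmetic shows why keeping traffic on the accelerator fabric matters. The bandwidth numbers in this Python sketch are illustrative assumptions rather than vendor specifications (tens of GB/s for a PCIe x16 link versus hundreds for a dedicated GPU-to-GPU fabric):

```python
# Illustrative sustained bandwidths in GB/s; real figures vary by
# generation and vendor.
PCIE_X16_GB_S   = 64.0    # CPU <-> accelerator path (assumed)
GPU_FABRIC_GB_S = 450.0   # direct accelerator <-> accelerator path (assumed)

def transfer_ms(payload_gb: float, bandwidth_gb_s: float) -> float:
    """Time to move a payload at a given sustained bandwidth, in ms."""
    return payload_gb / bandwidth_gb_s * 1000.0

payload_gb = 8.0  # e.g., activations exchanged between GPUs each step
print(f"over PCIe:   {transfer_ms(payload_gb, PCIE_X16_GB_S):6.1f} ms")
print(f"over fabric: {transfer_ms(payload_gb, GPU_FABRIC_GB_S):6.1f} ms")
```

Repeated on every training or inference step, that gap is why multi-GPU applications avoid round-tripping through the CPU.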


AI Testing: More Coverage, Fewer Bugs, New Risks

The productivity gains from AI in testing are substantial. One vast international bank we have helped leverage our solution managed to increase test automation coverage across two of its websites (supporting around ten different languages) from a mere forty percent to almost ninety percent in a matter of weeks. I believe this is an amazing achievement, not only because of the end results but also because working in an enterprise environment with its security and integrations can typically take forever. While traditional test automation might be limited to a single platform or language and the capacity of one person, AI-enhanced testing breaks these limitations. Testers can now create and execute tests on any platform (web, mobile, desktop), in multiple languages, and with the capacity of numerous testers. This amplifies testing capabilities and introduces a new level of flexibility and efficiency. ... Upskilling QA teams with AI brings the significant advantage of multilingual testing and 24/7 operation. In today’s global market, software products must often cater to diverse users, requiring testing in multiple languages. AI makes this possible without requiring testers to know each language, expanding the reach and usability of software products.


Why Great Leaders Embrace Broad Thinking — and How It Transforms Organizations

Broad thinking starts with employing three behaviors. First, spend time following your thoughts in an exploratory way rather than simply trying to find an answer or idea and moving on. Second, look at things from different angles and consider a wide range of options carefully before acting. Third, consistently consider the bigger picture and resist getting caught up in the smaller details. ... Companies want action. They don't want employees sitting around wringing their hands, frozen with indecision. They also don't want employees overanalyzing decisions to the point of inertia. Therefore, they often train employees to make decisions faster and more efficiently. However, decisions made for speed don't always make for great decisions. Especially seemingly simple ones that have larger downstream ramifications. ... Broad thinking considers the parts as being inseparable from the whole. The elephant parts are inseparable from the entire animal, just like the promotional campaign was inseparable from the other aspects of the organization it impacted. When you broaden your perspective, you also become more sensitive to subtleties of differentiation: how elements that are seemingly irrelevant, extraneous, or opposites can interconnect.


How Edge Computing Is Enhancing AI Solutions

Edge computing enhances the privacy and security of AI solutions by keeping sensitive data local rather than transmitting it to centralized cloud servers. Such an approach is most advantageous in industries such as healthcare, where privacy is of high value, especially in regard to patient information. By processing medical images or patient records at the edge, healthcare providers can ensure compliance with data protection regulations while still leveraging AI for improved diagnostics and treatment planning. Furthermore, edge AI minimizes the number of exposed data points that can be attacked through the network by partitioning data tasks into localized subsets. ... As the volume of data generated by IoT devices continues to grow exponentially, transmitting all this information to the cloud for processing becomes increasingly impractical and expensive. Edge computing solves this problem by sorting and analyzing data locally. This approach dramatically reduces the required bandwidth and the costs attached to it, while also enhancing overall system performance.
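
A hypothetical sketch of that local sort-and-analyze step: the raw stream stays on the edge device, and only a compact summary plus any out-of-range readings cross the network.

```python
from statistics import mean

def summarize_at_edge(readings: list[float], limit: float) -> dict:
    # Raw readings never leave the device; only this small summary
    # and the rare anomalies are transmitted, cutting bandwidth.
    anomalies = [r for r in readings if r > limit]
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
        "anomalies": anomalies,
    }

# One batch of local sensor data becomes one small upstream message.
print(summarize_at_edge([36.5, 36.7, 36.6, 39.8, 36.4], limit=38.0))
```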


Why being in HR is getting tougher—and how to break through

The HR function lives in the friction between caring for the employee and caring for the organization. HR’s role is to represent the best interests of the organizations we work for and deliver care to employees for their end-to-end life cycle at those organizations. When you live in that friction, at times, you’re underdelivering that care to employees. At this moment—when employees’ needs are at an all-time high and organizations are struggling with costs and resetting around historical growth expectations—that gap is even wider than during less volatile times. There’s also an assumption that the employees’ interests and the company’s interests aren’t aligned—when many times they are. I have several tools to help people when they’re struggling. We can get a little bit caught up in the myths and expectations of people wanting too much, and that’s where the HR professional has to pull back and say, “This is what I can do, and it’s actually quite good.” ... Trust is hard earned but can go away in a second. And it can go away in a second because of HR but also, unfortunately, because of business leaders. 



Quote for the day:

"You can't be a leader if you can't influence others to act." -- Dale E. Zand

Daily Tech Digest - November 02, 2024

Cisco takes aim at developing quantum data center

On top of the quantum network fabric effort, Cisco is developing a software package that includes entanglement distribution methods, protocols, and routing algorithms, which the company is building into a protocol stack and compiler called Quantum Orchestra. “We are developing a network-aware quantum orchestrator, which is this general framework that takes quantum jobs in terms of quantum circuits as an input, as well as the network topology, which also includes how and where the different quantum devices are distributed inside the network,” said Hassan Shapourian, Technical Leader, Cisco Outshift. “The orchestrator will let us modify a circuit for better distributability. Also, we’re going to decide which logical quantum variational circuit (QVC) to assign to which quantum device and how it will communicate with which device inside a rack.” “After that we need to schedule a set of switch configurations to enable end-to-end entanglement generation [to ensure actual connectivity]. And that involves routing as well as resource management, because we’re going to share resources, and eventually the goal is to minimize the execution time or minimize the switching events, and the output would be a set of instructions to the switches,” Shapourian said.


How CIOs Can Fix Data Governance For Generative AI

When you look at it from a consumption standpoint, the enrichment of AI happens as you start increasing the canvas of data it can pick up, because it learns more. That means it needs very clean information. It needs [to be] more accurate, because if you push in something rough, it’s going to be all trash. Traditional AI ensured that we started cleaning the data, and metadata told us if there is more data available. AI has started pushing people to create more metadata, classification, cleaner data, reduce duplicates, and ensure that there is synergy between the sets of the data and they’re not redundant. It’s cleaner, it’s more current, it’s real-time. Gen AI has gone a step further. If you want to make it contextually rich, you want to pull more RAG into these kinds of solutions, and you need to know exactly where the data sits today. You need to know exactly what is in the data to create a RAG pipeline that is clean enough to generate very accurate answers. Consumption is driving behavior. In multiple ways, it is actually driving organizations to start thinking about categorization, access controls, governance. [An AI platform] also needs to know the history of the data. All these things have started happening now because this is very complex.
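
A minimal sketch of the bookkeeping that point implies, with invented field names purely for illustration: each chunk headed for a RAG index carries lineage, classification, and currency metadata, so retrieval can be governed rather than indiscriminate.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source: str           # lineage: where the data sits today
    classification: str   # e.g. "public", "internal", "restricted"
    updated: str          # currency: when it was last refreshed

def retrievable(chunks: list[Chunk], clearance: str) -> list[Chunk]:
    # Access control applied at retrieval time: the model only ever
    # sees chunks the requesting user is cleared for.
    levels = ["public", "internal", "restricted"]
    allowed = set(levels[: levels.index(clearance) + 1])
    return [c for c in chunks if c.classification in allowed]
```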


Here’s the paper no one read before declaring the demise of modern cryptography

With no original paper to reference, many news outlets searched the Chinese Journal of Computers for similar research and came up with this paper. It wasn’t published in September, as the news article reported, but it was written by the same researchers and referenced the “D-Wave Advantage”—a type of quantum computer sold by Canada-based D-Wave Quantum Systems—in the title. Some of the follow-on articles bought the misinformation hook, line, and sinker, repeating incorrectly that the fall of RSA was upon us. People got that idea because the May paper claimed to have used a D-Wave system to factor a 50-bit RSA integer. Other publications correctly debunked the claims in the South China Morning Post but mistakenly cited the May paper and noted the inconsistencies between what it claimed and what the news outlet reported. ... It reports using a D-Wave-enabled quantum annealer to find “integral distinguishers up to 9-rounds” in the encryption algorithms known as PRESENT, GIFT-64, and RECTANGLE. All three are symmetric encryption algorithms built on a substitution-permutation network (SPN) structure.


AI Has Created a Paradox in Data Cleansing and Management

When asked about the practices required to maintain a cleansed data set, Perkins-Munn states that it is critical to think about enhancing data cleaning and quality management. Delving further, she states that there are many ways to maintain it over time and discusses a few that include AI algorithms revolving around automated data profiling and anomaly detection. Particularly in the case of unsupervised learning models, AI algorithms automatically profile data sets and detect anomalies or outliers. Continuous data monitoring is one ongoing way to keep data clean. She also mentions intelligent data matching and deduplication, wherein machine learning algorithms improve the accuracy and efficiency of data matching and deduplication processes. Apart from those, there are fuzzy matching algorithms that can identify and merge duplicate records even with minimal variations or errors. Moving forward, Perkins-Munn states that for effective data management, organizations must prioritize where to start with data cleansing, and there is no one-method-fits-all approach to it. She advises focusing on cleaning the data that directly impacts the most critical business process or decision, thus ensuring quick, tangible value.
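
The fuzzy-matching idea can be sketched with nothing but the standard library (a production system would use the trained matching models Perkins-Munn describes): two records are merged as duplicates when their similarity ratio clears a threshold despite minor variations.

```python
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.9) -> bool:
    # A ratio near 1.0 means the strings differ only in small ways,
    # such as punctuation or casing.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def dedupe(records: list[str]) -> list[str]:
    kept: list[str] = []
    for record in records:
        if not any(similar(record, k) for k in kept):
            kept.append(record)
    return kept

print(dedupe(["Acme Corp, 12 Main St", "ACME Corp., 12 Main St.", "Zenith Ltd"]))
# -> ['Acme Corp, 12 Main St', 'Zenith Ltd']
```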


A brief summary of language model finetuning

For language models, there are two primary goals that a practitioner will have when performing fine-tuning: Knowledge injection: Teach the model how to leverage new sources of knowledge (not present during pretraining) when solving problems. Alignment (or style/format specification): Modify the way in which the language model surfaces its existing knowledge base; e.g., abide by a certain answer format, use a new style/tone of voice, avoid outputting incorrect information, and more. Given this information, we might wonder: Which fine-tuning techniques should we use to accomplish either (or both) of these goals? To answer this question, we need to take a much deeper look at recent research on the topic of fine-tuning. ... We don’t need tons of data to learn the style or format of output, only to learn new knowledge. When performing fine-tuning, it’s very important that we know which goal—either alignment or knowledge injection—we are aiming for. Then, we should put benchmarks in place that allow us to accurately and comprehensively assess whether that goal was accomplished or not. Imitation models failed to do this, which led to a bunch of misleading claims/results!
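
One practical consequence is that the two goals call for different training data. A hedged sketch, with the record format and contents invented purely for illustration:

```python
# Alignment: a small, curated set often suffices, because the model is
# learning *how* to answer (format, tone), not new facts.
alignment_examples = [
    {"prompt": "Summarize the attached report.",
     "response": "## Summary\n- key point one\n- key point two"},
]

# Knowledge injection: typically needs much more data, because the model
# must absorb facts absent from pretraining (e.g., an internal manual;
# the error code below is a made-up example).
knowledge_examples = [
    {"prompt": "What does error code E-417 mean on the XR-9 unit?",
     "response": "E-417 is a coolant pressure fault; see manual section 4.2."},
]

# A benchmark should test the goal you trained for: format adherence for
# alignment, factual recall for knowledge injection.
```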
 

Bridging Tech and Policy: Insights on Privacy and AI from IndiaFOSS 2024

Global communication systems are predominantly managed and governed by major technology corporations, often referred to as Big Tech. These organizations exert significant influence over how information flows across the world, yet they lack a nuanced understanding of the socio-political dynamics in the Global South. Pratik Sinha, co-founder at Alt News, spoke about how this gap in understanding can have severe consequences, particularly when it comes to issues such as misinformation, hate speech, and the spread of harmful content. ... The FOSS community is uniquely positioned to address these challenges by collaboratively developing communication systems tailored to the specific needs of various regions. Pratik suggested that by leveraging open-source principles, the FOSS community can create platforms (such as Mastodon) that empower users, enhance local governance, and foster a culture of shared responsibility in content moderation. In doing so, they can provide viable alternatives to Big Tech, ensuring that communication systems serve the diverse needs of communities rather than being controlled by a handful of corporations with a limited understanding of local complexities.


Revealing causal links in complex systems: New algorithm reveals hidden influences

In their new approach, the engineers took a page from information theory—the science of how messages are communicated through a network, based on a theory formulated by the late MIT professor emeritus Claude Shannon. The team developed an algorithm to evaluate any complex system of variables as a messaging network. "We treat the system as a network, and variables transfer information to each other in a way that can be measured," Lozano-Durán explains. "If one variable is sending messages to another, that implies it must have some influence. That's the idea of using information propagation to measure causality." The new algorithm evaluates multiple variables simultaneously, rather than taking on one pair of variables at a time, as other methods do. The algorithm defines information as the likelihood that a change in one variable will also see a change in another. This likelihood—and therefore, the information that is exchanged between variables—can get stronger or weaker as the algorithm evaluates more data of the system over time. In the end, the method generates a map of causality that shows which variables in the network are strongly linked. 
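
The article doesn't give the algorithm itself, but the quantity it builds on, how much a change in one variable tells you about a later change in another, can be illustrated with a simple pairwise lagged mutual information estimate. This sketch only illustrates the principle; the MIT method evaluates many variables jointly rather than one pair at a time.

```python
import numpy as np

def lagged_mutual_info(x, y, lag=1, bins=8):
    """Information (in bits) that x at time t carries about y at time t+lag."""
    x, y = x[:-lag], y[lag:]
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0  # avoid log(0) on empty bins
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=5000)  # y is driven by x's past

print(lagged_mutual_info(x, y))  # clearly positive: an x -> y link
print(lagged_mutual_info(y, x))  # near zero: no y -> x link
```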


Proactive Preparation: Learning From Crowdstrike Chaos

You can’t plan for every scenario. However, having contingency plans can significantly minimise disruption if worst-case scenarios occur. Clear guidance, such as knowing who to speak to about the situation and when during outages, can help financial organisations quickly identify faults in their supply chains and restore services. ... Contractual obligations with software suppliers provide an added layer of protection if issues arise. These ensure that there’s a legally binding agreement in place to ensure suppliers handle the issue effectively. Escrow agreements are also key. They protect the critical source code behind applications by keeping a current copy in escrow and can help organisations manage risk if a supplier can no longer provide software or updates. ... Supply chains are complex. Software providers also rely on their own suppliers, creating an interconnected web of dependencies. Organisations in the sector should understand their suppliers’ contingency plans to handle disruptions in their wider supply chain. Knowing these plans provides peace of mind that suppliers are also prepared for disruptions and have effective steps in place to minimise any impact.


AI Drives Major Gains for Big 3 Cloud Giants

"Over the last four quarters, the market has grown by almost $16 billion, while over the previous four quarters the respective figure was $10 billion," John Dinsdale, chief analyst at Synergy Research Group, wrote in a statement. "Given the already massive size of the market, we are seeing an impressive surge in growth." ... The Azure OpenAI Service emerged as a particular bright spot, with usage more than doubling over the past six months. AI-based cloud services overall are helping Microsoft's cloud business. ... According to Pichai, Google Cloud's success is focused around five strategic areas. First, its AI infrastructure demonstrated leading performance through advances in storage, compute, and software. Second, the enterprise AI platform, Vertex, showed remarkable growth, with Gemini API calls increasing nearly 14 times over a six-month period. ... Looking ahead, AWS plans increased capital expenditure to support AI growth. "It is a really unusually large, maybe once-in-a-lifetime type of opportunity," Jassy said about the potential of generative AI. "I think our customers, the business, and our shareholders will feel good about this long term that we're aggressively pursuing it."


GreyNoise: AI’s Central Role in Detecting Security Flaws in IoT Devices

GreyNoise’s Sift is powered by large language models (LLMs) trained on a massive amount of internet traffic – including traffic targeting IoT devices – and can identify anomalies in the traffic that traditional systems could miss, they wrote. They said Sift can spot new anomalies and threats that haven’t been identified or don’t fit the signatures of known threats. The honeypot analyzes real-time traffic and uses the vendor’s proprietary datasets and then runs the data through AI systems to separate routine internet activity from possible threats, which whittles down what human researchers need to focus on and delivers faster and more accurate results. ... The discovery of the vulnerabilities highlights the larger security issues for an IoT environment that numbers 18 billion devices worldwide this year and could grow to 32.1 billion by 2030. “Industrial and critical infrastructure sectors rely on these devices for operational efficiency and real-time monitoring,” the GreyNoise researchers wrote. “However, the sheer volume of data generated makes it challenging for traditional tools to discern genuine threats from routine network traffic, leaving systems vulnerable to sophisticated attacks.”



Quote for the day:

"If you're not confused, you're not paying attention." -- Tom Peters

Daily Tech Digest - November 01, 2024

How CISOs can turn around low-performing cyber pros

When facing difficulties in both their professional and personal lives, people can start to withdraw and be less interested in contributing, even doing the bare minimum. They might also make mistakes more often or miss deadlines, or they can care less about how their colleagues or managers perceive their work. Body language can also provide insight into an employee’s emotional state and engagement level. When assigning tasks, Michelle Duval, founder and CEO at Marlee, a collaboration and performance AI for the workplace, looks her colleagues in the eyes. “Avoiding eye contact or visible sighing… are helpful clues,” she says. ... When it comes to helping employees improve their performance, the key point is to understand why they have problems in the first place and act quickly. “The best coaching depends on what type of problem you’re fixing,” says Caroline Ceniza-Levine, executive recruiter and career coach. “If the employee’s work product is suffering, they may need more direction or skills training. If the employee is disengaged, they may need help getting motivated – in this case, giving them more information around why their work matters and how important their contribution is may help.”


AI in Finserv: Predictive Analytics to Inclusive Banking

AI’s ability to synthesise vast amounts of data allows organisations to connect data from previously disparate sources, and then analyse it to detect historical patterns and deliver forward-looking insights. In the banking industry, this is happening at both a high level through traditional data analysis, and, increasingly, through more advanced AI tools including Natural Language Processing (NLP) and Machine Learning (ML). As organisations continue gathering these predictive analytics, many are also in the process of providing feedback to their AI systems which will ultimately improve their predictive accuracy over time. The main use case in which banks are currently seeing the biggest impact from AI-powered predictive insights is in forecasting consumer behaviour. ... AI-powered fraud detection algorithms can analyse vast amounts of transaction data in real-time at a scale that’s unattainable by humans. The real-time nature of these systems also allows organisations to prevent loss by intercepting anomalous transactions before they’re settled. This scalable, automatic approach also makes it easier for financial organisations to stay in compliance with relevant anti-money laundering (AML) and anti-terrorist financing regulations and avoid steep penalties.
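
As a toy illustration of the real-time mechanism (a production system scores many features with trained models, not a single statistic), here is a running per-account baseline that intercepts an anomalous amount before it settles:

```python
from collections import defaultdict, deque
from statistics import mean, stdev

history = defaultdict(lambda: deque(maxlen=50))  # recent amounts per account

def hold_for_review(account: str, amount: float, z_limit: float = 3.0) -> bool:
    past = history[account]
    flagged = False
    if len(past) >= 10:
        mu, sigma = mean(past), stdev(past)
        # Score the transaction against the account's own baseline,
        # before it is settled.
        flagged = sigma > 0 and (amount - mu) / sigma > z_limit
    past.append(amount)
    return flagged

for amt in [42.0, 38.5, 45.0, 41.2, 39.9, 44.1, 40.3, 43.7, 41.8, 42.6, 975.0]:
    if hold_for_review("acct-1", amt):
        print(f"intercepted before settlement: {amt}")
```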


Critical Software Must Drop C/C++ by 2026 or Face Risk

The federal government is heightening its warnings about dangerous software development practices, with the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI) issuing stark warnings about basic security failures that continue to plague critical infrastructure. ... The report also states that the memory safety roadmap should outline the manufacturer’s prioritized approach to eliminating memory safety vulnerabilities in priority code components. “Manufacturers should demonstrate that the memory safety roadmap will lead to a significant, prioritized reduction of memory safety vulnerabilities in the manufacturer’s products and demonstrate they are making a reasonable effort to follow the memory safety roadmap,” the report said. “There are two good reasons why businesses continue to maintain COBOL and Fortran code at scale. Cost and risk,” Shimmin told The New Stack. “It’s simply not financially possible to port millions of lines of code, nor is it a risk any responsible organization would take.” ... Finally, it is good that CISA is recommending that companies with critical software in their care should create a stated plan of attack by early 2026, Shimmin said.


Into the Wild: Using Public Data for Cyber Risk Hunting

Threat hunting, by contrast, is a proactive approach. It means that cyber teams go out into the wild and proactively identify potential risks and threat patterns, isolating them before they can cause any harm. A threat-hunting team requires specific knowledge and skills. Therefore, it usually consists of various professionals, such as threat analysts, who analyze available data to understand and predict the attacker's behavior; incident responders, who are ready to reduce the impact of a security incident; and cybersecurity engineers, responsible for building a secure network solution capable of protecting the network from advanced threats. These teams are trained to understand their company's IT environment, gather and analyze relevant data, and identify potential threats. Moreover, they have a clear risk escalation and communication process, which helps them react effectively to threats and mitigate risks. Specialists often use a combination of tools that help in threat hunting. ... Endpoint detection and response (EDR) systems combine continuous real-time monitoring and collection of endpoint data with a rule-based automated response.


How to Keep IT Up and Running During a Disaster

Using IoT sensing technology can provide early warning of disaster events and keep an eye on equipment if human access to facilities is cut off. Sensors and cameras can be helpful in determining when it may be appropriate to switch operations to other facilities or back up servers. Moisture sensors, for example, can detect whether floods may be on the verge of impacting device performance. ... In disaster-prone regions, it is advisable to proactively facilitate relationships with government authorities and emergency response agencies. This can be helpful both in ensuring continued compliance and assistance in the event of a natural disaster. “There are certain aspects of [disaster response] that need to be captured,” Miller says. “A lot of times in crisis mode, that becomes a secondary focus. But [disaster management] systems allow the tracking and the recording of that information.” Being aware of deadlines for compliance reporting and being in contact with regulators if they might be missed can save money on potential fines and penalties. And notifying emergency response agencies may result in prioritization of assistance given the economic imperatives of IT continuity.


Breaking Down Data Silos With Real-Time Streaming

Traditional "extract, transform, load" and "extract, load, transform" data pipelines have historically been the primary method for moving data into analytics. But analytics consumers have often had limited control or influence over the source data model, which is typically defined by application developers in the operational domain. Data is also often stale and outdated by the time it arrives for processing. "By shifting data processing and governance, organizations can eliminate redundant pipelines, reduce the risk and impact of bad data at its source, and leverage high-quality, continuously up-to-date data assets for both operational and analytical purposes," LaForest said. Real-time data streaming is especially crucial in sectors such as finance, e-commerce and logistics, where even a few seconds of delay can negatively impact customer satisfaction and profitability. ... Real-time data streaming is emerging as the foundation for the next wave of AI innovation. For predictive AI and pattern recognition, data needs to be available in real time to drive accurate, immediate insights. Real-time data pipelines are essential for enabling AI systems to deliver smarter, faster insights and drive more accurate decision-making across the enterprise.


Is now the right time to invest in implementing agentic AI?

What makes agentic AI autonomous or able to take actions independently is its ability to interpret data, predict outcomes, and make decisions, learning from new data — unlike traditional RPA, which falters when encountering unexpected data, said Cameron Marsh, senior analyst at Nucleus Research. This adaptive nature of agentic AI, according to Chada, can help enterprises increase efficiency by handling complex, variable tasks that traditional RPA can’t manage, such as the roles of a claims adjuster, a loan officer, or a case worker, provided that it has access to the necessary data, workflows, and tools required to complete the task. ... Some platform vendors are already offering low-code and no-code agent development and management platforms, but these are limited in their functionality to building simple agents or modifying templates for agents built by the vendors themselves, analysts said. “Creating more complex agents, specifically ones that require customized integrations and nuanced decision-making abilities still demands some technical understanding of data flows, machine learning model tuning, and API integrations,” Futurum’s Hinchcliffe said, adding that there is a learning curve on these platforms and that the migration journey could be resource intensive.
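
The contrast with scripted RPA can be sketched as a control loop. In this schematic the decision step is a stub; in a real agent it would be a model interpreting state and choosing among available tools, which is exactly where unexpected data stops being fatal:

```python
def decide(state: dict) -> str:
    # Stub for illustration: a real agent would have an LLM or planner
    # choose the next action from the observed state, not a fixed script.
    if state.get("missing_documents"):
        return "request_documents"
    if state.get("claim_amount", 0) > 10_000:
        return "escalate_to_human"
    return "approve"

def run_agent(state: dict, max_steps: int = 5) -> str:
    for _ in range(max_steps):
        action = decide(state)
        if action == "request_documents":
            state["missing_documents"] = []  # assume the documents arrive
            continue
        return action                        # terminal decision reached
    return "escalate_to_human"               # safe fallback

print(run_agent({"claim_amount": 2_500, "missing_documents": ["invoice"]}))
# -> approve
```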


How open-source MDM solutions simplify cross-platform device management

Few MDM solutions effectively address the challenge of device diversity, as most are designed to manage specific hardware or software platforms. This limitation forces businesses to juggle multiple solutions to cover their entire device ecosystem. Open-source MDM solutions, however, offer flexible, modular architectures that adapt to various operating systems and device types. Open standards and extensible APIs ensure cross-platform compatibility, from mobile devices to servers to IoT endpoints. Unified management interfaces abstract platform complexities, providing consistent administration across diverse devices, while collaboration with open-source communities broadens device support. These approaches simplify management for IT teams in heterogeneous environments, reducing the need for multiple specialized solutions. ... An effective MDM solution enhances device management in remote locations by enabling developers and administrators to create lightweight agents for low-bandwidth environments and implement platform-agnostic policies for diverse ecosystems. With custom scripts and modular components, businesses can tailor management workflows to align with specific operational demands, ensuring seamless integration across various environments. 


4 Essential Strategies for Enhancing Your Application Security Posture

Whatever the cause, the torrent of false positives wastes time, lowers security team morale, and obscures real threats. As a result, risks of a major oversight increase, and response time to actual threats slows, leading to undetected breaches, data loss, financial damage, and erosion of customer trust. ... To successfully implement shifting left, AppSec must deliver solutions that eliminate the burden of manual security tasks. The ASPM strategy is to integrate tools directly into the development environment to make security checks a seamless part of the development workflow. Such integrations would provide real-time feedback and actionable security guidance, minimizing disruptions and significantly enhancing productivity. ... One of the biggest challenges in AppSec today is tool sprawl. The wide array of tools promising to plug different security gaps burdens security teams with a complex security ecosystem that locks critical data into tool-specific silos. This data fragmentation makes it impossible for security teams to gain a holistic view of the security environment, leading to confusion and missed vulnerabilities when insights from one tool don’t correlate with insights from another.


How a classical computer beat a quantum computer at its own game

Confinement is a phenomenon that can arise under special circumstances in closed quantum systems and is analogous to the quark confinement known in particle physics. To understand confinement, let's begin with some quantum basics. On quantum scales, an individual magnet can be oriented up or down, or it can be in a "superposition"—a quantum state in which it points both up and down simultaneously. How up or down the magnet is affects how much energy it has when it's in a magnetic field. ... Serendipitously, IBM had, in their initial test, set up a problem where the organization of the magnets in a closed two-dimensional array led to confinement. Tindall and Sels realized that since the confinement of the system reduced the amount of entanglement, it kept the problem simple enough to be described by classical methods. Using simulations and mathematical calculations, Tindall and Sels came up with a simple, accurate mathematical model that describes this behavior. "One of the big open questions in quantum physics is understanding when entanglement grows rapidly and when it doesn't," Tindall says. 



Quote for the day:

"The meaning of life is to find your gift. The purpose of life is to give it away." -- Anonymous

Daily Tech Digest - October 28, 2024

Generative AI isn’t coming for you — your reluctance to adopt it is

Faced with a growing to-do list and the new balancing act of returning from maternity leave to an expanded role leading public relations for a publicly-traded tech company, I opened Jasper AI. I admittedly smirked at some of the functionality. Changing the tone? Is this AI emotionally intelligent? Maybe more so than some former colleagues. I began on a blank screen. I started writing a few lines and asked the AI to complete the piece for me. I reveled in the schadenfreude of its failure. It summarized what I had written at the top of the document and just spit it out below. Ha! I had proven my superiority. I went back into my cave, denying myself and my organization the benefits of this transformative technology. The next time I used gen AI, something in me changed. I realized how much prompting matters. You can’t just type a few initial sentences and expect the AI to understand what you want. It still can’t read our minds (I think). But there are dozens of templates that the AI understands. For PR professionals, there are templates for press releases, media pitches, crisis communications statements, press kits and more.


What's Preventing CIOs From Achieving Their AI Goals?

"While no CIO wants to be left behind, they are also prudent about their AI adoption journeys and how they implement the technology for business in a responsible manner," said Dr. Jai Ganesh, chief product officer, HARMAN International. "While there are many business use cases, enterprises are prioritizing these on a must-have immediately to implement basis." ... He also oversees AI implementation across his company. Technology leaders say it will take at least two to three years before AI becomes mainstream across the enterprise. Rakesh Jayaprakash, chief analytics evangelist at ManageEngine, told ISMG that we would start to see "very tangible results" at a larger scale in another one or two years. "Tangible results" refer to commoditization of AI, which accelerates the ROI, he said. "While there is a lot of hype around AI now, the true value comes when the organizations are able to see the outcomes," Jayaprakash said. "Right now, many organizations jump in with very high expectations of what is possible through AI, because we've started to use tools such as ChatGPT to accomplish very simple tasks. But when it comes to organization-level use cases, those are a little more complex."


Bridging the Data Gap: The Role of Industrial DataOps in Digital Transformation

One of the main issues faced by organizations is the lack of context in industrial data. Unlike IT systems, where data is typically well-defined and structured, data from industrial environments often lacks the necessary context to be useful. For example, a temperature reading from a manufacturing machine might be labeled simply as “temperature sensor 1,” leaving operators to guess its relevance without proper identification. This lack of contextualization—when applied to thousands of data points across multiple facilities—is a major barrier to advanced analytics and digitalization programs. ... By implementing Industrial DataOps, organizations can address this gap by contextualizing data as close to the source as possible—ideally at the edge of the network. This approach empowers operators who have tribal knowledge of the data and its sources to deliver ready-to-use data to IT and line of business users in their organization. Decisions become faster and more informed. The ultimate goal is to transform raw data into valuable insights that drive operational improvements. ... As organizations adopt Industrial DataOps, they unlock the potential for rapid innovation. With a solid data management framework in place, OT teams can easily explore new use cases and validate hypotheses.


Ensuring AI-readiness of Data Is a Long-term Commitment

Data becomes intellectual property when one enters the world of GenAI; it is the means by which one can customize algorithms to reflect the brand voice and deliver great client services. Keeping the scenario in mind, Birkhead states that modernizing data and ensuring its AI-readiness is a long-term commitment. While organizations can make incremental progress year after year, building an analytic factory to produce AI models that support the business takes strategy, investment, and an enabling leadership team. Highlighting JPMC’s data strategy, Birkhead states that the components include data design principles, operating models, principles around platforms, tooling, and capabilities. Additionally, talent, governance, data, and AI ethics also come into play, but the ultimate goal is to have incredibly high-quality data that is self-describing and understandable by both humans and machines. From Birkhead’s standpoint, to be AI-ready with data, organizations have to get data to a state where a data scientist, user, or AI researcher can go into a marketplace and understand everything about the data.


Business Etiquette Classes Boom as People Relearn How to Act at Work

Workers who had substantial professional experience before the pandemic, including managers and executives, still need help adapting to hybrid and remote work, Senning said. He has been coaching leaders on best practices for such things as communicating through your calendar and deciding whether to call, text or use Slack to reach an employee. Establishing etiquette for video meetings has also been a challenge for many firms, he notes. Bad behavior in virtual meetings has occasionally made headlines in recent years, such as the backlash against Vishal Garg, CEO of the mortgage lending firm Better.com, for announcing mass layoffs over Zoom ahead of the holidays in 2021. "If I had a magic button that I could push that could get people to treat video meetings with 50 percent of the same level of professionalism they treat an in-person meeting, I would make a lot of HR, personnel managers, and executives very, very happy," Senning said. Tech companies also are paying for etiquette and professionalism training for their workers, especially if they're bringing in employees who have never worked in person before, according to Crystal Bailey, director of the Etiquette Institute of Washington, who counts Amazon among her clients.


Exploring the Power of AI in Software Development - Part 1: Processes

AI holds the power to significantly enhance the requirement analysis and planning processes at the early stages of the software development life cycle (SDLC). It can analyze massive amounts of data in order to identify user needs and preferences, allowing developers to make informed decisions about features and functionality. ... AI can also look at coding rates per user story within an app architecture context and allow Product Managers to better determine project timelines and resource needs. In doing so, they can more accurately predict the risk-reward of time-to-market versus high quality for every release, knowing that no software will be 100% defect-free. ... With AI, you have a pair programmer who has infinite patience. Someone who will not judge you for seemingly "stupid" questions. Having this kind of support can increase an engineer's capabilities and productivity. So often as a junior engineer, I was afraid to ask the senior engineers on my team questions because I thought I should know the answer. Engineers can use AI without the worry of judgment, so no question is stupid, no answer should be known.


How AI is Shaping the Future of Product Development

Product testing and iteration processes are also being revolutionized by AI, which results in shorter development cycles and better product outcomes as well. While tried and true testing methods can work well, they often have long cycles or may miss problems. In contrast to traditional testing, AI-driven automation offers a new degree of efficiency and accuracy. AI tools for early-stage testing make it possible to discover issues quickly and try out potential applications, which lowers the demand on manual resources spent validating components or debugging. Not just that, AI's ability to analyze code bases comprehensively provides targeted insights for ongoing improvements. By integrating AI into testing processes, businesses can accelerate development cycles, reduce costs, and deliver products that better align with user expectations. ... By embedding AI into their growth strategies, companies can benefit in numerous ways. It allows more targeted and personalized experiences to be delivered, tailoring the products or services provided by companies. Such a custom-built solution not only enhances user experience but also helps create brand loyalty. Additionally, AI enables data-driven decision-making that facilitates strategic planning and execution.


From Safety to Innovation: How AI Safety Institutes Inform AI Governance

According to the report, this “first wave” of AISIs has three common characteristics: Safety-focus: The first wave of AISIs was informed by the Bletchley AI Safety Summit, which declared that “AI should be designed, developed, deployed, and used in a manner that is safe, in such a way as to be human-centric, trustworthy, and responsible.” These institutes are particularly concerned with mitigating abuse and safeguarding frontier AI models. Government-led: These AISIs are governmental institutions, providing them with the “authority, legitimacy, and resources” needed to address AI safety issues. Their governmental status helps them access leading AI models to run evaluations, and importantly, it gives them greater leverage in negotiating with companies unwilling to comply. Technical: AISIs are focused on attracting technical experts to ensure an evidence-based approach to AI safety. The report also points out some key ways AISIs are unique. For one, AISIs are not a “catch-all” entity to tackle the complex and ever-evolving AI governance landscape. They are also relatively free of the bureaucracy commonly associated with governmental agencies. This may be due to the fact that these institutes have very little regulatory authority and focus more on establishing best practices and conducting safety evaluations to inform responsible AI development.


Current Top Trends in Data Analytics

One of the most impactful data analytics trends right now is the integration of AI and machine learning (ML) into analytics frameworks, observes Anil Inamdar, global head of data services at data monitoring and management firm Instaclustr by NetApp, in an online interview. "We are seeing the emergence of a new data 4.0 era, which builds on previous shifts that focused on automation, competitive analytics, and digital transformation," Inamdar states. "This distinct new phase leverages AI/ML and generative AI to significantly enhance data analytics capabilities," he says. While the transformative potential is now here for the taking, enterprises must carefully strategize across several key areas. ... Data governance should be a top concern for all enterprises. "If it isn't yours, you’re heading for a world of hurt," warns Kris Moniz, national data and analytics practice lead for business and technology advisory firm Centric Consulting, via email. Data governance dictates the rules under which data should be managed, Moniz says. "It doesn’t just do this by determining who gets access to what," he notes. "It also does it by defining what your data is, setting processes that can guarantee its quality, building frameworks that align disparate systems across common domains, and setting standards for common data that all systems should consume."


Effective Data Mesh Begins With Robust Data Governance

When implemented correctly, removing the dependency on centralised systems and IT teams can truly transform the way organisations operate. However, introducing a data mesh can also raise fears and concerns relating to storage, duplication, management, and compliance, all of which must be addressed if it is to succeed. With decentralised data management, it’s also critical that everyone follows the same stringent set of rules, particularly regarding the creation, storage, and protection of data. If not, issues will quickly arise. Additionally, if any team leaders or department heads put their own tools or processes in place, the results may cause far more problems than they solve. Trusting individuals to stick to data guidelines is too risky. Instead, adherence should be enforced in a way that ensures standards are followed, without impacting agility or frustrating users. This may sound impractical, but a computational governance approach can impose the necessary restrictions, while at the same time accelerating project delivery. Naturally, not everyone will be quick (or keen) to adjust, but with additional support and training even the most reluctant individuals can learn how to adopt a more entrepreneurial mindset.



Quote for the day:

"Trust is the lubrication that makes it possible for organizations to work." -- Warren G. Bennis

Daily Tech Digest - October 27, 2024

Who needs a humanoid robot when everything is already robotic?

The service sector will see a surge in delivery robots, streamlining last-mile package and food delivery logistics. Advanced cleaning robots will maintain both homes and commercial spaces. Surgical robots performing minimally invasive procedures with high precision will benefit healthcare. Rehabilitation robots and exoskeletons will transform physical therapy and mobility, while robotic prosthetics will offer enhanced functionality to those who need them. At the microscopic level, nanorobots will revolutionize drug delivery and medical procedures. Agriculture will increasingly embrace harvesting and planting robots to automate crop management, with specialized versions for tasks like weeding and dairy farming. Autonomous vehicles and drone delivery systems will transform the transportation sector, while robotic parking solutions will optimize urban spaces. Military and defense applications will include reconnaissance drones, bomb disposal robots, and autonomous combat vehicles. Space exploration will continue to rely on advanced rovers, satellite-servicing robots, and assistants for astronauts on space stations. Underwater exploration robots and devices monitoring air and water quality will benefit environmental and oceanic research.


Cybersecurity Isn't Easy When You're Trying to Be Green

Already, some green energy infrastructure has fallen prey to attackers. Charging stations for electric vehicles typically require connectivity, which makes them vulnerable to both compromise and disruption. In 2022, pro-Ukrainian hacktivists compromised chargers in Moscow to display messages of support for Ukraine. In 2019, a solar firm could no longer manage its 500 megawatts of wind and solar sites in the western US after a denial-of-service attack targeted an unpatched firewall, the FBI stated in a Private Industry Notification (PIN) in July. The risk could extend all the way to homeowners, who increasingly have adopted rooftop solar and need to be connected to be able to deliver their solar power and be credited. "This issue will only become more important as small solar systems continue to grow. When every house is a power plant, every house is a target," Morten Lund, of counsel for Foley & Lardner LLP, wrote in a brief directed at energy companies. "In many ways, the distributed nature of solar energy provides significant protection against catastrophic failures. But without sufficient protection at the project level, this strength quickly becomes a weakness."


A look at risk, regulation, and lock-in in the cloud

The threat here, if indeed it is a threat, is multifaceted. Firstly, financial implications can be significant. When a company heavily invests in a specific vendor’s ecosystem, the costs of migrating to a different provider, both in terms of money and resources, can be prohibitive. The reality is that any technology comes with a certain degree of lock-in. That is why I’m often amazed at enterprises that ask me for zero lock-in in any enterprise technology decision. It just does not exist. The question is how do we minimize the impact of the lock-in that any use of technology brings. This is something I explain extensively to enterprises. The risk is operational; dependencies on proprietary APIs and services might necessitate extensive application rewriting. ... Whether governmental regulation is a boon or a bane is a matter of perspective. On one side, it could enforce fairness, ensuring that no single provider exploits its position to the detriment of customers. Conversely, excessive regulation might stifle innovation and limit the aggressive evolution that characterizes the tech world. Also, we should consider that these regulations exist within one or a few countries, and as enterprises are now mostly international firms, that has less of the chilling effect that most expect.


Biometrics options expand, add more layers to secure financial services

The range of technologies being brought to bear against different fraud vectors also includes Herta’s biometrics being utilized by the EU’s EITHOS project to detect deepfakes, and age assurance and automated border control measures a pair of governments are looking into for contract opportunities. ... Mastercard is rolling out passkeys for payments in the Middle East and North Africa, following their launch in India. Starting with the noon Payments platform in the UAE, the Payment Passkey Service will be offered as a more secure alternative to OTPs at online checkouts. A Washington, D.C.-based think tank says America has a digital verification divide, due to the lack of documents possessed by low-income and marginalized people and the conflation of biometrics for ID verification with surveillance and law enforcement. Login.gov has helped less than it is supposed to so far, but evidence from ID.me suggests that the situation could be improved with biometrics. Panama has introduced a national digital ID and wallet for identity verification to access public and private services online. The digital ID is available to both citizens and permanent residents, and essentially digitizes the national ID card supplied by Mühlbauer and partners.


AI Won’t Fix Your Software Delivery Problems

You can assess your personal productivity because it’s a feeling rather than a number: you don’t feel productive when dealing with busywork or handling constant interruptions, and when you get a solid chunk of time to complete a task, you feel great. If an organization is interested in this kind of productivity, it should check in on employee satisfaction, because people tend to be more satisfied when they can get things done. The State of DevOps report confirms the problem: high ratings for AI-driven productivity aren’t reducing toil work or improving software delivery performance, which we’ve long held to be a solid way for development teams to contribute to the organization’s goals. ... Given the intense focus on increasing the speed of coding, we’re likely seeing suboptimization on a massive scale. Writing code is rarely the bottleneck for feature development. Speeding up coding itself is less valuable if you aren’t catching the bugs it introduces with automated tests, and it neither addresses the broader software delivery system nor guarantees your features are useful to users. If you aren’t working at the constraint, your optimizations don’t improve throughput; in many cases, optimizing away from the constraint harms the end-to-end system.
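
The constraint argument is easy to check with a toy model. In the sketch below, the stage capacities are invented for illustration; the point is only that end-to-end throughput is set by the slowest stage, so speeding up any other stage changes nothing.

```python
# Hypothetical weekly capacities for three delivery stages (features/week).
stages = {"write code": 10, "review + test": 4, "deploy + release": 8}

def throughput(capacities: dict) -> int:
    # End-to-end flow can never exceed the constraint (the slowest stage).
    return min(capacities.values())

print(throughput(stages))   # 4 -- review + test is the constraint

stages["write code"] = 20   # double coding speed with AI assistance
print(throughput(stages))   # still 4 -- throughput is unchanged
```

Worse, the extra unreviewed code piles up as work in progress in front of the constraint, which is exactly the suboptimization the article describes.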


The mainframe’s future in the age of AI

As a trend, running AI on mainframes is still in its infancy, but the survey suggests many companies do not plan to give up their mainframes even as AI creates new computing needs, says Petra Goude ... “AI can be assistive technology,” Dyer says. “I see it in terms of helping to optimize the code, modernize the code, renovate the code, and assist developers in maintaining that code.” ... “Many institutions are willing to resort to artificial intelligence to help improve outdated systems, particularly mainframes,” he says. “AI reduces the burden on several work phases, such as code rewriting or replacing databases, which streamlines the whole upgrading stage.” ... Many organizations have their mission-critical data residing on mainframes, and it may make sense to run AI models where that data resides, Dyer says. In some cases, that may be a better alternative than moving mission-critical data to other hardware, which may not be as secure or resilient, she adds. “You have both your customer data and then you have what I’ll call the operational data on the mainframe,” she says. “I can see the value of being able to develop and run your models directly right there, because you don’t have to move your data, you have very low latency, high throughput, all those things that you would want for certain types of AI applications.”


How (and why) federated learning enhances cybersecurity

Federated learning’s popularity is rapidly increasing because it addresses common development-related security concerns. It is also highly sought after for its performance advantages. Research shows this technique can improve an image classification model’s accuracy by up to 20% — a substantial increase. ... Once the primary algorithm aggregates and weights participants’ updates, the resulting model can be reshared for whatever application it was trained for. Cybersecurity teams can use it for threat detection. The advantage here is twofold: threat actors are left guessing because they cannot easily exfiltrate training data, while professionals pool insights for highly accurate output. Federated learning is ideal for adjacent applications like threat classification or indicator-of-compromise detection. The model’s large effective dataset and extensive training build a broad knowledge base, and cybersecurity professionals can use it as a unified defense mechanism to protect broad attack surfaces. ML models — especially those that make predictions — are prone to drift over time as concepts evolve or variables become less relevant. With federated learning, teams can periodically update their model with varied features or data samples, resulting in more accurate, timely insights.
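
For readers unfamiliar with the aggregation step, here is a minimal sketch in the style of federated averaging (FedAvg) using NumPy. Weighting each update by its sample count is one common convention, and all the numbers are made up.

```python
import numpy as np

def federated_average(client_updates: list, sample_counts: list) -> np.ndarray:
    """Combine per-client model weights without ever seeing raw data:
    each participant trains locally (e.g., on its own threat telemetry)
    and shares only weight vectors."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(client_updates, sample_counts))

# Three hypothetical security teams train a shared detector locally...
updates = [np.array([0.9, 0.1]), np.array([0.7, 0.3]), np.array([0.8, 0.2])]
counts = [1000, 3000, 2000]   # local sample sizes

global_model = federated_average(updates, counts)
print(global_model)   # new global weights, reshared to all participants
```

Re-running this aggregation on a schedule with fresh local data is exactly the periodic update that counters the model drift described above.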


Augmented Reality's Healthcare Revolution

Many observers believe that AR's most immediate benefit will be in training both current and future healthcare professionals. "AR enables students to interact with virtual content in a real-world setting, providing contextualized learning experiences," Stegman says. Meanwhile, full virtual reality (VR) will offer a completely immersive training environment in which students can practice clinical skills without the risks associated with real patient care. ... As AR begins entering the healthcare mainstream, deep-pocketed large hospitals and specialized medical centers will most likely be the leading adopters, says SOTI's Anand. He reports that his firm's latest healthcare report found that 89% of US healthcare industry respondents agree that artificial intelligence simplifies tasks. "This gives a hint that healthcare organizations are already on the path to integrating advanced technologies," Anand notes. ... AR technology is evolving quickly, and improvements in hardware (such as AR glasses and headsets), software, and integration with other medical technologies are making AR more practical and effective. "As these technologies mature, they will become more accessible and affordable," Reitzel predicts.


Achieving peak cyber resilience

In a non-malicious, traditional disaster such as hardware failure or accidental deletion, the backup platform isn’t a target: recovery is straightforward with a recent backup copy, which you can quickly restore to the original location or an alternative one. In contrast, a cyberattack maliciously goes after anything and everything, making recovery complex. Backups are an especially attractive target for hackers because they represent an organization’s last line of defense. In a cyberattack scenario, the priority is containing the breach to stop further damage. Forensics teams must pinpoint how the attacker gained entry, find vulnerabilities and malware, and prevent reinfection by diagnosing which systems were potentially affected. Data decontamination is then needed to ensure threats aren’t reintroduced during recovery. Ransomware events can also necessitate coordination across IT disciplines, business teams, legal, public relations, investor relations, and government entities. Disaster recovery is likely something your organization deals with only infrequently. ... Cybercriminals have been enjoying the first-mover advantage in putting AI to work for their nefarious purposes. AI tools have allowed them to increase the frequency, speed, and scale of their attacks. But now it’s time to fight fire with fire.
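
One small, concrete control within that decontamination step is refusing to restore any artifact whose integrity cannot be verified against a record made at backup time. A minimal sketch, assuming digests were written to immutable (e.g., WORM) storage when the backup was taken; the function name and storage scheme are illustrative.

```python
import hashlib
from pathlib import Path

def safe_to_restore(backup: Path, recorded_sha256: str) -> bool:
    """Compare the backup's current hash against the digest recorded in
    immutable storage at backup time. A mismatch means possible tampering:
    route the artifact to forensics, never to production."""
    current = hashlib.sha256(backup.read_bytes()).hexdigest()
    return current == recorded_sha256
```

A real pipeline would also scan the contents for known malware before restoring, since a matching hash only proves the backup is unchanged, not that it was clean when taken.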


Who Are the AI Goliaths in the Banking Industry? A New Index Reveals a Growing Divide

In the Leadership pillar, banks have significantly increased their AI-related communications. The 50 Index banks published over 1,250 references to “AI” across annual reports, press releases, and company LinkedIn posts — a 59% increase year-over-year. This increase in “volume” was accompanied by an increase in “substance,” both across Investor Relations materials and in the engagement of executive leaders across external media, industry conferences, and LinkedIn. As AI investments mature, the pressure is mounting for banks to demonstrate tangible returns. While 26 banks now report outcomes from AI use cases, only six disclose financial impacts, and just two (DBS and JPMorgan Chase) attempt to estimate total realized dollar outcomes across all AI investments. JPMorgan Chase, for instance, reported that the value it assigns to its AI use cases is between $1 billion and $1.5 billion, in fields such as customer personalization, trading, operational efficiencies, fraud detection, and credit decisioning. DBS, on the other hand, reported an economic value of SGD 370 million from its use of AI/ML in 2023, more than double the previous year’s figure.



Quote for the day:

"The quality of leadership, more than any other single factor, determines the success or failure of an organization." -- Fred Fiedler & Martin Chemers