Daily Tech Digest - November 06, 2024

Enter the ‘Whisperverse’: How AI voice agents will guide us through our days

Within the next few years, an AI-powered voice will burrow into your ears and take up residence inside your head. It will do this by whispering guidance to you throughout your day, reminding you to pick up your dry cleaning as you walk down the street, helping you find your parked car in a stadium lot and prompting you with the name of a coworker you pass in the hall. It may even coach you as you hold conversations with friends and coworkers, or when out on dates, give you interesting things to say that make you seem smarter, funnier and more charming than you really are. ... Most of these devices will be deployed as AI-powered glasses because that form-factor gives the best vantage point for cameras to monitor our field of view, although camera-enabled earbuds will be available too. The other benefit of glasses is that they can be enhanced to display visual content, enabling the AI to provide silent assistance as text, images, and realistic immersive elements that are integrated spatially into our world. Also, sensored glasses and earbuds will allow us to respond silently to our AI assistants with simple head nod gestures of agreement or rejection, as we naturally do with other people. ... On the other hand, deploying intelligent systems that whisper in your ears as you go about your life could easily be abused as a dangerous form of targeted influence.


How to Optimize Last-Mile Delivery in the Age of AI

Technology is at the heart of all advancements in last-mile delivery. For instance, a typical map application gives the longitude and latitude of a building — its location — and a central access point. That isn't enough data when it comes to deliveries. In addition to how much time it takes to drive or walk from point A to point B, it's also essential for a driver to understand what to do at point B. At an apartment complex, for example, they need to know what units are in each building and on which level, whether to use a front, back, or side entrance, how to navigate restricted or gated areas, and how to access parking and loading docks or package lockers. Before GenAI, third-party vendors usually acquired this data, sold it to companies, and applied it to map applications and routing algorithms to provide delivery estimates and instructions. Now, companies can use GenAI in-house to optimize routes and create solutions to delivery obstacles. Suppose the data surrounding an apartment complex is ambiguous or unclear. For instance, there may be conflicting delivery instructions — one transporter used a drop-off area, and another used a front door. Or perhaps one customer was satisfied with their delivery, but another parcel delivered to the same location was damaged or stolen. 


Cloud providers make bank with genAI while projects fail

Poor data quality is a central factor contributing to project failures. As companies venture into more complex AI applications, the demand for tailored, high-quality data sets has exposed deficiencies in existing enterprise data. Although most enterprises understood that their data could be better, they haven’t known how bad it was. For years, enterprises have been kicking the data can down the road, unwilling to fix it, while the technical debt accumulated. AI requires excellent, accurate data that many enterprises don’t have—at least, not without putting in a great deal of work. This is why many enterprises are giving up on generative AI. The data problems are too expensive to fix, and many CIOs who know what’s good for their careers don’t want to take it on. The intricacies in labeling, cleaning, and updating data to maintain its relevance for training models have become increasingly challenging, underscoring another layer of complexity that organizations must navigate. ... The disparity between the potential and practicality of generative AI projects is leading to cautious optimism and reevaluations of AI strategies. This pushes organizations to carefully assess the foundational elements necessary for AI success, including robust data governance and strategic planning—all things that enterprises are considering too expensive and too risky to deploy just to make AI work.


Why cybersecurity needs a better model for handling OSS vulnerabilities

Identifying vulnerabilities and navigating vulnerability databases is of course only part of the dependency problem; the real work lies in remediating identified vulnerabilities impacting systems and software. Aside from general bandwidth challenges and competing priorities among development teams, vulnerability management also suffers from challenges around remediation, such as the real potential that implementing changes and updates will impact functionality or cause business disruptions. ... Reachability analysis “offers a significant reduction in remediation costs because it lowers the number of remediation activities by an average of 90.5% (with a range of approximately 76–94%), making it by far the most valuable single noise-reduction strategy available,” according to the Endor report. While the security industry can beat the secure-by-design drum until it’s blue in the face and try to shame organizations into sufficiently prioritizing security, the reality is that our best bet is having organizations focus on risks that actually matter. ... In a world of competing interests, with organizations rightfully focused on business priorities such as speed to market, feature velocity, revenue and more, having developers stop wasting time and focus on the 2% of vulnerabilities that truly present risks to their organizations would be monumental.


The new calling of CIOs: Be the moral arbiter of change

Unfortunately, establishing a strategy for democratizing innovation through gen AI is far from straightforward. Many factors, including governance, security, ethics, and funding, are important, and it’s hard to establish ground rules. ... What’s clear is that tech-led innovation is no longer the sole preserve of the IT department. Fifteen years ago, IT was often a solution searching for a problem. CIOs bought technology systems, and the rest of the business was expected to put them to good use. Today, CIOs and their teams speak with their peers about their key challenges and suggest potential solutions. But gen AI, like cloud computing before it, has also made it much easier for users to source digital solutions independently of the IT team. That high level of democratization doesn’t come without risks, and that’s where CIOs, as the guardians of enterprise technology, play a crucial role. IT leaders understand the pain points around governance, implementation, and security. Their awareness means responsibility for AI and other emerging technologies has become part of a digital leader’s ever-widening role, says Rahul Todkar, head of data and AI at travel specialist Tripadvisor.


5 Strategies For Becoming A Purpose-Driven Leader

Purpose-driven leaders are fueled by more than sheer ambition; they are driven by a commitment to make a meaningful impact. They inspire those around them to pursue a shared purpose each day. This approach is especially powerful in today’s workforce, where 70% of employees say their sense of purpose is closely tied to their work, according to a recent report by McKinsey. Becoming a purpose-driven leader requires clarity, strategic foresight, and a commitment to values that go beyond the bottom line. ... Aligning your values with your leadership style and organizational goals is essential for authentic leadership. “Once you have a firm grasp of your personal values, you can align them with your leadership style and organizational goals. This alignment is crucial for maintaining authenticity and ensuring that your decisions reflect your deeper sense of purpose,” Blackburn explains. ... Purpose-driven leaders embody the values and behaviors they wish to see reflected in their teams. Whether through ethical decision-making, transparency, or resilience in the face of challenges, purpose-driven leaders set the tone for how others in the organization should act. By aligning words with actions, leaders build credibility and trust, which are the foundations of sustainable success.


Chaos Engineering: The key to building resilient systems for seamless operations

The underlying philosophy of Chaos Engineering is to encourage building systems that are resilient to failures. This means incorporating redundancy into system pathways, so that the failure of one path does not disrupt the entire service. Additionally, self-healing mechanisms can be developed such as automated systems that detect and respond to failures without the need for human intervention. These measures help ensure that systems can recover quickly from failures, reducing the likelihood of long-lasting disruptions. To effectively implement Chaos Engineering and avoid incidents like the payments outage, organisations can start by formulating hypotheses about potential system weaknesses and failure points. They can then design chaos experiments that safely simulate these failures in controlled environments. Tools such as Chaos Monkey, Gremlin, or Litmus can automate the process of failure injection and monitoring, enabling engineers to observe system behaviour in response to simulated disruptions. By collecting and analysing data from these experiments, organisations can learn from the failures and use these insights to improve system resilience. This process should be iterative, and organisations should continuously run new experiments and refine their systems based on the results.
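To make the hypothesis-experiment loop above concrete, here is a minimal, tool-agnostic sketch in Python; the service behaviour, latency threshold, and injected fault are illustrative assumptions rather than output from Chaos Monkey, Gremlin, or Litmus.

```python
# A minimal sketch of the hypothesis -> inject -> observe loop described above.
# Thresholds and the simulated fault are illustrative assumptions only.
import random


def steady_state_ok(latencies_ms, threshold_ms=250):
    """Hypothesis: 95% of requests complete under the threshold."""
    under = sum(1 for l in latencies_ms if l < threshold_ms)
    return under / len(latencies_ms) >= 0.95


def call_service(fault_injected):
    """Simulate a request; an injected fault adds latency to some calls."""
    base = random.uniform(20, 120)
    if fault_injected and random.random() < 0.3:
        base += random.uniform(300, 800)  # simulated degraded dependency
    return base


def run_experiment(samples=200):
    baseline = [call_service(fault_injected=False) for _ in range(samples)]
    assert steady_state_ok(baseline), "System unhealthy before the experiment"

    with_fault = [call_service(fault_injected=True) for _ in range(samples)]
    if steady_state_ok(with_fault):
        print("Hypothesis held: the system tolerated the injected fault")
    else:
        print("Hypothesis failed: add redundancy or self-healing and rerun")


if __name__ == "__main__":
    run_experiment()
```

Running the experiment repeatedly, and widening the blast radius only once the hypothesis holds, mirrors the iterative approach described above.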


Shifting left with telemetry pipelines: The future of data tiering at petabyte scale

In the context of observability and security, shifting left means performing the analysis, transformation, and routing of logs, metrics, traces, and events far upstream, early in their usage lifecycle — a very different approach from the traditional “centralize then analyze” method. By integrating these processes earlier, teams can not only drastically reduce costs for otherwise prohibitive data volumes, but can even detect anomalies, performance issues, and potential security threats much more quickly, before they become major problems in production. The rise of microservices and Kubernetes architectures has specifically accelerated this need, as the complexity and distributed nature of cloud-native applications demand more granular and real-time insights, and each localized data set is distributed, in contrast to the monoliths of the past. ... As telemetry data continues to grow at an exponential rate, enterprises face the challenge of managing costs without compromising on the insights they need in real time, or the requirement of data retention for audit, compliance, or forensic security investigations. This is where data tiering comes in. Data tiering is a strategy that segments data into different levels based on its value and use case, enabling organizations to optimize both cost and performance.
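As a rough illustration of doing this work upstream, the sketch below routes telemetry events into hot, warm, and cold tiers inside the pipeline itself; the tier rules, field names, and destinations are assumptions for illustration, not a specific vendor's configuration.

```python
# A minimal sketch of upstream data tiering in a telemetry pipeline.
# Tier rules, fields, and destinations are illustrative assumptions.
HOT, WARM, COLD = "hot", "warm", "cold"


def choose_tier(event: dict) -> str:
    """Route each event as early as possible, before central storage."""
    if event.get("severity") in {"error", "critical"} or event.get("security"):
        return HOT    # real-time alerting and dashboards
    if event.get("severity") == "warn":
        return WARM   # cheaper, still-queryable storage for recent troubleshooting
    return COLD       # compressed object storage for audit/compliance retention


def ship(event: dict, tier: str) -> None:
    # Placeholder for writing to the tier's backend (index, warehouse, object store).
    print(f"-> {tier}: {event['msg']}")


events = [
    {"severity": "info", "msg": "health check ok"},
    {"severity": "error", "msg": "payment service timeout"},
    {"severity": "warn", "msg": "retrying connection"},
]

for e in events:
    ship(e, choose_tier(e))
```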


A Transformative Journey: Powering the Future with Data, AI, and Collaboration

The advancements in industrial data platforms and contextualization have been nothing short of remarkable. By making sense of data from different systems—whether through 3D models, images, or engineering diagrams—Cognite is enabling companies to build a powerful industrial knowledge graph, which can be used by AI to solve complex problems faster and more effectively than ever before. This new era of human-centric AI is not about replacing humans but enhancing their capabilities, giving them the tools to make better decisions, faster. Without buy-in from the people who will be affected by any new innovation or technology, success is unlikely. Engaging these individuals early in the process to solve the issues they find challenging, mundane, or highly repetitive is critical to driving adoption and creating internal champions who further catalyze it. In a fascinating case study shared by one of Cognite’s partners, we learned about the transformative potential of data and AI in the chemical manufacturing sector. A plant operator described how the implementation of mobile devices powered by Cognite’s platform has drastically improved operational efficiency.


Four Steps to Balance Agility and Security in DevSecOps

Tools like OWASP ZAP and Burp Suite can be integrated into continuous integration/continuous delivery (CI/CD) pipelines to automate security testing. For example, LinkedIn uses Ansible to automate its infrastructure provisioning, which reduces deployment times by 75%. By automating security checks, LinkedIn ensures that its rapid delivery processes remain secure. Automating security not only enhances speed but also improves the overall quality of software by catching issues before they reach production. Automated tools can perform static code analysis, vulnerability scanning and penetration testing without disrupting the development cycle, helping teams deploy secure software faster. ... As organizations look to the future, artificial intelligence (AI) and machine learning (ML) will play a crucial role in enhancing both security and agility. AI-driven security tools can predict potential vulnerabilities, automate incident response and even self-heal systems without human intervention. This not only improves security but also reduces the time spent on manual security reviews. AI-powered tools can analyze massive amounts of data, identifying patterns and potential threats that human teams may overlook. This can reduce downtime and the risk of cyberattacks, ultimately allowing organizations to deploy faster and more securely.
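As one hedged example of what an automated security gate might look like, the sketch below shells out to the OWASP ZAP baseline scan from a CI step and fails the build on findings; the Docker image tag, flags, and target URL are assumptions to adapt to your own pipeline.

```python
# A minimal sketch of wiring an automated security scan into a CI step.
# Image tag, flags, and the target URL are assumptions, not a prescribed setup.
import subprocess
import sys

TARGET = "https://staging.example.com"  # placeholder target environment

cmd = [
    "docker", "run", "--rm", "-t",
    "ghcr.io/zaproxy/zaproxy:stable",   # assumed current ZAP image name
    "zap-baseline.py", "-t", TARGET,    # passive baseline scan of the target
]

result = subprocess.run(cmd)
if result.returncode != 0:
    # A non-zero exit means ZAP reported warnings or failures; block the deploy.
    sys.exit("Security baseline scan failed: blocking the deployment")
print("Baseline scan passed; continuing the pipeline")
```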



Quote for the day:

"If you are truly a leader, you will help others to not just see themselves as they are, but also what they can become." -- David P. Schloss

Daily Tech Digest - November 05, 2024

GenAI in healthcare: The state of affairs in India

Currently, the All-India Institute of Medical Sciences (AIIMS) Delhi is the only public healthcare institution exploring AI-driven solutions. AIIMS, in collaboration with the Ministry of Electronics & Information Technology and the Centre for Development of Advanced Computing (C-DAC) Pune, launched the iOncology.ai platform to support oncologists in making informed cancer treatment decisions. The platform uses deep learning models to detect early-stage ovarian cancer, and available data shows this has already improved patient outcomes while reducing healthcare costs. This is one of the few key AI-driven initiatives in India. Although AI adoption in the healthcare provider segment is relatively high at 68%, a large portion of deployments are still in the PoC phase. What could transform India’s healthcare with Generative AI? What could help bring care to those who need it most? ... India has tremendous potential in machine intelligence, especially as we develop our own Gen AI capabilities. In healthcare, however, the pace of progress is hindered by financial constraints and a shortage of specialists in the field. Concerns over data breaches and cybersecurity incidents also contribute to this aversion. 


OWASP Beefs Up GenAI Security Guidance Amid Growing Deepfakes

To help organizations develop stronger defenses against AI-based attacks, the Top 10 for LLM Applications & Generative AI group within the Open Worldwide Application Security Project (OWASP) released a trio of guidance documents for security organizations on Oct. 31. To its previously released AI cybersecurity and governance checklist, the group added a guide for preparing for deepfake events, a framework to create AI security centers of excellence, and a curated database on AI security solutions. ... The trajectory of deepfakes is quite easy to predict — even if they are not good enough to fool most people today, they will be in the future, says Eyal Benishti, founder and CEO of Ironscales. That means that human training will likely only go so far. AI videos are getting eerily realistic, and a fully digital twin of another person controlled in real time by an attacker — a true "sock puppet" — is likely not far behind. "Companies want to try and figure out how they get ready for deepfakes," he says. "They are realizing that this type of communication cannot be fully trusted moving forward, which ... will take people some time to realize and adjust to." In the future, since the telltale artifacts will be gone, better defenses are necessary, Exabeam's Kirkwood says.


Open-source software: A first attempt at organization after CRA

The Cyber Resilience Act was a shock that awakened many people from their comfort zone: How dare the “technical” representatives of the European Union question the security of open-source software? The answer is very simple: because we never told them, and they assumed it was because no one was concerned about security. ... The CRA requires software with automatic updates to roll out security updates automatically by default, while allowing users to opt out. Companies must conduct a cyber risk assessment before a product is released and throughout the following 10 years or the product’s expected lifecycle, and must notify the EU cybersecurity agency ENISA of any incidents within 24 hours of becoming aware of them, as well as take measures to resolve them. In addition to that, software products must carry the CE marking to show that they meet a minimum level of cybersecurity checks. Open-source stewards will have to care about the security of their products but will not be asked to follow these rules. In exchange, they will have to improve the communication and sharing of best security practices, which are already in place, although they have not always been shared. So, the first action was to create a project to standardize them, for the entire open-source software industry.


10 ways hackers will use machine learning to launch attacks

Attackers aren’t just using machine-learning security tools to test if their messages can get past spam filters. They’re also using machine learning to create those emails in the first place, says Adam Malone, a former EY partner. “They’re advertising the sale of these services on criminal forums. They’re using them to generate better phishing emails. To generate fake personas to drive fraud campaigns.” These services are specifically being advertised as using machine learning, and it’s probably not just marketing. “The proof is in the pudding,” Malone says. “They’re definitely better.” ... Criminals are also using machine learning to get better at guessing passwords. “We’ve seen evidence of that based on the frequency and success rates of password guessing engines,” Malone says. Criminals are building better dictionaries to hack stolen hashes. They’re also using machine learning to identify security controls, “so they can make fewer attempts and guess better passwords and increase the chances that they’ll successfully gain access to a system.” ... The most frightening use of artificial intelligence is deepfake tools that can generate video or audio that is hard to distinguish from a real human. “Being able to simulate someone’s voice or face is very useful against humans,” says Montenegro.


Breaking Free From the Dead Zone: Automating DevOps Shifts for Scalable Success

If ‘Shift Left’ is all about integrating processes closer to the source code, ‘Shift Right’ offers a complementary approach by tackling challenges that arise after deployment. Some decisions simply can’t be made early in the development process. For example, which cloud instances should you use? How many replicas of a service are necessary? What CPU and memory allocations are appropriate for specific workloads? These are classic ‘Shift Right’ concerns that have traditionally been managed through observability and system-generated recommendations. Consider this common scenario: when deploying a workload to Kubernetes, DevOps engineers often guess the memory and CPU requests, specifying these in YAML configuration files before anything is deployed. But without extensive testing, how can an engineer know the optimal settings? Most teams don’t have the resources to thoroughly test every workload, so they make educated guesses. Later, once the workload has been running in production and actual usage data is available, engineers revisit the configurations. They adjust settings to eliminate waste or boost performance, depending on what’s needed. It’s exhausting work and, let’s be honest, not much fun.
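The sketch below illustrates that guess-then-revisit cycle: an initial resource request written into a Deployment spec, later tightened using observed usage. The workload name, image, numbers, and headroom factor are all illustrative assumptions.

```python
# A minimal sketch of the "guess, then revisit" cycle described above:
# an initial resource guess in a Deployment spec, later adjusted from
# observed production usage. All names and numbers are illustrative.
import json


def deployment(cpu_request, mem_request, cpu_limit, mem_limit):
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": "orders-api"},          # placeholder workload
        "spec": {"template": {"spec": {"containers": [{
            "name": "orders-api",
            "image": "example.com/orders-api:1.0",   # placeholder image
            "resources": {
                "requests": {"cpu": cpu_request, "memory": mem_request},
                "limits": {"cpu": cpu_limit, "memory": mem_limit},
            },
        }]}}},
    }


# Day 1: the educated guess written into the manifest before deployment.
initial = deployment("500m", "512Mi", "1", "1Gi")

# Later: observed p95 usage (e.g., from metrics) suggests the guess was high.
observed_p95 = {"cpu_millicores": 180, "memory_mib": 240}
headroom = 1.3  # assumed safety margin on top of observed usage
tuned = deployment(
    f"{int(observed_p95['cpu_millicores'] * headroom)}m",
    f"{int(observed_p95['memory_mib'] * headroom)}Mi",
    "500m", "512Mi",
)

print(json.dumps(
    tuned["spec"]["template"]["spec"]["containers"][0]["resources"], indent=2))
```

Automating this feedback loop, rather than having engineers revisit YAML by hand, is exactly the kind of 'Shift Right' toil the article argues should be removed.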


5 cloud market trends and how they will impact IT

“Capacity growth will be driven increasingly by the even larger scale of those newly opened data centers, with generative AI technology being a prime reason for that increased scale,” Synergy Research writes. Not surprisingly, the companies with the broadest data center footprint are Amazon, Microsoft, and Google, which account for 60% of all hyperscale data center capacity. And the announcements from the Big 3 are coming fast and furious. ... “In effect, industry cloud platforms turn a cloud platform into a business platform, enabling an existing technology innovation tool to also serve as a business innovation tool,” says Gartner analyst Gregor Petri. “They do so not as predefined, one-off, vertical SaaS solutions, but rather as modular, composable platforms supported by a catalog of industry-specific packaged business capabilities.” ... There are many reasons for cloud bills increasing, beyond simple price hikes. Linthicum says organizations that simply “lifted and shifted” legacy applications to the public cloud, rather than refactoring or rewriting them for cloud optimization, ended up with higher costs. Many organizations overprovisioned and neglected to track cloud resource utilization. On top of that, organizations are constantly expanding their cloud footprint.


The Modern Era of Data Orchestration: From Data Fragmentation to Collaboration

Data systems have always needed to make assumptions about file, memory, and table formats, but in most cases, they've been hidden deep within their implementations. A narrow API for interacting with a data warehouse or data service vendor makes for clean product design, but it does not maximize the choices available to end users. ... In a closed system, the data warehouse maintains its own table structure and query engine internally. This is a one-size-fits-all approach that makes it easy to get started but can be difficult to scale to new business requirements. Lock-in can be hard to avoid, especially when it comes to capabilities like governance and other services that access the data. Cloud providers offer seamless and efficient integrations within their ecosystems because their internal data format is consistent, but this may close the door on adopting better offerings outside that environment. Exporting to an external provider instead requires maintaining connectors purpose-built for the warehouse's proprietary APIs, and it can lead to data sprawl across systems. ... An open, deconstructed system standardizes its lowest-level details. This allows businesses to pick and choose the best vendor for a service while having the seamless experience that was previously only possible in a closed ecosystem.


New OAIC AI Guidance Sharpens Privacy Act Rules, Applies to All Organizations

The new AI guidance outlines five key takeaways that require attention, and though the term “guidance” is used, some of these constitute expansions of the application of existing rules. The first of these is that Privacy Act requirements for personal information apply to AI systems, both in terms of user input and what the system outputs. ... The second AI guidance takeaway stipulates that privacy policies must be updated to have “clear and transparent” information about public-facing AI use. The third takeaway notes that the generation of images of real people, whether it be due to a hallucination or intentional creation of something like a deepfake, is also covered by personal information privacy rules. The fourth AI guidance takeaway states that any personal information input into AI systems can only be used for the primary purpose for which it was collected, unless consent is collected for other uses or those secondary uses can be reasonably expected to be necessary. The fifth and final takeaway is perhaps a case of burying the lede; the OAIC simply suggests that organizations not collect personal information through AI systems at all due to the “significant and complex privacy risks involved.”


DevOps Moves Beyond Automation to Tackle New Challenges

“The future of DevOps is DevSecOps,” Jonathan Singer, senior product marketing manager at Checkmarx, told The New Stack. “Developers need to consider high-performing code as secure code. Everything is code now, and if it’s not secure, it can’t be high-performing,” he added. Checkmarx is an application security vendor that allows enterprises to secure their applications from the first line of code to deployment in the cloud, Singer said. The DevOps perspective has to be the same as the application security perspective, he noted. Some people think of it as seeing the environment around the app, but Checkmarx thinks of it as seeing the code in the application and making sure it’s safe and secure when it’s deployed, he added. “It might look like the security teams are giving more responsibility to the dev teams, and therefore you need security people in the dev team,” Singer said. Checkmarx is automating the heavy mental lifting by prioritizing and triaging scan results. With the amount of code, especially for large organizations, finding ten thousand vulnerabilities is fairly common, but they will have different levels of severity. If a vulnerability is not exploitable, you can knock it out of the results list. “Now we’re in the noise reduction game,” he said.


How Quantum Machine Learning Works

While quantum computing is not the most imminent trend data scientists need to worry about today, its effect on machine learning is likely to be transformative. “The really obvious advantage of quantum computing is the ability to deal with really enormous amounts of data that we can't really deal with any other way,” says Fitzsimons. “We've seen the power of conventional computers has doubled effectively every 18 months with Moore's Law. With quantum computing, the number of qubits is doubling about every eight to nine months. Every time you add a single qubit to a system, you double its computational capacity for machine learning problems and things like this, so the computational capacity of these systems is growing double exponentially.” ... Quantum-inspired software techniques can also be used to improve classical ML, such as tensor networks that can describe machine learning structures and improve computational bottlenecks to increase the efficiency of LLMs like ChatGPT. “It’s a different paradigm, entirely based on the rules of quantum mechanics. It’s a new way of processing information, and new operations are allowed that contradict common intuition from traditional data science,” says Orús.
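To unpack the "double exponential" claim, a quick back-of-the-envelope calculation helps: if capacity scales as 2^n and the qubit count n itself doubles roughly every nine months, capacity grows as 2 raised to an exponentially growing exponent. The starting qubit count and timeline below are arbitrary assumptions for illustration.

```python
# A small back-of-the-envelope illustration of the "double exponential" claim:
# state-space dimension scales as 2**n, and n is assumed to double every 9 months.
n0 = 50            # assumed starting qubit count (illustrative only)
for months in (0, 9, 18, 27):
    n = n0 * 2 ** (months / 9)          # qubit count after `months`
    print(f"t={months:2d} mo: n={int(n):4d} qubits, "
          f"capacity ~ 2^{int(n)} (about 10^{int(n * 0.30103)})")
```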



Quote for the day:

"I find that the harder I work, the more luck I seem to have." -- Thomas Jefferson

Daily Tech Digest - November 04, 2024

How AI Is Driving Data Center Transformation - Part 3

According to AFCOM's 2024 State of Data Center Report, AI is already having a major influence on data center design and infrastructure. Global hyperscalers and data center service providers are increasing their capacity to support AI workloads. This has a direct impact on power and cooling requirements. In terms of power, the average rack density is expected to rise from 8.5 kW per rack in 2023 to 12 kW per rack by the end of 2024, with 55% of respondents expecting higher rack density in the next 12 to 36 months. As GPUs are fitted into these racks, servers will generate more heat, increasing both power and cooling requirements. The optimal temperature for operating a data center hall is between 21 and 24°C (69.8 - 75.2°F), which means that any increase in rack density must be accompanied by improvements in cooling capabilities. ... The efficiency of a data center is measured by a metric called power usage effectiveness (PUE), which is the ratio of the total amount of power used by a data center to the power used by its computing equipment. To be more efficient, data center providers aim to reduce their PUE rating and bring it closer to 1. A way to achieve that is to reduce the power consumed by the cooling units through advanced cooling technologies.
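A quick worked example of that ratio, with illustrative figures rather than numbers from the AFCOM report:

```python
# A quick worked example of the PUE ratio defined above; the megawatt
# figures are illustrative only.
total_facility_power_mw = 3.0   # IT load plus cooling, power distribution, etc.
it_equipment_power_mw = 2.4

pue = total_facility_power_mw / it_equipment_power_mw
print(f"PUE = {total_facility_power_mw} / {it_equipment_power_mw} = {pue:.2f}")
# A lower PUE (closer to 1.0) means less overhead power per unit of compute,
# which is why better cooling directly improves the ratio.
```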


The Intellectual Property Risks of GenAI

Boards and C-suites that have not yet had discussions about the potential risks of GenAI need to start now. “Employees can use and abuse generative AI even when it is not available to them as an official company tool. It can be really tempting for a junior employee to rely on ChatGPT to help them draft formal-sounding emails, generate creative art for a PowerPoint presentation and the like. Similarly, some employees might find it too tempting to use their phone to query a chatbot regarding questions that would otherwise require intense research,” says Banner Witcoff’s Sigmon. “Since such uses don’t necessarily make themselves obvious, you can’t really figure out if, for example, an employee used generative AI to write an email, much less if they provided confidential information when doing so. This means that companies can be exposed to AI-related risk even when, on an official level, they may not have adopted any AI.” ... “As is the case with the use of technology within any large organization, successful implementation involves a careful and specific evaluation of the tech, the context of use, and its wider implications including intellectual property frameworks, regulatory frameworks, trust, ethics and compliance,” says Raeburn in an email interview. 


The 10x Developer vs. AI: Will Tech’s Elite Coder Be Replaced?

We’re seeing AI tools that can smash out, in minutes, complex coding tasks that would take even your best senior devs hours. At Cosine, we’ve seen this firsthand with our AI, Genie. Many of the tasks we tested were in the four to six-hour range, and Genie could complete them in four to six minutes. It’s a genuine superhuman thing to be able to solve problems that quickly. But here’s where it gets interesting. This isn’t just about raw output. The real mind-bender is that AI is starting to think like an engineer. It’s not just spitting out code — it’s solving problems. ... Looking slightly more pragmatically at what AI could signal for career progression, there is a counterargument that junior developers won’t be exposed to the same level of problem-solving or acquire the same skill sets, given the availability of AI. This creates a complete headache for HR. How do you structure career progression when the traditional markers of seniority — years of experience, deep technical knowledge — might not mean as much? I think we’ll see a shift in focus. Companies will probably lean more on whether you fulfilled your sprint objectives and shipped what you wanted on time instead of going deeper. As for the companies themselves? Those who don’t get on board with AI coding tools will get left in the dust.


The 5 gears of employee well-being

Ritika is of the view that managing employees’ and organisational expectations requires clear communication from the leadership. “It offers employees a transparent view of the organisation's direction and highlights how their contributions drive Amway's success and growth. Our leadership prioritises transparency, ensuring that employees have a clear understanding of the organisation’s direction and how their individual and collaborative efforts contribute to collective goals. This approach fosters a strong sense of purpose and engagement while aligning with the vision and desired culture of the company.” She further calls for having a robust feedback mechanism that allows employees an opportunity to share their honest feedback on areas that matter the most and the ones that impact them. “We believe in the feedback flywheel; our bi-annual culture and employee engagement survey allows employees an opportunity to share feedback. Each piece of feedback is followed by a cycle of sharing results and action planning.” She further adds that frequent check-in conversations between the upline and team members ensure there is clarity of expectations; our performance management system ensures there are 3 formal check-in conversations that are focused on coaching and development and not ‘judgement’.


Agentic AI swarms are headed your way

OpenAI launched an experimental framework last month called Swarm. It’s a “lightweight” system for the development of agentic AI swarms, which are networks of autonomous AI agents able to work together to handle complex tasks without human intervention, according to OpenAI. Swarm is not a product. It’s an experimental tool for coordinating or orchestrating networks of AI agents. The framework is open-source under the MIT license, and available on GitHub. ... One way to look at agentic AI swarming technology is that it’s the next powerful phase in the evolution of generative AI (genAI). In fact, Swarm is built on OpenAI’s Chat Completions API, which uses LLMs like GPT-4. The API is designed to facilitate interactive “conversations” with AI models. It allows developers to create chatbots, interactive agents, and other applications that can engage in natural language conversations. Today, developers are creating what you might call one-off AI tools that do one specific task. Agentic AI would enable developers to create a large number of such tools that specialize in different specific tasks, and then enable each tool to dragoon any others into service if the agent decides the task would be better handled by the other kind of tool.
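For a flavor of how such coordination looks in practice, here is a minimal sketch based on the agent-handoff pattern in Swarm's published examples; since the framework is experimental, the exact API may change, and the agent names, instructions, and task are placeholders (an OpenAI API key is required to run it).

```python
# A minimal sketch of Swarm's agent-handoff pattern (experimental framework;
# the API may change). Agent names, instructions, and the task are placeholders.
from swarm import Swarm, Agent

client = Swarm()


def transfer_to_scheduler():
    """Hand the conversation off to the scheduling specialist agent."""
    return scheduler_agent


triage_agent = Agent(
    name="Triage Agent",
    instructions="Decide which specialist should handle the user's request.",
    functions=[transfer_to_scheduler],
)

scheduler_agent = Agent(
    name="Scheduler Agent",
    instructions="Help the user book and reschedule meetings.",
)

response = client.run(
    agent=triage_agent,
    messages=[{"role": "user", "content": "Move my 3pm meeting to Friday."}],
)
print(response.messages[-1]["content"])
```

The handoff function returning another agent is the mechanism that lets one specialized tool pull another into service, as described above.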


How To Develop Emerging Leaders In Your Organization

Mentorship and coaching are critical for unlocking the leadership potential of emerging talent. By pairing less experienced employees with seasoned leaders, companies provide invaluable hands-on learning experiences beyond formal training programs. These relationships allow future leaders to observe high-level decision-making in action, receive personalized feedback, and cultivate their leadership instincts in real-world scenarios. ... While technical skills are essential, leadership success depends heavily on soft skills like emotional intelligence, communication, and adaptability. These skills help leaders navigate team dynamics, inspire trust, and handle organizational challenges with confidence. Workshops, problem-solving exercises, and leadership programs are effective for developing these abilities. ... Leadership development can’t happen in a vacuum. One of the most effective ways to accelerate growth is through “stretch assignments,” opportunities that push employees beyond their comfort zones by challenging them with responsibilities that test their leadership abilities. These assignments expose future leaders to high-stakes decision-making, cross-functional collaboration, and strategic thinking, all of which prepare them for the demands of more senior roles.


CIOs look to sharpen AI governance despite uncertainties

There is no dearth of AI governance frameworks available from the US government and European Union, as well as top market researchers, but no doubt, as gen AI innovation outpaces formal standards, CIOs will need to enact and hone internal AI governance policies in 2025 — and enlist the entire C-suite in the process to ensure they are not on the hook alone, observers say. ... “Governance is really about listening and learning from each other as we all care about the outcome, but equally as important, how we get to the outcome itself,” Williams says. “Once you cross that bridge, you can quickly pivot into AI tools and the actual projects themselves, which is much easier to maneuver.” TruStone Financial Credit Union is also grappling with establishing a comprehensive AI governance program as AI innovation booms. “New generative AI platforms and capabilities are emerging every week. When we discover them, we block access until we can thoroughly evaluate the effectiveness of our controls,” says Gary Jeter, EVP and CTO at TruStone, noting, as an example, that he decided to block access to Google’s NotebookLM initially to assess its safety. Like many enterprises, TruStone has deployed a companywide generative AI platform for policies and procedures branded as TruAssist.


Design strategies in the white space ecosystem

AI compute cabinets can weigh up to 4,800 pounds, raising concerns about floor load capacity. Raised floors offer flexibility for cabling, cooling, and power management but may struggle with the weight demands of high-density setups. Slab floors are sturdier but come with their own design and cost challenges, particularly for liquid cooling, which can pose risks if leaks occur. This isn’t just a financial concern – it’s also about safety. “As we integrate various trades and systems into the same space with multiple teams working alongside each other, safety becomes paramount. Proper structural load assessments and seismic bracing, especially in earthquake-prone areas, are essential to ensure the raised floor can handle the weight,” Willis emphasizes. ... As the landscape of high-performance computing continues to grow and evolve, so too do the designs of data center cabinets. These changes are driven by the need for deeper and wider cabinets that can support a greater number of power distribution units (PDUs) and cabling. The emphasis is not just on accommodating equipment, but also on optimizing space and power capacity to avoid the network distance limitations that can arise when cabinets become too wide.


Costly and struggling: the challenges of legacy SIEM solutions

The main problem organizations face with legacy SIEM systems is the massive amount of unstructured data they produce, making it hard to spot signs of advanced threats such as ransomware and advanced persistent threat groups. “These systems were built primarily to detect known threats using signature-based approaches, which are insufficient against today’s sophisticated, constantly evolving attack techniques,” Young says. “Modern threats often employ subtle tactics that require advanced analytics, behavior-based detection, and proactive correlation across multiple data sources — capabilities that many legacy SIEMs lack.” In addition, legacy SIEM systems typically don’t support automated threat intelligence feeds, which are crucial for staying ahead of emerging threats, according to Young. “They also lack the ability to integrate with security orchestration, automation, and response tools, which help automate responses and streamline incident management.” Without these modern features, legacy SIEMs often miss important warning signs of attacks and have trouble connecting different threat signals, making organizations more exposed to complex, multi-stage attacks. Mellen says SIEMs are only as good as the work that companies put into them, which is the predominant feedback she’s received over the years from many practitioners.


Why Effective Fraud Prevention Requires Contact Data Quality Technology

From our experience, the quality of contact data is essential to the effectiveness of ID processes, influencing everything from end-to-end fraud prevention to delivering simple ID checks, meaning more advanced and costly techniques, like biometrics and liveness authentication, may not be necessary. The verification process becomes more reliable when a customer’s contact information, such as name, address, email and phone number, is accurate. With this data, ID verification technology can then confidently cross-reference the provided information against official databases or other authoritative sources, without discrepancies that could lead to false positives or negatives. A growing issue is fraudsters exploiting inaccuracies in contact data to create false identities and manipulate existing ones. By maintaining clean and accurate contact data, ID verification systems can more effectively detect suspicious activity and prevent fraud. For example, inconsistencies in a user’s phone or email, or an address linked to multiple identities, could serve as a red flag for additional scrutiny.



Quote for the day:

“Disagree and commit is a really important principle that saves a lot of arguing.” -- Jeff Bezos

Daily Tech Digest - November 03, 2024

How AI-Powered Vertical SaaS Is Taking Over Traditional Enterprise SaaS

Enterprise decision-makers no longer care about the underlying technology itself—they care about what it delivers. They care about tangible outcomes like cost savings, operational efficiencies, and improved customer experiences. This shift in focus is causing companies to rethink their approach to enterprise software. ... Unlike traditional SaaS, which is built for broad use cases, vertical SaaS is deeply tailored to specific industries. By using AI, it can offer real-time insights, automation, and optimisations that solve problems unique to each sector. ... This hyper-targeted approach allows vertical SaaS to deliver tangible business outcomes rather than generic efficiencies. AI powers this shift by enabling platforms to adapt to industry-specific challenges, automate routine tasks, and provide insights at a scale and speed that was previously unattainable. Think of traditional SaaS like a Swiss Army knife — versatile, but not always the best tool for a specific task. Vertical SaaS, however, is like a surgeon’s scalpel or a craftsman’s chisel — precisely designed for a specific job, delivering results with pinpoint accuracy and efficiency. What would you rather use for mission-critical work: a multi-tool that does everything adequately or an instrument built to perform one task perfectly?


Ending Microservices Chaos: How Architecture Governance Keeps Your Microservices on Track

With proper software architecture governance, you can reduce microservices complexity, ramp up developers faster, reduce MTTR, and improve the resiliency of your system, all while building a culture of intentionality. ... In addition to controlling the chaos of microservices with governance and observability, maintaining a high standard of security and code quality is essential. When working with distributed systems, the complexity of microservices — if left unchecked — can lead to vulnerabilities and technical debt. ... Tools from SonarSource — such as SonarLint or SonarQube — focus on continuous code quality and security. They help developers identify potential issues such as code smells, duplication, or even security risks like SQL injection. By integrating seamlessly with CI/CD pipelines, they ensure that every deployment follows strict security and code quality standards. The connection between code quality, application security, and architectural observability is clear. Poor code quality and unresolved vulnerabilities can lead to a fragile architecture that is prone to outages and security incidents. By proactively managing your code quality and security using these tools, you reduce the risk of microservices complexity spiraling out of control.


What is quiet leadership? Examples, traits & benefits

Quiet leadership is a leadership style defined by empathy, creativity, active listening, and attention to detail. It focuses on collaboration and communication instead of control. At its core is quiet confidence, not arrogance. Quiet leaders prefer to solve problems through teamwork and encouragement, not aggression. They are compassionate, understanding, open, and approachable. Most importantly, they earn their team’s respect instead of demanding it. ... Instead of criticizing yourself for not being an extroverted leader, embrace who you are. Don’t try to be someone you’re not. You might wonder if a quiet style can work because of leadership stereotypes. But in reality, it can be comforting to others. Build self-awareness and notice how you positively impact people. By accepting your unique leadership style, you’ll find what works best for you and your team. If you use your strengths, being a quiet leader can be a superpower. For example, quiet leaders are great listeners. Active listening is rare, so be proud if you have that skill. ... As a quiet leader, you’ll need to step outside your comfort zone at times. This can be exhausting, so make time to recharge and regain energy. 


From Code To Conscience: Humanities’ Role In Fintech’s Evolution

Reflecting on the day, it became clear that studying for a career in fintech—or any technology field—is not just about understanding mechanics; it’s about grasping the bigger picture and realizing the power of technology to serve people, not just profit. In a sector as influential as fintech, this balanced approach is crucial. A humanities background fosters exactly the kind of critical, thoughtful perspective that today’s technology fields demand. Combining technical knowledge with grounding in ethics, history, and critical problem-solving will be essential for tomorrow’s leaders, especially as fintech continues to shape societal norms and economic structures. The Pace of Fintech conference underscored how the intersection of AI, fintech, and the humanities is shaping a more thoughtful future for technology. Artificial intelligence, while transformative, requires a balance between innovation and ethics—an understanding of both its potential and its responsibilities. Humanities-trained thinkers bring crucial perspectives to this field, prompting questions about fairness, transparency, and societal impact that purely technical approaches may overlook.


Overcoming data inconsistency with a universal semantic layer

As if the data landscape weren’t complex enough, data architects began implementing semantic layers within data warehouses. Architects might think of the data assets they manage as the single source of truth for all use cases. However, that is not typically the case because millions of denormalized table structures are typically not “business-ready.” When semantic layers are embedded within various warehouses, data engineers must connect analytics use cases to data by designing and maintaining data pipelines with transforms that create “analytics-ready” data. ... What is needed is a universal semantic layer that defines all the metrics and metadata for all possible data experiences: visualization tools, customer-facing analytics, embedded analytics, and AI agents. With a universal semantic layer, everyone across the business agrees on a standard set of definitions for terms like “customer” and “lead,” as well as standard relationships among the data (standard business logic and definitions), so data teams can build one consistent semantic data model. A universal semantic layer sits on top of data warehouses, providing data semantics (context) to various data applications. It works seamlessly with transformation tools, allowing businesses to define metrics, prepare data models, and expose them to different BI and analytics tools.
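A minimal sketch of the idea follows, assuming an illustrative metric schema rather than any particular product's syntax: the metric is defined once, and every consumer compiles its queries from the same definition.

```python
# A minimal sketch of a universal semantic layer: metrics are defined once,
# and every consumer (BI tool, embedded analytics, an AI agent) resolves the
# same definition instead of re-implementing its own SQL. The schema and SQL
# here are illustrative assumptions, not a specific product's syntax.
SEMANTIC_LAYER = {
    "metrics": {
        "active_customers": {
            "description": "Distinct customers with a completed order",
            "sql": "COUNT(DISTINCT customer_id)",
            "table": "orders",
            "filters": ["status = 'completed'"],
        },
    },
    "dimensions": {"order_month": "DATE_TRUNC('month', ordered_at)"},
}


def compile_query(metric: str, dimension: str) -> str:
    m = SEMANTIC_LAYER["metrics"][metric]
    dim = SEMANTIC_LAYER["dimensions"][dimension]
    where = " AND ".join(m["filters"]) or "TRUE"
    return (f"SELECT {dim} AS {dimension}, {m['sql']} AS {metric} "
            f"FROM {m['table']} WHERE {where} GROUP BY 1")


# Any consumer, whether a dashboard, a customer-facing app, or an AI agent,
# gets the same SQL, so "active_customers" means one thing everywhere.
print(compile_query("active_customers", "order_month"))
```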


Server accelerator architectures for a wide range of applications

The highest-performing architecture for AI is a system that allows the accelerators to communicate with each other without having to communicate back to the CPU. This type of system requires that the accelerators be mounted on their own baseboard with a high-speed switch on the baseboard itself. The initial communication that initializes the application that runs on the accelerators is over a PCIe path. When completed, the results are then also sent back to the CPU over PCIe. The CPU-to-accelerator communication should be limited, allowing the accelerators to communicate with each other over high-speed paths. A request from one accelerator is sent to the appropriate GPU either directly or through one of the four non-blocking switches. GPU-to-GPU performance is significantly higher than the PCIe path, which allows applications to use more than one GPU without needing to interact with the CPU over the relatively slow PCIe lanes. ... A common and well-defined interface between CPUs and accelerators is to communicate over PCIe lanes. This architecture allows for various configurations in the server and the number of accelerators.


AI Testing: More Coverage, Fewer Bugs, New Risks

The productivity gains from AI in testing are substantial. We have helped one vast international bank leverage our solution to such an extent that it increased test automation coverage across two of its websites (supporting around ten different languages) from a mere forty percent to almost ninety percent in a matter of weeks. I believe this is an amazing achievement, not only because of the end results but also because working in an enterprise environment with its security and integrations can typically take forever. While traditional test automation might be limited to a single platform or language and the capacity of one person, AI-enhanced testing breaks these limitations. Testers can now create and execute tests on any platform (web, mobile, desktop), in multiple languages, and with the capacity of numerous testers. This amplifies testing capabilities and introduces a new level of flexibility and efficiency. ... Upskilling QA teams with AI brings the significant advantage of multilingual testing and 24/7 operation. In today’s global market, software products must often cater to diverse users, requiring testing in multiple languages. AI makes this possible without requiring testers to know each language, expanding the reach and usability of software products.


Why Great Leaders Embrace Broad Thinking — and How It Transforms Organizations

Broad thinking starts with employing three behaviors. First, spend time following your thoughts in an exploratory way rather than simply trying to find an answer or idea and moving on. Second, look at things from different angles and consider a wide range of options carefully before acting. Third, consistently consider the bigger picture and resist getting caught up in the smaller details. ... Companies want action. They don't want employees sitting around wringing their hands, frozen with indecision. They also don't want employees overanalyzing decisions to the point of inertia. Therefore, they often train employees to make decisions faster and more efficiently. However, decisions made for speed don't always make for great decisions. Especially seemingly simple ones that have larger downstream ramifications. ... Broad thinking considers the parts as being inseparable from the whole. The elephant parts are inseparable from the entire animal, just like the promotional campaign was inseparable from the other aspects of the organization it impacted. When you broaden your perspective, you also become more sensitive to subtleties of differentiation: how elements that are seemingly irrelevant, extraneous, or opposites can interconnect.


How Edge Computing Is Enhancing AI Solutions

Edge computing enhances the privacy and security of AI solutions by keeping sensitive data local rather than transmitting it to centralized cloud servers. Such an approach is most advantageous in industries such as healthcare, where privacy is of high value, especially with regard to patient information. By processing medical images or patient records at the edge, healthcare providers can ensure compliance with data protection regulations while still leveraging AI for improved diagnostics and treatment planning. Furthermore, edge AI minimizes the number of data points exposed to network-based attacks by splitting data tasks into localized subsets. ... As the volume of data generated by IoT devices continues to grow exponentially, transmitting all this information to the cloud for processing becomes increasingly impractical and expensive. Edge computing solves this problem by sorting and analyzing data locally. This approach dramatically reduces the required bandwidth and the costs attached to it while also enhancing system performance.


Why being in HR is getting tougher—and how to break through

The HR function lives in the friction between caring for the employee and caring for the organization. HR’s role is to represent the best interests of the organizations we work for and deliver care to employees for their end-to-end life cycle at those organizations. When you live in that friction, at times, you’re underdelivering that care to employees. At this moment—when employees’ needs are at an all-time high and organizations are struggling with costs and resetting around historical growth expectations—that gap is even wider than during less volatile times. There’s also an assumption that the employees’ interests and the company’s interests aren’t aligned—when many times they are. I have several tools to help people when they’re struggling. We can get a little bit caught up in the myths and expectations of people wanting too much, and that’s where the HR professional has to pull back and say, “This is what I can do, and it’s actually quite good.” ... Trust is hard earned but can go away in a second. And it can go away in a second because of HR but also, unfortunately, because of business leaders. 



Quote for the day:

"You can't be a leader if you can't influence others to act." -- Dale E. Zand

Daily Tech Digest - November 02, 2024

Cisco takes aim at developing quantum data center

On top of the quantum network fabric effort, Cisco is developing a software package that determines the best approach to entanglement distribution, protocols, and routing algorithms, which the company is building into a protocol stack and compiler called Quantum Orchestra. “We are developing a network-aware quantum orchestrator, which is this general framework that takes quantum jobs in terms of quantum circuits as an input, as well as the network topology, which also includes how and where the different quantum devices are distributed inside the network,” said Hassan Shapourian, Technical Leader, Cisco Outshift. “The orchestrator will let us modify a circuit for better distributability. Also, we’re going to decide which logical [quantum variational circuit] QVC to assign to which quantum device and how it will communicate with which device inside a rack.” “After that we need to schedule a set of switch configurations to enable end-to-end entanglement generations [to ensure actual connectivity]. And that involves routing as well as resource management, because we’re going to share resources, and eventually the goal is to minimize the execution time or minimize the switching events, and the output would be a set of instructions to the switches,” Shapourian said.


How CIOs Can Fix Data Governance For Generative AI

When you look at it from a consumption standpoint, the enrichment of AI happens as you start increasing the canvas of data it can pick up, because it learns more. That means it needs very clean information. It needs [to be] more accurate, because if you push in something rough, it’s going to be all trash. Traditional AI ensured that we have started cleaning the data, and metadata told us if there is more data available. AI has started pushing people to create more metadata, classification, cleaner data, reduce duplicates, ensure that there is a synergy between the sets of the data, and they’re not redundant. It’s cleaner, it’s more current, it’s real-time. Gen AI has gone a step further. If you want to contextually make it rich and pull more RAG into these kinds of solutions, you need to know exactly where the data sits today. You need to know exactly what is in the data to create a RAG pipeline, which is clean enough for it to generate very accurate answers. Consumption is driving behavior. In multiple ways, it is actually driving organizations to start thinking about categorization, access controls, governance. [An AI platform] also needs to know the history of the data. All these things have started happening now to do this because this is very complex.


Here’s the paper no one read before declaring the demise of modern cryptography

With no original paper to reference, many news outlets searched the Chinese Journal of Computers for similar research and came up with this paper. It wasn’t published in September, as the news article reported, but it was written by the same researchers and referenced the “D-Wave Advantage”—a type of quantum computer sold by Canada-based D-Wave Quantum Systems—in the title. Some of the follow-on articles bought the misinformation hook, line, and sinker, repeating incorrectly that the fall of RSA was upon us. People got that idea because the May paper claimed to have used a D-Wave system to factor a 50-bit RSA integer. Other publications correctly debunked the claims in the South China Morning Post but mistakenly cited the May paper and noted the inconsistencies between what it claimed and what the news outlet reported. ... It reports using a D-Wave-enabled quantum annealer to find “integral distinguishers up to 9-rounds” in the encryption algorithms known as PRESENT, GIFT-64, and RECTANGLE. All three are symmetric encryption algorithms built on an SPN, short for substitution-permutation network structure.


AI Has Created a Paradox in Data Cleansing and Management

When asked about the practices required to maintain a cleansed data set, Perkins-Munn says it is critical to think about enhancing data cleaning and quality management over time. She discusses a few of the many ways to do so, including AI algorithms for automated data profiling and anomaly detection: particularly with unsupervised learning models, AI algorithms automatically profile data sets and detect anomalies or outliers, and continuous data monitoring keeps data clean on an ongoing basis. She also mentions intelligent data matching and deduplication, wherein machine learning algorithms improve the accuracy and efficiency of matching and deduplication processes. Beyond those, fuzzy matching algorithms can identify and merge duplicate records even when they contain minor variations or errors. For effective data management, Perkins-Munn states that organizations must prioritize where to start with data cleansing, and there is no one-method-fits-all approach. She advises focusing on the data that directly impacts the most critical business processes or decisions, ensuring quick, tangible value.
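
Fuzzy matching for deduplication, as mentioned above, can be illustrated in a few lines of Python. The sketch below is a generic example using the standard library’s difflib similarity ratio to flag near-duplicate customer records; the threshold, fields, and records are arbitrary assumptions for illustration, not Perkins-Munn’s method or any specific product.

```python
# Generic fuzzy-deduplication sketch using difflib from the standard library:
# records whose normalized name/email similarity exceeds a threshold are
# flagged as likely duplicates for review and merging.
from difflib import SequenceMatcher

RECORDS = [
    {"id": 1, "name": "Acme Corporation", "email": "billing@acme.com"},
    {"id": 2, "name": "ACME Corp.", "email": "billing@acme.com"},
    {"id": 3, "name": "Globex Inc", "email": "ap@globex.example"},
]

def similarity(a, b):
    """Similarity in [0, 1] between two strings, ignoring case and punctuation."""
    normalize = lambda s: "".join(ch for ch in s.lower() if ch.isalnum())
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def find_duplicates(records, threshold=0.8):
    """Return id pairs whose name or email look like the same entity."""
    pairs = []
    for i, r1 in enumerate(records):
        for r2 in records[i + 1:]:
            score = max(similarity(r1["name"], r2["name"]),
                        similarity(r1["email"], r2["email"]))
            if score >= threshold:
                pairs.append((r1["id"], r2["id"], round(score, 2)))
    return pairs

print(find_duplicates(RECORDS))  # -> [(1, 2, 1.0)] because the emails match exactly
```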


A brief summary of language model finetuning

For language models, a practitioner performing fine-tuning typically has one of two primary goals. Knowledge injection: teach the model to leverage new sources of knowledge (not present during pretraining) when solving problems. Alignment (or style/format specification): modify the way in which the language model surfaces its existing knowledge base; e.g., abide by a certain answer format, use a new style or tone of voice, avoid outputting incorrect information, and more. Given this information, we might wonder: which fine-tuning techniques should we use to accomplish either (or both) of these goals? To answer this question, we need to take a much deeper look at recent research on the topic of fine-tuning. ... We don’t need tons of data to learn the style or format of output; large amounts of data are only needed to learn new knowledge. When performing fine-tuning, it’s very important to know which goal (alignment or knowledge injection) we are aiming for. Then, we should put benchmarks in place that allow us to accurately and comprehensively assess whether that goal was accomplished. Imitation models failed to do this, which led to a bunch of misleading claims and results!
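
As a concrete illustration of the alignment side, the sketch below shows a minimal supervised fine-tuning loop in Python with Hugging Face transformers on a handful of instruction/response pairs. It is a generic sketch, not the article’s recipe: the model name, the tiny inline dataset, and the hyperparameters are placeholder assumptions, and as the excerpt notes, far more data would be needed for knowledge injection than for learning an answer format like this one.

```python
# Minimal supervised fine-tuning sketch for the alignment goal (answer format),
# not knowledge injection. Model name, data, and hyperparameters are placeholders.
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder checkpoint; any causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# A tiny instruction/response set that only demonstrates an answer *format*.
pairs = [
    ("Q: What is the capital of France?", "A: Paris."),
    ("Q: What is 2 + 2?", "A: 4."),
]
texts = [f"{q}\n{a}{tokenizer.eos_token}" for q, a in pairs]
batch = tokenizer(texts, return_tensors="pt", padding=True)

labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # ignore padding when computing loss

optimizer = AdamW(model.parameters(), lr=5e-5)
model.train()
for step in range(3):  # a real run would iterate over far more data
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss = {loss.item():.3f}")
```

Whatever the recipe, the benchmark point stands: evaluate against the goal you chose (format adherence here, factual accuracy for knowledge injection), not a generic leaderboard.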
 

Bridging Tech and Policy: Insights on Privacy and AI from IndiaFOSS 2024

Global communication systems are predominantly managed and governed by major technology corporations, often referred to as Big Tech. These organizations exert significant influence over how information flows across the world, yet they lack a nuanced understanding of the socio-political dynamics in the Global South. Pratik Sinha, co-founder at Alt News, spoke about how this gap in understanding can have severe consequences, particularly when it comes to issues such as misinformation, hate speech, and the spread of harmful content. ... The FOSS community is uniquely positioned to address these challenges by collaboratively developing communication systems tailored to the specific needs of various regions. Pratik suggested that by leveraging open-source principles, the FOSS community can create platforms (such as Mastodon) that empower users, enhance local governance, and foster a culture of shared responsibility in content moderation. In doing so, they can provide viable alternatives to Big Tech, ensuring that communication systems serve the diverse needs of communities rather than being controlled by a handful of corporations with a limited understanding of local complexities.


Revealing causal links in complex systems: New algorithm reveals hidden influences

In their new approach, the engineers took a page from information theory—the science of how messages are communicated through a network, based on a theory formulated by the late MIT professor emeritus Claude Shannon. The team developed an algorithm to evaluate any complex system of variables as a messaging network. "We treat the system as a network, and variables transfer information to each other in a way that can be measured," Lozano-Durán explains. "If one variable is sending messages to another, that implies it must have some influence. That's the idea of using information propagation to measure causality." The new algorithm evaluates multiple variables simultaneously, rather than taking on one pair of variables at a time, as other methods do. The algorithm defines information as the likelihood that a change in one variable will be accompanied by a change in another. This likelihood—and therefore, the information exchanged between variables—can get stronger or weaker as the algorithm evaluates more data from the system over time. In the end, the method generates a map of causality that shows which variables in the network are strongly linked.
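
The idea of measuring causality as information passed from one variable to another can be illustrated with a toy calculation. The sketch below is not the MIT team’s algorithm (which treats many variables jointly rather than in pairs); it is a minimal Python example that estimates time-lagged mutual information between two discretized signals with NumPy, so that a variable that “sends messages” to another shows up as a clearly nonzero score.

```python
# Toy illustration of information transfer as a causality proxy: time-lagged
# mutual information between two discretized signals. Not the MIT algorithm,
# which evaluates all variables of a system simultaneously.
import numpy as np

def lagged_mutual_information(x, y, lag=1, bins=8):
    """Estimate I(X_t ; Y_{t+lag}) in bits from two 1-D time series."""
    x, y = np.asarray(x)[:-lag], np.asarray(y)[lag:]
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(0)
driver = rng.normal(size=5000)
response = 0.9 * np.roll(driver, 1) + 0.1 * rng.normal(size=5000)  # driver influences response
noise = rng.normal(size=5000)                                      # unrelated variable

print("driver -> response:", round(lagged_mutual_information(driver, response), 3))
print("driver -> noise:   ", round(lagged_mutual_information(driver, noise), 3))
```

The driven variable shows a strong score while the unrelated one shows only the small bias from binning, which is the basic intuition behind treating information flow as evidence of influence.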


Proactive Preparation: Learning From CrowdStrike Chaos

You can’t plan for every scenario. However, having contingency plans can significantly minimise disruption if worst-case scenarios occur. Clear guidance, such as knowing who to speak to about the situation and when during outages, can help financial organisations quickly identify faults in their supply chains and restore services. ... Contractual obligations with software suppliers provide an added layer of protection if issues arise. They put a legally binding agreement in place to ensure suppliers handle the issue effectively. Escrow agreements are also key. They protect the critical source code behind applications by keeping a current copy in escrow and can help organisations manage risk if a supplier can no longer provide software or updates. ... Supply chains are complex. Software providers also rely on their own suppliers, creating an interconnected web of dependencies. Organisations in the sector should understand their suppliers’ contingency plans for handling disruptions in their wider supply chain. Knowing these plans provides peace of mind that suppliers are also prepared for disruptions and have effective steps in place to minimise any impact.


AI Drives Major Gains for Big 3 Cloud Giants

"Over the last four quarters, the market has grown by almost $16 billion, while over the previous four quarters the respective figure was $10 billion," John Dinsdale, chief analyst at Synergy Research Group, wrote in a statement. "Given the already massive size of the market, we are seeing an impressive surge in growth." ... The Azure OpenAI Service emerged as a particular bright spot, with usage more than doubling over the past six months. AI-based cloud services overall are helping Microsoft's cloud business. ... According to Pichai, Google Cloud's success centers on five strategic areas. First, its AI infrastructure demonstrated leading performance through advances in storage, compute, and software. Second, the enterprise AI platform, Vertex, showed remarkable growth, with Gemini API calls increasing nearly 14 times over a six-month period. ... Looking ahead, AWS plans increased capital expenditure to support AI growth. "It is a really unusually large, maybe once-in-a-lifetime type of opportunity," Jassy said about the potential of generative AI. "I think our customers, the business, and our shareholders will feel good about this long term that we're aggressively pursuing it."


GreyNoise: AI’s Central Role in Detecting Security Flaws in IoT Devices

GreyNoise’s Sift is powered by large language models (LLMs) trained on a massive amount of internet traffic, including traffic targeting IoT devices, and can identify anomalies that traditional systems could miss, they wrote. They said Sift can spot new anomalies and threats that haven’t been identified before or don’t fit the signatures of known threats. The honeypot analyzes real-time traffic, combines it with the vendor’s proprietary datasets, and runs the data through AI systems to separate routine internet activity from possible threats, which whittles down what human researchers need to focus on and delivers faster, more accurate results. ... The discovery of the vulnerabilities highlights the larger security issues for an IoT environment that numbers 18 billion devices worldwide this year and could grow to 32.1 billion by 2030. “Industrial and critical infrastructure sectors rely on these devices for operational efficiency and real-time monitoring,” the GreyNoise researchers wrote. “However, the sheer volume of data generated makes it challenging for traditional tools to discern genuine threats from routine network traffic, leaving systems vulnerable to sophisticated attacks.”
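
Separating routine traffic from possible threats, as described above, is classically done with unsupervised outlier detection; GreyNoise’s Sift layers LLMs and proprietary data on top of that, which is not reproduced here. The sketch below is only a generic Python illustration using scikit-learn’s IsolationForest on made-up per-source traffic features (request rate, distinct ports, payload size) to flag unusual sources for human review.

```python
# Generic anomaly-detection illustration (not GreyNoise Sift): an Isolation
# Forest flags traffic sources whose feature vectors look unlike routine
# background scanning, so analysts only review the outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Features per source: [requests/minute, distinct ports probed, mean payload bytes]
routine = rng.normal(loc=[20, 3, 200], scale=[5, 1, 40], size=(500, 3))
unusual = np.array([[400, 120, 1500],    # aggressive scanner hitting many ports
                    [5, 1, 9000]])       # rare source sending huge payloads
traffic = np.vstack([routine, unusual])

detector = IsolationForest(contamination=0.01, random_state=0).fit(traffic)
labels = detector.predict(traffic)       # +1 = routine, -1 = anomalous

flagged = np.where(labels == -1)[0]
print("sources flagged for review:", flagged)
```

In practice the value comes from the triage effect the article describes: the model shrinks millions of events down to a short list a human can actually investigate.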



Quote for the day:

"If you're not confused, you're not paying attention." -- Tom Peters