Daily Tech Digest - November 08, 2024

Improve Microservices With These New Load Balancing Strategies

Load balancing in a microservices setup is tricky yet crucial because it directly influences system availability and performance. To ensure that no single instance gets overloaded with user requests and to maintain operation even when one instance experiences issues, it is vital to distribute end-user requests among various service instances. This involves using service discovery to pinpoint available instances, dynamic load balancing to adjust to load changes, and fault-tolerant health checks that monitor instances and redirect traffic away from malfunctioning ones to maintain system stability. These tactics work together to guarantee a solid and efficient microservices setup. ... With distributed caching, intelligent load balancing, and event-driven system designs, microservices outperform today’s monolithic architectures in performance, scalability, and resilience. Microservices also make more efficient use of resources and deliver better response times, since individual components can be scaled as needed. However, one must remember that these performance improvements come at the cost of higher complexity: implementing them is an involved process that must be monitored and optimized repeatedly.
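To make the interplay concrete, here is a minimal sketch of round-robin distribution combined with health checks; the instance addresses and the random health probe are illustrative stand-ins for a real service registry and HTTP/gRPC probes.

```python
import itertools
import random  # stands in for a real health probe


class LoadBalancer:
    """Minimal round-robin balancer that skips unhealthy instances."""

    def __init__(self, instances):
        self.instances = instances  # e.g., discovered via a service registry
        self.healthy = set(instances)
        self._ring = itertools.cycle(instances)

    def run_health_checks(self):
        # In practice this would be an HTTP/gRPC probe per instance;
        # here a random stub marks each instance up or down.
        self.healthy = {i for i in self.instances if random.random() > 0.1}

    def next_instance(self):
        # Walk the ring until a healthy instance turns up.
        for _ in range(len(self.instances)):
            candidate = next(self._ring)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy instances available")


lb = LoadBalancer(["orders-1:8080", "orders-2:8080", "orders-3:8080"])
lb.run_health_checks()
print(lb.next_instance())
```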


Achieving Net Zero: The Role Of Sustainable Design In Tech Sector

With an increasing focus on radical climate action, environmentally responsible product design emerges as a vital tactic for achieving net zero. According to the latest research, more than two-thirds of organisations have reduced their carbon emissions as a result of implementing sustainable product design strategies. ... For businesses seeking to enhance sustainability, it is essential to adopt a holistic approach. This means not only focusing on specific products but also examining the entire life cycle, from design and packaging to end of life. It is crucial for all tech businesses to consider how sustainability can be maintained even after products and services have been purchased. Thus, enhancing product repairability is another key tactic to boost sustainability. Given that electronic waste contributes 70% of all toxic waste and only about 12% of all e-waste is currently recycled properly, any action individual consumers can take to repair or recycle their old tech responsibly is a step toward a cleaner future. By integrating design features such as keyboard-free battery connectors and providing instructional repair videos, companies can make it easier for customers to repair their products, extending their lifespan and ultimately reducing waste.


How to Maximize DevOps Efficiency with Platform Engineering

Platform engineering can also go awry when the solutions an organization offers are difficult to deploy. In theory, deploying a solution should be as simple as clicking a button or deploying a script. But buggy deployment tools, as well as issues related to inconsistent software environments, might mean that DevOps engineers have to spend time debugging and fixing flawed platform engineering offerings — or ask the IT team to do it. In that case, a solution that was supposed to save time and simplify collaboration ends up doing the opposite. Along similar lines, platform engineering delivers little value when the solutions don't consistently align with the organization's governance and security policies. This tends to be an issue in cases where different teams implement different solutions and each team follows its own policies, instead of adhering to organization-wide rules. (It can also happen because the organization simply lacks clear and consistent security policies.) If the environments and toolchains that DevOps teams launch through platform engineering are insecure or inconsistently configured, they hamper collaboration and fail to streamline software delivery processes.


How banks can supercharge technology speed and productivity

Banks that want to increase technology productivity typically must change how engineering and business teams work together. Getting from an idea for a new customer feature to the start of coding has historically taken three to six months. First, business and product teams write a business case, secure funding, get leadership buy-in, and write requirements. Most engineers are fast at producing code once the requirements are clear, but when they must wait six months before they even write the first line, productivity stalls. Taking a page from digital-native companies, a number of top-performing banks have created joint teams of product managers and engineers. Each integrated team operates as a mini-business, with product managers functioning as mini-CEOs who help their teams work together toward quarterly objectives and key results (OKRs). With everyone collaborating in this manner, there is less need for time-consuming handoff tasks such as creating formal requirements and change requests. This way of working also unlocks greater product development speed and enables much greater responsiveness to customer needs. While most financial institutions already manage their digital and mobile teams in this product-centric way, many still use a traditional project-centric approach for the majority of their teams.


Choosing AI: the 7 categories cybersecurity decision-makers need to understand

As cybersecurity professionals, we want to avoid the missteps of the last era of digital innovation, in which large companies developed web architecture and product stacks that dramatically centralized the apparatus of function across most sectors of the global economy. The era of online platforms underwritten by just a few interlinked developer and technology infrastructure firms showed us that centralized innovation often restricts the potential for personalization for end users, which limits the benefits. ... It’s true that a CISO might want AI systems that reduce options and make their practice easier, so long as the outputs being used are trustworthy. But if the current state of development gives us reason to be wary of analytic products, it gives us even more reason to be downright distrustful of products that generate, extrapolate preferences, or find consensus. At present, these product styles are promising but entirely insufficient to mitigate the risks involved in adopting such unproven technology. By contrast, CISOs should think seriously about adopting AI systems that facilitate information exchange and understanding, and even about those that play a direct role in executing decisions.


How GraphRAG Enhances LLM Accuracy and Powers Better Decision-Making

GraphRAG’s key benefit is its remarkable ability to improve LLMs’ accuracy and long-term reasoning capabilities. This is crucial because more accurate LLMs can automate increasingly complex and nuanced tasks and provide insights that fuel better decision-making. Additionally, higher-performing LLMs can be applied to a broader range of use cases, including those within sensitive industries that require a very high level of accuracy, such as healthcare and finance. That being said, human oversight is necessary as GraphRAG progresses. It’s vital that each answer or piece of information the technology produces is verifiable and that its reasoning can be traced back manually through the graph if necessary. In today’s world, success hinges on an enterprise’s ability to understand and properly leverage its data. But most organizations are swimming in hundreds of thousands of tables of data with little insight into what’s actually going on. This can lead to poor decision-making and technical debt if not addressed. Knowledge graphs are critical for helping enterprises make sense of their data, and when combined with RAG, the possibilities are endless. GraphRAG is propelling the next wave of generative AI, and organizations that understand this will be at the forefront of innovation.
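As a rough sketch of the idea, the snippet below builds a toy knowledge graph and assembles the fact chain an LLM would be grounded on; networkx, the entities, and the prompt format are illustrative assumptions, not GraphRAG's actual implementation.

```python
import networkx as nx

# Toy knowledge graph; real pipelines extract entities and relations
# from enterprise data at far larger scale.
g = nx.DiGraph()
g.add_edge("Acme Corp", "Product X", relation="manufactures")
g.add_edge("Product X", "Recall 2024-07", relation="subject_of")
g.add_edge("Recall 2024-07", "Battery defect", relation="caused_by")

def graph_context(graph, entity, hops=2):
    """Collect facts within `hops` of an entity to ground an LLM prompt."""
    nearby = nx.single_source_shortest_path_length(graph, entity, cutoff=hops)
    return "\n".join(
        f"{u} --{d['relation']}--> {v}"
        for u, v, d in graph.edges(data=True)
        if u in nearby and v in nearby
    )

context = graph_context(g, "Acme Corp")
prompt = f"Answer using only these facts:\n{context}\n\nQ: Why was Product X recalled?"
print(prompt)  # the explicit fact chain is what makes the answer traceable
```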


Why Banks Should Rethink ‘Every Company is a Software Company’

Refocusing on core strengths can yield substantial benefits. For example, by enhancing customer experience through personalized financial advice, banks can deepen customer loyalty and foster long-term relationships. Improving risk assessment processes can lead to more accurate lending decisions and better management of financial exposures. Ensuring rigorous regulatory compliance is not only crucial for avoiding costly penalties but also for preserving a strong reputation in the market. Outsourcing software and AI development to specialized providers is a strategic opportunity that can offer significant benefits. By partnering with technology firms, banks can tap into cutting-edge advancements without bearing the heavy burden of developing and maintaining them themselves. ... AI is a powerful ally, enabling financial institutions to streamline operations, innovate faster, and stay ahead in an ever-evolving market. To achieve sustainable success, however, these institutions need to rethink their approach to software and AI investments. By focusing on core competencies and leveraging specialized providers for technological needs, these institutions can optimize their operations and achieve the results they’re looking for.


Steps Organizations Can Take to Improve Cyber Resilience

Protecting endpoints will become increasingly important as more internet-enabled devices – laptops, smartphones, tablets, IoT hardware, and the like – hit the market. Endpoint protection is also essential for companies that embrace remote or hybrid work. By securing every possible endpoint, organizations address a common attack surface for cyberattackers. One of the fastest paths to endpoint protection is to invest in purpose-built solutions that go beyond basic antivirus software. To get ahead of cybersecurity threats, teams need real-time monitoring and threat detection capabilities. ... Cybersecurity teams should implement DNS filtering to prevent users from accessing websites that are known for hosting malicious activity. Technology solutions specifically designed for DNS filtering can also evaluate requests in real time between devices and websites before determining whether to allow the connection. Additionally, they can evaluate overall traffic patterns and user behaviors, helping IT leaders make more informed decisions about how to boost web security practices across the organization. ... Achieving cyber resilience is an ongoing process. The digital landscape changes constantly, and the best way to keep up is to make cybersecurity a focal point of everyday operations.
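A toy version of the DNS-filtering idea might look like this; the blocklist entries are invented, and a production filter would consume continuously updated threat-intelligence feeds rather than a hard-coded set.

```python
import socket

# Hypothetical blocklist; real DNS filters pull from threat-intelligence
# feeds that are updated continuously.
BLOCKLIST = {"malicious-example.com", "phish-example.net"}

def filtered_resolve(hostname: str) -> str:
    """Resolve a hostname only if policy allows the connection."""
    domain = hostname.lower().rstrip(".")
    if domain in BLOCKLIST or any(domain.endswith("." + b) for b in BLOCKLIST):
        raise PermissionError(f"DNS request for {hostname} blocked by policy")
    return socket.gethostbyname(hostname)  # let the connection proceed

print(filtered_resolve("example.com"))
```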


The future of super apps: Decentralisation and security in a new digital ecosystem

Decentralised super apps could redefine public utility by providing essential services without private platform fees, making them accessible and affordable. This approach would serve the public interest by enabling fairer, community-driven access to essential services. For example, a decentralised grocery delivery service might allow local vendors to reach consumers without relying on platforms like Blinkit or Zepto, potentially lowering costs and supporting local businesses. As blockchain technology progresses, decentralised finance (DeFi) can also be integrated into super apps, allowing users to manage transactions securely and privately. ... Despite the potential, the path to decentralised super apps comes with challenges. Building a secure, decentralised platform requires sophisticated blockchain infrastructure, a high level of trust, and user education. Blockchain technology is still evolving, and decentralised applications (dApps) often face issues with scalability, user adoption, and regulatory scrutiny. For instance, certain countries have strict data privacy laws that could either facilitate or hinder the adoption of decentralised super apps depending on the regulatory stance towards blockchain.


Digital Transformation in Banking: Don't Let Technology Steal Your Brand

A clear, purpose-driven brand that communicates empathy, reliability, and transparency is essential to winning and retaining customer trust. Banks that invest in branding as part of their digital transformation connect with customers on a deeper level, creating bonds that withstand market fluctuations and competitive pressures. ... The focus on digital transformation has intensified competition among banks to adopt the latest technologies. While technology is essential for operational efficiency and customer convenience, it’s not the core of a bank’s identity. A bank’s brand is built on values like trust, reliability, and customer service—values that technology should reinforce, not replace. Banks need to keep a clear sight of their purpose: to serve customers’ financial well-being, empower their dreams, and create trust in every interaction. ... It’s tempting to jump on the latest tech trends to stay competitive, but each technological investment should reflect the bank’s brand values and serve customer needs. For instance, mobile banking apps, digital wallets, and AI-based financial planning tools all present opportunities to deepen brand connections.



Quote for the day:

“The final test of a leader is that he leaves behind him in other men the conviction and the will to carry on.” -- Walter Lippmann

Daily Tech Digest - November 07, 2024

Keep Learning or Keep Losing: There's No Finish Line

Traditional training and certifications are a starting point, but they're often not enough to prepare professionals for real-world challenges. Current research supports a need for cybersecurity education to be interactive, with practical approaches that deepen both engagement and understanding. ... For cybersecurity professionals, a commitment to lifelong learning is a career advantage. Those who prioritize continuous education stand out, not only because they keep pace with industry advancements but also because they demonstrate a proactive mindset valued by employers. Embracing lifelong learning positions professionals for growth, higher responsibility and leadership opportunities within their organizations. Organizations that foster a culture of continuous learning create an environment in which employees feel empowered and supported in their growth. These organizations often find they retain talent longer and perform better in crisis situations because their teams are both knowledgeable and resilient. By prioritizing ongoing education, companies can cultivate a workforce that's agile, engaged and better prepared to face cyberthreats head-on. In cybersecurity, the question isn't whether you'll keep learning - it's how you'll keep learning. 


Top 5 security mistakes software developers make

“A very common practice is the lack of or incorrect input validation,” Tanya Janca, who is writing her second book on application security and has consulted for many years on the topic, tells CSO. Snyk has also blogged about this, saying that developers need to “ensure accurate input validation and that the data is syntactically and semantically correct.” Stackhawk wrote, “always make sure that the backend input is validated and sanitized properly.” ... One aspect of lax authentication has to do with what is called “secrets sprawl,” the mistake of using hard-coded credentials in the code, including API and encryption keys and login passwords. GitGuardian tracks this issue and found that almost every secret exposed this way remained active for at least five days after the software’s author was notified. They found that a tenth of open-source authors leaked a secret, which amounts to about 1.7 million developers exposing credentials. ... But there is a second issue that goes to understanding security culture so you can make the right choices of tools that will actually get deployed by your developers. Jeevan Singh blogs about this issue, mentioning that you have to start small and not just go shopping for everything all at once, “so as not to overwhelm your engineering organization with huge lists of vulnerabilities. ..."
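In the spirit of Snyk's advice, here is a minimal sketch of syntactic plus semantic validation; the field names and rules are invented for illustration.

```python
import re
from html import escape

# Syntactic rule: an allow-list pattern, not a block-list of bad characters.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(raw: str) -> str:
    """Reject anything that is not syntactically valid."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def validate_age(raw: str) -> int:
    """Semantic check: being a number is not enough, it must be plausible."""
    age = int(raw)  # raises ValueError on non-numeric input
    if not 0 < age < 130:
        raise ValueError("age out of range")
    return age

# Sanitize before rendering so stored input cannot become markup.
safe_comment = escape("<script>alert(1)</script>")
print(validate_username("alice_42"), validate_age("29"), safe_comment)
```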


There is No Autonomous Network Without Observability

One of the best things about observability is how it strengthens network resilience. Downtime not only damages your reputation and frustrates your customers; it is also flat-out expensive. Observability helps you spot vulnerabilities before they become major issues. With real-time insights, you can jump in and make fixes before they lead to downtime or degraded performance. Plus, observability works hand-in-hand with AI-driven assurance systems. By constantly monitoring performance, these systems diligently look for patterns that might hint at future problems. They can make proactive adjustments, which cut down on the need for manual intervention. The result? A network that is more self-reliant, adaptive, and able to keep running smoothly. Observability doesn’t just stop there—it also steps up your security game. With threat detection built into every layer of the network, observability helps your network identify and deal with security issues in real time, making it not just self-healing but self-securing. ... Today’s networks are not confined to one domain anymore. We are working with multi-domain networks that tie together radio, transport, and cloud technologies. That creates a massive amount of data, and managing that data in real time is a challenge.


Building a better future: The enterprise architect’s role in leading organizational transformation

Architects bring unique capabilities that make them well-suited for leadership roles in an evolving business landscape. Their core strength lies in aligning technology with business goals. This keeps innovation and growth interconnected. Unlike traditional executives, architects have a holistic view of both domains, allowing them to see the big picture and drive meaningful change. With deep technical expertise, architects can navigate complex systems, platforms, and infrastructures. But their strategic thinking sets them apart—they don’t just focus on technology in isolation. They understand how it drives business value, enabling them to make informed decisions that benefit both the organization and its customers. Moreover, architects are natural collaborators. They excel at bridging gaps between different business units, fostering cross-functional teams, and ensuring integrated solutions that work for the entire organization. This ability to collaborate across departments makes them ideal for leadership in a world that values adaptability, inclusivity, and alignment over rigid command structures. The shift from a ‘command and control’ leadership mode to one of ‘align and collaborate’ is transforming how organizations are managed. 


How ‘Cheap Fakes’ Exploit Our Psychological Vulnerabilities

Cheap fakes exploit a range of psychological vulnerabilities, like fear, greed, and curiosity. These vulnerabilities make social engineering attacks prevalent across the board -- over two-thirds of data breaches involve a human element -- but cheap fakes are particularly effective at leveraging them. This is because many people are unable to identify manipulated media, particularly when it aligns with their preconceptions and existing biases. According to a study published in Science, false news spreads much faster than accurate information on social media. Researchers found several explanations for this phenomenon: false news tends to be more novel than the truth, and the stories elicited “fear, disgust, and surprise in replies.” Cheap fakes rely on these emotions to spread quickly and capture victims’ attention -- they create inflammatory imagery, aim to increase political and social division, and often present fragments of authentic content to produce the illusion of legitimacy. At a time when cheap fakes and deepfakes are rapidly proliferating, IT teams must emphasize a core principle of cybersecurity: Verify before you trust. Employees should be taught to doubt their initial reactions to digital content, particularly when that content is sensational, coercive, or divisive.... 


Cloud vs. On-Prem: Comparing Long-Term Costs

You’ve seen many reports of companies saving millions of dollars by moving a portion or a majority of their workloads out of the cloud. The price point at which leaving the cloud becomes financially viable will depend on your workload, business requirements, and other factors, but here are some basic guidelines to consider. Big cloud providers have historically made moving all your data out of their cloud cost-prohibitive. Saving millions of dollars on computing will not make sense if it costs millions to move your data. ... You would have to reduce your cloud spend by 90-96% to save as much money as buying hardware. Reserved and spot instances may save money, but never that much. Budgeting hardware and colocation space will be easier to engineer and more predictable for your long-term projected spending. Spending this much money also means you are likely continuously upgrading based on your cloud provider’s upgrade requirements. You will frequently upgrade operating systems, database versions, Kubernetes clusters, and serverless runtimes, and you have no agency to delay them until it works best for your business. But saving on people costs isn’t the only benefit. A frequent phrase when using the cloud is “opportunity cost.”
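The break-even arithmetic is easy to sketch; every figure below is an assumption for illustration, not a quote from any provider.

```python
# Illustrative monthly comparison, all numbers assumed.
monthly_cloud_spend = 1_000_000        # current cloud bill, USD
hardware_capex = 2_400_000             # servers, amortized over their lifetime
amortization_months = 48
colo_staff_monthly = 50_000            # colocation space, power, ops staff

on_prem_monthly = hardware_capex / amortization_months + colo_staff_monthly
required_reduction = 1 - on_prem_monthly / monthly_cloud_spend

print(f"On-prem equivalent: ${on_prem_monthly:,.0f}/month")
print(f"Cloud spend must fall by {required_reduction:.0%} to match it")
# -> 90% with these assumed figures, in line with the 90-96% range cited above
```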


Data Center Regulation Trends to Watch in 2025

Governments are increasingly focused on creating new or updated regulations to strengthen digital resiliency and cybersecurity because of the growing importance of IT in critical services, rising geopolitical tensions, the explosion of cyberattacks, and increased outsourcing to cloud, according to the Uptime Institute. The EU’s DORA requires the finance industry to establish a risk management framework that covers business continuity and disaster recovery plans (including data backup and recovery); incident reporting; digital operational resilience testing; sharing information on cyber threats with other financial institutions; and managing the risk of third-party information and communications technology (ICT) providers, such as cloud providers. “You’ve got to make sure your data center is robust, resilient, and that it doesn’t go down. And if it does go down, you’re responsible for it,” said Rahiel Nasir, IDC’s associate research director of European Cloud and lead analyst of worldwide digital sovereignty. Financial businesses will have to ensure their third-party providers meet regulatory requirements by negotiating compliance into their contracts. As a result, both the finance sector and their service providers will need to implement the tools and procedures necessary to comply with DORA, an IDC report said.


How AI will shape the next generation of cyber threats

In essence, AI turns advanced attack strategies into point-and-click operations, removing the need for deep technical knowledge. Attackers won’t need to write custom code or conduct in-depth research to exploit vulnerabilities. Instead, AI systems will analyze target environments, find weaknesses and even adapt attack patterns in real time without requiring much input from the user. This shift greatly widens the pool of potential attackers. Organizations that have traditionally focused on defending against nation-state actors and professional hacker groups will now have to contend with a much broader range of threats. Eventually, AI will empower individuals with limited tech knowledge to execute attacks rivaling those of today’s most advanced adversaries. To stay ahead, defenders must match this acceleration with AI-powered defenses that can predict, detect and neutralize threats before they escalate. In this new environment, success will depend not just on reacting to attacks but on anticipating them. Organizations will need to adopt predictive AI capabilities that can evolve alongside the rapidly shifting threat landscape, staying one step ahead of attackers who now have unprecedented power at their fingertips.


Navigating Privacy and Ethics in the Military use of AI

The report articulates the importance of integrating data governance into the development and deployment of military AI systems, and stresses that as military AI becomes increasingly central to national defense, so too does the need for clear, ethical, and transparent practices surrounding the data used to train these systems. “Data plays a critical role in the training, testing, and use of artificial intelligence, including in the military domain,” the report says, emphasizing that “research and development for AI-enabled military solutions is proceeding at breakneck speed” and therefore “the important role data plays in shaping these technologies have implications and, at times, raises concerns.” The report says “these issues are increasingly subject to scrutiny and range from difficulty in finding or creating training and testing data relevant to the military domain, to (harmful) biases in training data sets, as well as their susceptibility to cyberattacks and interference (for example, data poisoning),” and points out that “pathways and governance solutions to address these issues remain scarce and very much underexplored.” Afina and Sarah Grand-Clément said the risk of data breaches or unauthorized access to military data also is a critical concern. 


AI in Cybersecurity: Balancing Innovation with Risk

Generative AI has advanced to a point where it can produce unique, grammatically sound, and contextually relevant content. Cybercriminals utilise this technology to create convincing phishing emails, text messages, and other forms of communication that mimic legitimate interactions. Unlike traditional phishing attempts, which often exhibit suspicious language or grammatical errors, AI-generated content can evade detection and manipulate targets more effectively. Furthermore, AI can produce deepfake videos or audio recordings that convincingly impersonate trusted individuals, increasing the likelihood of successful scams. ... AI, particularly Machine Learning (ML) and deep learning, can be instrumental in detecting suspicious activities and identifying abnormal patterns in network traffic. AI can establish a baseline of normal behavior by analysing vast datasets, including traffic trends, application usage, browsing habits, and other network activity. This baseline can serve as a guide for spotting anomalies and potential threats. AI’s ability to process large volumes of data in real-time means it can flag suspicious activities faster and more accurately, enabling immediate remediation and minimising the chances of a successful cyberattack. 
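A minimal version of the baseline-and-anomaly idea is a rolling statistical check like the one below; the traffic numbers are invented, and production systems use far richer ML models over many signals.

```python
import statistics

# Hypothetical per-minute request counts forming the learned baseline.
baseline = [510, 498, 523, 505, 517, 499, 508, 512, 503, 520]

def is_anomalous(observation: float, window: list, threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from the baseline."""
    mean = statistics.mean(window)
    spread = statistics.stdev(window)
    return abs(observation - mean) > threshold * spread

for value in (515, 2100):  # normal traffic vs. a sudden spike
    status = "anomalous" if is_anomalous(value, baseline) else "normal"
    print(value, status)
```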



Quote for the day:

“It’s better to look ahead and prepare, than to look back and regret.” -- Jackie Joyner-Kersee

Daily Tech Digest - November 06, 2024

Enter the ‘Whisperverse’: How AI voice agents will guide us through our days

Within the next few years, an AI-powered voice will burrow into your ears and take up residence inside your head. It will do this by whispering guidance to you throughout your day, reminding you to pick up your dry cleaning as you walk down the street, helping you find your parked car in a stadium lot and prompting you with the name of a coworker you pass in the hall. It may even coach you as you hold conversations with friends and coworkers, or when out on dates, give you interesting things to say that make you seem smarter, funnier and more charming than you really are. ... Most of these devices will be deployed as AI-powered glasses because that form factor gives the best vantage point for cameras to monitor our field of view, although camera-enabled earbuds will be available too. The other benefit of glasses is that they can be enhanced to display visual content, enabling the AI to provide silent assistance as text, images, and realistic immersive elements that are integrated spatially into our world. Also, sensor-equipped glasses and earbuds will allow us to respond silently to our AI assistants with simple head nod gestures of agreement or rejection, as we naturally do with other people. ... On the other hand, deploying intelligent systems that whisper in your ears as you go about your life could easily be abused as a dangerous form of targeted influence.


How to Optimize Last-Mile Delivery in the Age of AI

Technology is at the heart of all advancements in last-mile delivery. For instance, a typical map application gives the longitude and latitude of a building — its location — and a central access point. That isn't enough data when it comes to deliveries. In addition to how much time it takes to drive or walk from point A to point B, it's also essential for a driver to understand what to do at point B. At an apartment complex, for example, they need to know what units are in each building and on which level, whether to use a front, back, or side entrance, how to navigate restricted or gated areas, and how to access parking and loading docks or package lockers. Before GenAI, third-party vendors usually acquired this data, sold it to companies, and applied it to map applications and routing algorithms to provide delivery estimates and instructions. Now, companies can use GenAI in-house to optimize routes and create solutions to delivery obstacles. Suppose the data surrounding an apartment complex is ambiguous or unclear. For instance, there may be conflicting delivery instructions — one transporter used a drop-off area, and another used a front door. Or perhaps one customer was satisfied with their delivery, but another parcel delivered to the same location was damaged or stolen. 


Cloud providers make bank with genAI while projects fail

Poor data quality is a central factor contributing to project failures. As companies venture into more complex AI applications, the demand for tailored, high-quality data sets has exposed deficiencies in existing enterprise data. Although most enterprises understood that their data could be better, they didn’t know how bad it was. For years, enterprises have been kicking the data can down the road, unwilling to fix it, while technical debt gathered. AI requires excellent, accurate data that many enterprises don’t have—at least, not without putting in a great deal of work. This is why many enterprises are giving up on generative AI. The data problems are too expensive to fix, and many CIOs who know what’s good for their careers don’t want to take it on. The intricacies in labeling, cleaning, and updating data to maintain its relevance for training models have become increasingly challenging, underscoring another layer of complexity that organizations must navigate. ... The disparity between the potential and practicality of generative AI projects is leading to cautious optimism and reevaluations of AI strategies. This pushes organizations to carefully assess the foundational elements necessary for AI success, including robust data governance and strategic planning—all things that enterprises are considering too expensive and too risky to deploy just to make AI work.


Why cybersecurity needs a better model for handling OSS vulnerabilities

Identifying vulnerabilities and navigating vulnerability databases is of course only part of the dependency problem; the real work lies in remediating identified vulnerabilities impacting systems and software. Aside from general bandwidth challenges and competing priorities among development teams, vulnerability management also suffers from challenges around remediation, such as the real potential that implementing changes and updates can potentially impact functionality or cause business disruptions. ... Reachability analysis “offers a significant reduction in remediation costs because it lowers the number of remediation activities by an average of 90.5% (with a range of approximately 76–94%), making it by far the most valuable single noise-reduction strategy available,” according to the Endor report. While the security industry can beat the secure-by-design drum until they’re blue in the face and try to shame organizations into sufficiently prioritizing security, the reality is that our best bet is having organizations focus on risks that actually matter. ... In a world of competing interests, with organizations rightfully focused on business priorities such as speed to market, feature velocity, revenue and more, having developers quit wasting time and focus on the 2% of vulnerabilities that truly present risks to their organizations would be monumental.
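The effect of reachability analysis on workload is simple to picture: filter findings down to the ones an actual code path can hit. A toy sketch, with hand-labeled reachability standing in for real call-graph analysis:

```python
# Toy SCA findings; in a real engine "reachable" comes from call-graph
# analysis of the application, not hand labeling.
findings = [
    {"id": "CVE-2024-0001", "package": "libfoo", "severity": "critical", "reachable": False},
    {"id": "CVE-2024-0002", "package": "libbar", "severity": "high", "reachable": True},
    {"id": "CVE-2024-0003", "package": "libbaz", "severity": "medium", "reachable": False},
]

actionable = [f for f in findings if f["reachable"]]
noise_cut = 1 - len(actionable) / len(findings)
print(f"Remediate {len(actionable)} of {len(findings)} findings "
      f"({noise_cut:.0%} of the noise removed)")
```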


The new calling of CIOs: Be the moral arbiter of change

Unfortunately, establishing a strategy for democratizing innovation through gen AI is far from straightforward. Many factors, including governance, security, ethics, and funding, are important, and it’s hard to establish ground rules. ... What’s clear is tech-led innovation is no longer the sole preserve of the IT department. Fifteen years ago, IT was often a solution searching for a problem. CIOs bought technology systems, and the rest of the business was expected to put them to good use. Today, CIOs and their teams speak with their peers about their key challenges and suggest potential solutions. But gen AI, like cloud computing before it, has also made it much easier for users to source digital solutions independently of the IT team. That high level of democratization doesn’t come without risks, and that’s where CIOs, as the guardians of enterprise technology, play a crucial role. IT leaders understand the pain points around governance, implementation, and security. Their awareness means responsibility for AI, and other emerging technologies have become part of a digital leader’s ever-widening role, says Rahul Todkar, head of data and AI at travel specialist Tripadvisor.


5 Strategies For Becoming A Purpose-Driven Leader

Purpose-driven leaders are fueled by more than sheer ambition; they are driven by a commitment to make a meaningful impact. They inspire those around them to pursue a shared purpose each day. This approach is especially powerful in today’s workforce, where 70% of employees say their sense of purpose is closely tied to their work, according to a recent report by McKinsey. Becoming a purpose-driven leader requires clarity, strategic foresight, and a commitment to values that go beyond the bottom line. ... Aligning your values with your leadership style and organizational goals is essential for authentic leadership. “Once you have a firm grasp of your personal values, you can align them with your leadership style and organizational goals. This alignment is crucial for maintaining authenticity and ensuring that your decisions reflect your deeper sense of purpose,” Blackburn explains. ... Purpose-driven leaders embody the values and behaviors they wish to see reflected in their teams. Whether through ethical decision-making, transparency, or resilience in the face of challenges, purpose-driven leaders set the tone for how others in the organization should act. By aligning words with actions, leaders build credibility and trust, which are the foundations of sustainable success.


Chaos Engineering: The key to building resilient systems for seamless operations

The underlying philosophy of Chaos Engineering is to encourage building systems that are resilient to failures. This means incorporating redundancy into system pathways, so that the failure of one path does not disrupt the entire service. Additionally, self-healing mechanisms can be developed such as automated systems that detect and respond to failures without the need for human intervention. These measures help ensure that systems can recover quickly from failures, reducing the likelihood of long-lasting disruptions. To effectively implement Chaos Engineering and avoid incidents like the payments outage, organisations can start by formulating hypotheses about potential system weaknesses and failure points. They can then design chaos experiments that safely simulate these failures in controlled environments. Tools such as Chaos Monkey, Gremlin, or Litmus can automate the process of failure injection and monitoring, enabling engineers to observe system behaviour in response to simulated disruptions. By collecting and analysing data from these experiments, organisations can learn from the failures and use these insights to improve system resilience. This process should be iterative, and organisations should continuously run new experiments and refine their systems based on the results.
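A chaos experiment can be as small as wrapping a dependency call with injected latency and failures, as in this hypothetical sketch (the service name, failure rate, and latencies are all made up):

```python
import random
import time

def chaos_wrap(func, failure_rate=0.1, max_latency_s=2.0):
    """Wrap a call with injected latency and failures.

    Hypothesis under test: callers tolerate a slow or failing dependency.
    """
    def wrapped(*args, **kwargs):
        time.sleep(random.uniform(0, max_latency_s))  # latency injection
        if random.random() < failure_rate:
            raise ConnectionError("chaos: injected dependency failure")
        return func(*args, **kwargs)
    return wrapped

def payment_service(amount):
    return f"charged {amount}"

experiment = chaos_wrap(payment_service)
for _ in range(5):
    try:
        print(experiment(10))
    except ConnectionError as err:
        print("observed:", err)  # record outcomes, compare against the hypothesis
```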


Shifting left with telemetry pipelines: The future of data tiering at petabyte scale

In the context of observability and security, shifting left means accomplishing the analysis, transformation, and routing of logs, metrics, traces, and events very far upstream, extremely early in their usage lifecycle — a very different approach in comparison to the traditional “centralize then analyze” method. By integrating these processes earlier, teams can not only drastically reduce costs for otherwise prohibitive data volumes, but can even detect anomalies, performance issues, and potential security threats much quicker, before they become major problems in production. The rise of microservices and Kubernetes architectures has specifically accelerated this need, as the complexity and distributed nature of cloud-native applications demand more granular and real-time insights, and each localized data set is distributed when compared to the monoliths of the past. ... As telemetry data continues to grow at an exponential rate, enterprises face the challenge of managing costs without compromising on the insights they need in real time, or the requirement of data retention for audit, compliance, or forensic security investigations. This is where data tiering comes in. Data tiering is a strategy that segments data into different levels based on its value and use case, enabling organizations to optimize both cost and performance.
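In code, a tiering decision made upstream in the pipeline might look like the sketch below; the tier names, retention windows, and routing rules are illustrative assumptions rather than any standard.

```python
from datetime import timedelta

def route_event(event: dict) -> str:
    """Assign a telemetry event to a storage tier close to the source."""
    if event["severity"] in ("error", "critical") or event.get("security_relevant"):
        return "hot"   # indexed, queryable in real time
    if event["age"] < timedelta(days=7):
        return "warm"  # cheaper store, slower queries
    return "cold"      # object storage for audit, compliance, forensics

events = [
    {"severity": "critical", "age": timedelta(minutes=1)},
    {"severity": "info", "age": timedelta(days=2)},
    {"severity": "debug", "age": timedelta(days=90)},
]
for e in events:
    print(e["severity"], "->", route_event(e))
```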


A Transformative Journey: Powering the Future with Data, AI, and Collaboration

The advancements in industrial data platforms and contextualization have been nothing short of remarkable. By making sense of data from different systems—whether through 3D models, images, or engineering diagrams—Cognite is enabling companies to build a powerful industrial knowledge graph, which can be used by AI to solve complex problems faster and more effectively than ever before. This new era of human-centric AI is not about replacing humans but enhancing their capabilities, giving them the tools to make better decisions, faster. Without buy-in from the people who will be affected by any new innovation or technology, success is unlikely. Engaging these individuals early on in the process to solve the issues they find challenging, mundane, or highly repetitive is critical to driving adoption and creating internal champions to further catalyze adoption. In a fascinating case study shared by one of Cognite’s partners, we learned about the transformative potential of data and AI in the chemical manufacturing sector. A plant operator described how the implementation of mobile devices powered by Cognite’s platform has drastically improved operational efficiency.


Four Steps to Balance Agility and Security in DevSecOps

Tools like OWASP ZAP and Burp Suite can be integrated into continuous integration/continuous delivery (CI/CD) pipelines to automate security testing. For example, LinkedIn uses Ansible to automate its infrastructure provisioning, which reduces deployment times by 75%. By automating security checks, LinkedIn ensures that its rapid delivery processes remain secure. Automating security not only enhances speed but also improves the overall quality of software by catching issues before they reach production. Automated tools can perform static code analysis, vulnerability scanning and penetration testing without disrupting the development cycle, helping teams deploy secure software faster. ... As organizations look to the future, artificial intelligence (AI) and machine learning (ML) will play a crucial role in enhancing both security and agility. AI-driven security tools can predict potential vulnerabilities, automate incident response and even self-heal systems without human intervention. This not only improves security but also reduces the time spent on manual security reviews. AI-powered tools can analyze massive amounts of data, identifying patterns and potential threats that human teams may overlook. This can reduce downtime and the risk of cyberattacks, ultimately allowing organizations to deploy faster and more securely.
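As one concrete shape this integration can take, here is a hedged sketch of a CI step that runs ZAP's baseline scan from Python and fails the job on findings; the Docker image tag and target URL are assumptions to adapt to your pipeline.

```python
import subprocess
import sys

TARGET = "https://staging.example.com"  # hypothetical staging environment

# zap-baseline.py is ZAP's packaged passive-scan script; running it via
# the project's Docker image keeps the CI runner clean.
result = subprocess.run(
    [
        "docker", "run", "--rm", "-t",
        "ghcr.io/zaproxy/zaproxy:stable",
        "zap-baseline.py", "-t", TARGET,
    ],
    capture_output=True,
    text=True,
)
print(result.stdout)
sys.exit(result.returncode)  # a nonzero exit fails the CI job on findings
```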



Quote for the day:

"If you are truly a leader, you will help others to not just see themselves as they are, but also what they can become." -- David P. Schloss

Daily Tech Digest - November 05, 2024

GenAI in healthcare: The state of affairs in India

Currently, the All India Institute of Medical Sciences (AIIMS) Delhi is the only public healthcare institution exploring AI-driven solutions. AIIMS, in collaboration with the Ministry of Electronics & Information Technology and the Centre for Development of Advanced Computing (C-DAC) Pune, launched the iOncology.ai platform to support oncologists in making informed cancer treatment decisions. The platform uses deep learning models to detect early-stage ovarian cancer, and available data shows this has already improved patient outcomes while reducing healthcare costs. This is one of the few key AI-driven initiatives in India. Although AI adoption in the healthcare provider segment is relatively high at 68%, a large portion of deployments are still in the PoC phase. What could transform India’s healthcare with Generative AI? What could help bring care to those who need it most? ... India has tremendous potential in machine intelligence, especially as we develop our own Gen AI capabilities. In healthcare, however, the pace of progress is hindered by financial constraints and a shortage of specialists in the field. Concerns over data breaches and cybersecurity incidents also contribute to this aversion.


OWASP Beefs Up GenAI Security Guidance Amid Growing Deepfakes

To help organizations develop stronger defenses against AI-based attacks, the Top 10 for LLM Applications & Generative AI group within the Open Worldwide Application Security Project (OWASP) released a trio of guidance documents for security organizations on Oct. 31. To its previously released AI cybersecurity and governance checklist, the group added a guide for preparing for deepfake events, a framework to create AI security centers of excellence, and a curated database on AI security solutions. ... The trajectory of deepfakes is quite easy to predict — even if they are not good enough to fool most people today, they will be in the future, says Eyal Benishti, founder and CEO of Ironscales. That means that human training will likely only go so far. AI videos are getting eerily realistic, and a fully digital twin of another person controlled in real time by an attacker — a true "sock puppet" — is likely not far behind. "Companies want to try and figure out how they get ready for deepfakes," he says. "They are realizing that this type of communication cannot be fully trusted moving forward, which ... will take people some time to realize and adjust." In the future, since the telltale artifacts will be gone, better defenses are necessary, Exabeam's Kirkwood says.


Open-source software: A first attempt at organization after CRA

The Cyber Resilience Act was a shock that awakened many people from their comfort zone: How dare the “technical” representatives of the European Union question the security of open-source software? The answer is very simple: because we never told them, and they assumed it was because no one was concerned about security. ... The CRA requires software with automatic updates to roll out security updates automatically by default, while allowing users to opt out.  Companies must conduct a cyber risk assessment before a product is released and throughout 10 years or its expected lifecycle, and must notify the EU cybersecurity agency ENISA of any incidents within 24 hours of becoming aware of them, as well as take measures to resolve them. In addition to that, software products must carry the CE marking to show that they meet a minimum level of cybersecurity checks. Open-source stewards will have to care about the security of their products but will not be asked to follow these rules. In exchange, they will have to improve the communication and sharing of best security practices, which are already in place, although they have not always been shared. So, the first action was to create a project to standardize them, for the entire open-source software industry.


10 ways hackers will use machine learning to launch attacks

Attackers aren’t just using machine-learning security tools to test if their messages can get past spam filters. They’re also using machine learning to create those emails in the first place, says Adam Malone, a former EY partner. “They’re advertising the sale of these services on criminal forums. They’re using them to generate better phishing emails. To generate fake personas to drive fraud campaigns.” These services are specifically being advertised as using machine learning, and it’s probably not just marketing. “The proof is in the pudding,” Malone says. “They’re definitely better.” ... Criminals are also using machine learning to get better at guessing passwords. “We’ve seen evidence of that based on the frequency and success rates of password guessing engines,” Malone says. Criminals are building better dictionaries to hack stolen hashes. They’re also using machine learning to identify security controls, “so they can make fewer attempts and guess better passwords and increase the chances that they’ll successfully gain access to a system.” ... The most frightening use of artificial intelligence is the deepfake tools that can generate video or audio that is hard to distinguish from a real human. “Being able to simulate someone’s voice or face is very useful against humans,” says Montenegro.


Breaking Free From the Dead Zone: Automating DevOps Shifts for Scalable Success

If ‘Shift Left’ is all about integrating processes closer to the source code, ‘Shift Right’ offers a complementary approach by tackling challenges that arise after deployment. Some decisions simply can’t be made early in the development process. For example, which cloud instances should you use? How many replicas of a service are necessary? What CPU and memory allocations are appropriate for specific workloads? These are classic ‘Shift Right’ concerns that have traditionally been managed through observability and system-generated recommendations. Consider this common scenario: when deploying a workload to Kubernetes, DevOps engineers often guess the memory and CPU requests, specifying these in YAML configuration files before anything is deployed. But without extensive testing, how can an engineer know the optimal settings? Most teams don’t have the resources to thoroughly test every workload, so they make educated guesses. Later, once the workload has been running in production and actual usage data is available, engineers revisit the configurations. They adjust settings to eliminate waste or boost performance, depending on what’s needed. It’s exhausting work and, let’s be honest, not much fun.
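For reference, these are the fields in question; the manifest below is hypothetical, and the request/limit values are exactly the kind of educated guesses the text describes, to be revisited once production data exists.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 3                  # another classic Shift Right unknown
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: app
          image: example/app:1.0
          resources:
            requests:          # initial guesses made before deployment
              cpu: "250m"
              memory: "256Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
```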


5 cloud market trends and how they will impact IT

“Capacity growth will be driven increasingly by the even larger scale of those newly opened data centers, with generative AI technology being a prime reason for that increased scale,” Synergy Research writes. Not surprisingly, the companies with the broadest data center footprint are Amazon, Microsoft, and Google, which account for 60% of all hyperscale data center capacity. And the announcements from the Big 3 are coming fast and furious. ... “In effect, industry cloud platforms turn a cloud platform into a business platform, enabling an existing technology innovation tool to also serve as a business innovation tool,” says Gartner analyst Gregor Petri. “They do so not as predefined, one-off, vertical SaaS solutions, but rather as modular, composable platforms supported by a catalog of industry-specific packaged business capabilities.” ... There are many reasons for cloud bills increasing, beyond simple price hikes. Linthicum says organizations that simply “lifted and shifted” legacy applications to the public cloud, rather than refactoring or rewriting them for cloud optimization, ended up with higher costs. Many organizations overprovisioned and neglected to track cloud resource utilization. On top of that, organizations are constantly expanding their cloud footprint.


The Modern Era of Data Orchestration: From Data Fragmentation to Collaboration

Data systems have always needed to make assumptions about file, memory, and table formats, but in most cases, they've been hidden deep within their implementations. A narrow API for interacting with a data warehouse or data service vendor makes for clean product design, but it does not maximize the choices available to end users. ... In a closed system, the data warehouse maintains its own table structure and query engine internally. This is a one-size-fits-all approach that makes it easy to get started but can be difficult to scale to new business requirements. Lock-in can be hard to avoid, especially when it comes to capabilities like governance and other services that access the data. Cloud providers offer seamless and efficient integrations within their ecosystems because their internal data format is consistent, but this may close the door on adopting better offerings outside that environment. Exporting to an external provider instead requires maintaining connectors purpose-built for the warehouse's proprietary APIs, and it can lead to data sprawl across systems. ... An open, deconstructed system standardizes its lowest-level details. This allows businesses to pick and choose the best vendor for a service while having the seamless experience that was previously only possible in a closed ecosystem.
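A small illustration of what standardizing the lowest-level details buys: once data lands in an open file format such as Parquet, any engine that speaks the format can read it, with no vendor connector in between (pyarrow here is just one convenient choice).

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Write a table in an open, standardized file format...
table = pa.table({"order_id": [1, 2, 3], "amount": [9.99, 24.50, 3.75]})
pq.write_table(table, "orders.parquet")

# ...and any Parquet-aware engine (Spark, DuckDB, Trino, pandas, ...)
# can read it back without a proprietary export API.
print(pq.read_table("orders.parquet").to_pydict())
```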


New OAIC AI Guidance Sharpens Privacy Act Rules, Applies to All Organizations

The new AI guidance outlines five key takeaways that require attention, and though the term “guidance” is used some of these constitute expansions of application of existing rules. The first of these is that Privacy Act requirements for personal information apply to AI systems, both in terms of user input and what the system outputs. ... The second AI guidance takeaway stipulates that privacy policies must be updated to have “clear and transparent” information about public-facing AI use. The third takeaway notes that the generation of images of real people, whether it be due to a hallucination or intentional creation of something like a deepfake, are also covered by personal information privacy rules. The fourth AI guidance takeaway states that any personal information input into AI systems can only be used for the primary purpose for which it was collected, unless consent is collected for other uses or those secondary uses can be reasonably expected to be necessary. The fifth and final takeaway is perhaps a case of burying the lede; the OAIC simply suggests that organizations not collect personal information through AI systems at all due to the ” significant and complex privacy risks involved.”


DevOps Moves Beyond Automation to Tackle New Challenges

“The future of DevOps is DevSecOps,” Jonathan Singer, senior product marketing manager at Checkmarx, told The New Stack. “Developers need to consider high-performing code as secure code. Everything is code now, and if it’s not secure, it can’t be high-performing,” he added. Checkmarx is an application security vendor that allows enterprises to secure their applications from the first line of code to deployment in the cloud, Singer said. The DevOps perspective has to be the same as the application security perspective, he noted. Some people think of seeing the environment around the app, but Checkmarx thinks of seeing the code in the application and making sure it’s safe and secure when it’s deployed, he added. “It might look like the security teams are giving more responsibility to the dev teams, and therefore you need security people in the dev team,” Singer said. Checkmarx is automating the heavy mental lifting by prioritizing and triaging scan results. With the amount of code, especially for large organizations, finding ten thousand vulnerabilities is fairly common, but they will have different levels of severity. If a vulnerability is not exploitable, you can knock it out of the results list. “Now we’re in the noise reduction game,” he said.


How Quantum Machine Learning Works

While quantum computing is not the most imminent trend data scientists need to worry about today, its effect on machine learning is likely to be transformative. “The really obvious advantage of quantum computing is the ability to deal with really enormous amounts of data that we can't really deal with any other way,” says Fitzsimons. “We've seen the power of conventional computers has doubled effectively every 18 months with Moore's Law. With quantum computing, the number of qubits is doubling about every eight to nine months. Every time you add a single qubit to a system, you double its computational capacity for machine learning problems and things like this, so the computational capacity of these systems is growing double exponentially.” ... Quantum-inspired software techniques can also be used to improve classical ML, such as tensor networks that can describe machine learning structures and improve computational bottlenecks to increase the efficiency of LLMs like ChatGPT. “It’s a different paradigm, entirely based on the rules of quantum mechanics. It’s a new way of processing information, and new operations are allowed that contradict common intuition from traditional data science,” says Orús.
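The scaling claim can be written out explicitly; this is a sketch of the arithmetic implied by the quote, with T the eight-to-nine-month qubit-doubling period.

```latex
% An n-qubit register spans a state space of dimension 2^n,
% so adding one qubit doubles the capacity:
\dim \mathcal{H}_n = 2^{n}, \qquad \dim \mathcal{H}_{n+1} = 2 \cdot 2^{n}.
% If the qubit count itself doubles every T months, n(t) = n_0 \, 2^{t/T},
% then the capacity grows double exponentially:
\dim \mathcal{H}_{n(t)} = 2^{\, n_0 \, 2^{t/T}}.
```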



Quote for the day:

"I find that the harder I work, the more luck I seem to have." -- Thomas Jefferson

Daily Tech Digest - November 04, 2024

How AI Is Driving Data Center Transformation - Part 3

According to AFCOM's 2024 State of Data Center Report, AI is already having a major influence on data center design and infrastructure. Global hyperscalers and data center service providers are increasing their capacity to support AI workloads. This has a direct impact on power and cooling requirements. In terms of power, the average rack density is expected to rise from 8.5 kW per rack in 2023 to 12 kW per rack by the end of 2024, with 55% of respondents expecting higher rack density in the next 12 to 36 months. As GPUs are fitted into these racks, servers will generate more heat, increasing both power and cooling requirements. The optimal temperature for operating a data center hall is between 21 and 24°C (69.8 - 75.2°F), which means that any increase in rack density must be accompanied by improvements in cooling capabilities. ... The efficiency of a data center is measured by a metric called power usage effectiveness (PUE), the ratio of the total amount of power used by a data center to the power used by its computing equipment. To be more efficient, data center providers aim to reduce their PUE rating and bring it closer to 1. A way to achieve that is to reduce the power consumed by the cooling units through advanced cooling technologies.
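The metric itself is a simple ratio, shown here with assumed figures for a worked example.

```latex
\mathrm{PUE} = \frac{\text{total facility power}}{\text{IT equipment power}},
\qquad \text{e.g.} \quad
\mathrm{PUE} = \frac{1.5\,\mathrm{MW}}{1.2\,\mathrm{MW}} = 1.25
```

In this hypothetical facility, every watt of compute carries 0.25 W of cooling and other overhead; a perfectly efficient site would approach a PUE of 1.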


The Intellectual Property Risks of GenAI

Boards and C-suites that have not yet had discussions about the potential risks of GenAI need to start now. “Employees can use and abuse generative AI even when it is not available to them as an official company tool. It can be really tempting for a junior employee to rely on ChatGPT to help them draft formal-sounding emails, generate creative art for a PowerPoint presentation and the like. Similarly, some employees might find it too tempting to use their phone to query a chatbot regarding questions that would otherwise require intense research,” says Banner Witcoff’s Sigmon. “Since such uses don’t necessarily make themselves obvious, you can’t really figure out if, for example, an employee used generative AI to write an email, much less if they provided confidential information when doing so. This means that companies can be exposed to AI-related risk even when, on an official level, they may not have adopted any AI.” ... “As is the case with the use of technology within any large organization, successful implementation involves a careful and specific evaluation of the tech, the context of use, and its wider implications including intellectual property frameworks, regulatory frameworks, trust, ethics and compliance,” says Raeburn in an email interview. 


The 10x Developer vs. AI: Will Tech’s Elite Coder Be Replaced?

We’re seeing AI tools that can smash out in minutes complex coding tasks that would take even your best senior devs hours. At Cosine, we’ve seen this firsthand with our AI, Genie. Many of the tasks we tested were in the four to six-hour range, and Genie could complete them in four to six minutes. It’s a genuinely superhuman thing to be able to solve problems that quickly. But here’s where it gets interesting. This isn’t just about raw output. The real mind-bender is that AI is starting to think like an engineer. It’s not just spitting out code — it’s solving problems. ... Looking more pragmatically at what AI could signal for career progression, there is a counterargument that junior developers won’t be exposed to the same level of problem-solving or acquire the same skill sets, given the availability of AI. This creates a complete headache for HR. How do you structure career progression when the traditional markers of seniority — years of experience, deep technical knowledge — might not mean as much? I think we’ll see a shift in focus. Companies will probably lean more on whether you fulfilled your sprint objectives and shipped what you wanted on time instead of going deeper. As for the companies themselves? Those who don’t get on board with AI coding tools will get left in the dust.


The 5 gears of employee well-being

Ritika is of the view that managing employees’ and organisational expectations requires clear communication from the leadership. “It offers employees a transparent view of the organisation's direction and highlights how their contributions drive Amway's success and growth. Our leadership prioritises transparency, ensuring that employees have a clear understanding of the organisation’s direction and how their individual and collaborative efforts contribute to collective goals. This approach fosters a strong sense of purpose and engagement while aligning with the vision and desired culture of the company.” She further calls for a robust feedback mechanism that gives employees an opportunity to share honest feedback on the areas that matter most and those that impact them. “We believe in the feedback flywheel; our bi-annual culture and employee engagement survey allows employees an opportunity to share feedback. Each round of feedback is followed by a cycle of sharing results and action planning.” She adds that frequent check-in conversations between the upline and team members ensure clarity of expectations; the performance management system ensures there are three formal check-in conversations that are focused on coaching and development and not ‘judgement’.


Agentic AI swarms are headed your way

OpenAI launched an experimental framework last month called Swarm. It’s a “lightweight” system for the development of agentic AI swarms, which are networks of autonomous AI agents able to work together to handle complex tasks without human intervention, according to OpenAI. Swarm is not a product. It’s an experimental tool for coordinating or orchestrating networks of AI agents. The framework is open source under the MIT license and available on GitHub. ... One way to look at agentic AI swarming technology is that it’s the next powerful phase in the evolution of generative AI (genAI). In fact, Swarm is built on OpenAI’s Chat Completions API, which uses LLMs like GPT-4. The API is designed to facilitate interactive “conversations” with AI models. It allows developers to create chatbots, interactive agents, and other applications that can engage in natural language conversations. Today, developers are creating what you might call one-off AI tools that do one specific task. Agentic AI would enable developers to create a large number of such tools, each specializing in a different task, and then let any one tool dragoon the others into service whenever an agent decides a task would be better handled elsewhere.
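
A minimal handoff sketch, closely following the examples in the openai/swarm GitHub repository (the agent names, instructions, and user message here are illustrative):

    from swarm import Swarm, Agent  # pip install git+https://github.com/openai/swarm.git

    client = Swarm()

    def transfer_to_refund_agent():
        """Returning another Agent from a function hands the conversation off."""
        return refund_agent

    triage_agent = Agent(
        name="Triage Agent",
        instructions="Route the user to the right specialist.",
        functions=[transfer_to_refund_agent],
    )

    refund_agent = Agent(
        name="Refund Agent",
        instructions="Help the user process a refund.",
    )

    response = client.run(
        agent=triage_agent,
        messages=[{"role": "user", "content": "I want a refund."}],
    )
    print(response.agent.name)               # which agent ended up handling the request
    print(response.messages[-1]["content"])

Each function an agent exposes becomes a tool the model can call; returning an Agent from one of those functions is how Swarm expresses the kind of handoff described above.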


How To Develop Emerging Leaders In Your Organization

Mentorship and coaching are critical for unlocking the leadership potential of emerging talent. By pairing less experienced employees with seasoned leaders, companies provide invaluable hands-on learning experiences beyond formal training programs. These relationships allow future leaders to observe high-level decision-making in action, receive personalized feedback, and cultivate their leadership instincts in real-world scenarios. ... While technical skills are essential, leadership success depends heavily on soft skills like emotional intelligence, communication, and adaptability. These skills help leaders navigate team dynamics, inspire trust, and handle organizational challenges with confidence. Workshops, problem-solving exercises, and leadership programs are effective for developing these abilities. ... Leadership development can’t happen in a vacuum. One of the most effective ways to accelerate growth is through “stretch assignments,” opportunities that push employees beyond their comfort zones by challenging them with responsibilities that test their leadership abilities. These assignments expose future leaders to high-stakes decision-making, cross-functional collaboration, and strategic thinking, all of which prepare them for the demands of more senior roles.


CIOs look to sharpen AI governance despite uncertainties

There is no dearth of AI governance frameworks available from the US government, the European Union, and top market researchers, but as gen AI innovation outpaces formal standards, CIOs will no doubt need to enact and hone internal AI governance policies in 2025 — and enlist the entire C-suite in the process to ensure they are not on the hook alone, observers say. ... “Governance is really about listening and learning from each other as we all care about the outcome, but equally as important, how we get to the outcome itself,” Williams says. “Once you cross that bridge, you can quickly pivot into AI tools and the actual projects themselves, which is much easier to maneuver.” TruStone Financial Credit Union is also grappling with establishing a comprehensive AI governance program as AI innovation booms. “New generative AI platforms and capabilities are emerging every week. When we discover them, we block access until we can thoroughly evaluate the effectiveness of our controls,” says Gary Jeter, EVP and CTO at TruStone, noting, as an example, that he decided to block access to Google’s NotebookLM initially to assess its safety. Like many enterprises, TruStone has deployed a companywide generative AI platform for policies and procedures, branded as TruAssist.


Design strategies in the white space ecosystem

AI compute cabinets can weigh up to 4,800 pounds, raising concerns about floor load capacity. Raised floors offer flexibility for cabling, cooling, and power management but may struggle with the weight demands of high-density setups. Slab floors are sturdier but come with their own design and cost challenges, particularly for liquid cooling, which can pose risks if leaks occur. This isn’t just a financial concern – it’s also about safety. “As we integrate various trades and systems into the same space with multiple teams working alongside each other, safety becomes paramount. Proper structural load assessments and seismic bracing, especially in earthquake-prone areas, are essential to ensure the raised floor can handle the weight,” Willis emphasizes. ... As the landscape of high-performance computing continues to grow and evolve, so too do the designs of data center cabinets. These changes are driven by the need for deeper and wider cabinets that can support a greater number of power distribution units (PDUs) and cabling. The emphasis is not just on accommodating equipment, but also on optimizing space and power capacity to avoid the network distance limitations that can arise when cabinets become too wide.
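
A back-of-the-envelope check in Python makes the floor-load concern concrete; the footprint and floor rating below are illustrative assumptions, not vendor specifications:

    # Illustrative assumptions: a roughly 2 ft x 4 ft cabinet footprint and a
    # raised floor rated for a 500 lb/sq ft uniform load.
    cabinet_weight_lb = 4800          # heaviest AI compute cabinets cited above
    footprint_sqft = 2 * 4            # assumed cabinet footprint
    floor_rating_lb_per_sqft = 500    # assumed raised-floor rating

    load = cabinet_weight_lb / footprint_sqft
    print(f"{load:.0f} lb/sq ft vs. rated {floor_rating_lb_per_sqft} lb/sq ft")
    # 600 lb/sq ft exceeds the assumed rating, so the load must be spread
    # (load-distribution plates, added pedestals) or the cabinet placed on slab.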


Costly and struggling: the challenges of legacy SIEM solutions

The main problem organizations face with legacy SIEM systems is the massive amount of unstructured data they produce, making it hard to spot signs of advanced threats such as ransomware and advanced persistent threat groups. “These systems were built primarily to detect known threats using signature-based approaches, which are insufficient against today’s sophisticated, constantly evolving attack techniques,” Young says. “Modern threats often employ subtle tactics that require advanced analytics, behavior-based detection, and proactive correlation across multiple data sources — capabilities that many legacy SIEMs lack.” In addition, legacy SIEM systems typically don’t support automated threat intelligence feeds, which are crucial for staying ahead of emerging threats, according to Young. “They also lack the ability to integrate with security orchestration, automation, and response tools, which help automate responses and streamline incident management.” Without these modern features, legacy SIEMs often miss important warning signs of attacks and have trouble connecting different threat signals, making organizations more exposed to complex, multi-stage attacks. Mellen says SIEMs are only as good as the work that companies put into them, which is the predominant feedback she’s received over the years from many practitioners.
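
A toy sketch of the signature-versus-behavior distinction; the log strings, signature list, and threshold are all hypothetical:

    # Toy contrast between signature-based and behavior-based detection.
    KNOWN_BAD = {"mimikatz.exe", "psexec -s"}  # hypothetical signature list

    def signature_hit(event: str) -> bool:
        """Flags only events containing a known-bad indicator."""
        return any(sig in event for sig in KNOWN_BAD)

    def behavior_hit(logins_last_hour: int, baseline_mean: float, baseline_std: float) -> bool:
        """Flags activity more than 3 standard deviations above a user's baseline."""
        return logins_last_hour > baseline_mean + 3 * baseline_std

    print(signature_hit("cmd.exe /c mimikatz.exe"))           # True: known tool
    print(signature_hit("renamed_dump_tool.exe"))             # False: novel threat slips past
    print(behavior_hit(42, baseline_mean=5, baseline_std=2))  # True: anomalous, no signature needed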


Why Effective Fraud Prevention Requires Contact Data Quality Technology

From our experience, the quality of contact data is essential to the effectiveness of ID processes, influencing everything from end-to-end fraud prevention to the delivery of simple ID checks; when contact data is accurate, more advanced and costly techniques, like biometrics and liveness authentication, may not be necessary. The verification process becomes more reliable when a customer’s contact information, such as name, address, email, and phone number, is accurate. With this data, ID verification technology can confidently cross-reference the provided information against official databases or other authoritative sources, without discrepancies that could lead to false positives or negatives. A growing issue is fraudsters exploiting inaccuracies in contact data to create false identities and manipulate existing ones. By maintaining clean and accurate contact data, ID verification systems can more effectively detect suspicious activity and prevent fraud. For example, inconsistencies in a user’s phone or email, or an address linked to multiple identities, could serve as a red flag for additional scrutiny.
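
The “address linked to multiple identities” red flag can be sketched as a simple grouping check; the records, field names, and threshold below are illustrative:

    from collections import defaultdict

    # Hypothetical records; in practice these come from verified contact databases.
    records = [
        {"name": "A. Smith", "address": "1 High St"},
        {"name": "B. Jones", "address": "1 High St"},
        {"name": "C. Patel", "address": "1 High St"},
        {"name": "D. Lee",   "address": "9 Oak Ave"},
    ]

    identities_per_address = defaultdict(set)
    for record in records:
        identities_per_address[record["address"]].add(record["name"])

    MAX_IDENTITIES = 2  # illustrative threshold for one address
    for address, names in identities_per_address.items():
        if len(names) > MAX_IDENTITIES:
            print(f"Red flag: {address} linked to {len(names)} identities")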



Quote for the day:

“Disagree and commit is a really important principle that saves a lot of arguing.” -- Jeff Bezos