Daily Tech Digest - December 28, 2024

Forcing the SOC to change its approach to detection

Make no mistake, we are not talking about the application of AI in the usual sense when it comes to threat detection. Up until now, AI has seen Large Language Models (LLMs) used to do little more than summarise findings for reporting purposes in incident response. Instead, we are referring to the application of AI in its truer and broader sense, i.e. via machine learning, agents, graphs, hypergraphs and other approaches – and these promise to make detection both more precise and intelligible. Hypergraphs give us the power to connect hundreds of observations together to form likely chains of events. ... The end result is that the security analyst is no longer perpetually caught in firefighting mode. Rather than having to respond to hundreds of alerts a day, the analyst can use the hypergraphs and AI to detect and string together long chains of alerts that share commonalities and in so doing gain a complete picture of the threat. Realistically, adopting such an approach is expected to cut alert volumes by up to 90 per cent. But it doesn’t end there. By applying machine learning to the chains of events, it will be possible to prioritise response, identifying which threats require immediate triage.
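
To make the chaining idea concrete, here is a minimal sketch (not any vendor's implementation) of how alerts that share entities such as hosts or users can be strung together into chains using a simple union-find structure; the alert fields are illustrative.

```python
from collections import defaultdict

# Treat each alert as a hyperedge connecting the entities (hosts, users,
# hashes) it mentions; alerts sharing any entity land in the same chain.

def chain_alerts(alerts):
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i, alert in enumerate(alerts):
        key = ("alert", i)
        parent.setdefault(key, key)
        for entity in alert["entities"]:  # link the alert to each entity
            union(key, ("entity", entity))

    chains = defaultdict(list)
    for i, alert in enumerate(alerts):
        chains[find(("alert", i))].append(alert)
    return list(chains.values())

alerts = [
    {"rule": "suspicious_login", "entities": ["host-7", "alice"]},
    {"rule": "priv_escalation", "entities": ["host-7"]},
    {"rule": "data_exfil", "entities": ["host-9", "bob"]},
]
for chain in chain_alerts(alerts):
    print([a["rule"] for a in chain])  # two chains: host-7 activity vs host-9
```

Grouping hundreds of alerts this way is what lets an analyst review a handful of chains instead of a wall of individual alerts.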


Sole Source vs. Single Source Vendor Management

A sole source is the only vendor that can provide a specific product or service to your company. This vendor makes a specific widget or delivers a service that is custom tailored to your company’s needs. If there is an event at this sole source provider, your company can only wait until the event has been resolved. There is no other vendor that can produce your product or service quickly. They are the sole source, on a critical path to your operations. From an oversight and assessment perspective, this can be a difficult relationship in which to mitigate risks to your company. With sole source companies, we as practitioners must do a deeper dive from a risk assessment perspective. From a vendor audit perspective, we need to go into more detail about how robust their business continuity, disaster recovery, and crisis management programs are. ... A single source provider is the one company you choose to do business with for a product or service, even though other providers could supply the same product or service. An example of a single source provider is a payment processing company. There are many to choose from, but you chose one specific company to do business with. Moving to a new single source provider can be a daunting task that involves a new RFP process, process integration, assessments of their business continuity program, etc.


Central Africa needs traction on financial inclusion to advance economic growth

Beyond the infrastructure, financial inclusion would see a leap forward in CEMAC if the right policies and platforms exist. “The number two thing is that you have to have the right policies in place which are going to establish what would constitute acceptable identity authentication for identity transactions. So, be it for onboarding or identity transactions, you have to have a policy. Saying that we’re going to do biometric authentication for every transaction, no matter what value it is and what context it is, doesn’t make any sense,” Atick holds. “You have to have a policy that is basically a risk-based policy. And we have lots of experience in that. Some countries started with their own policies, and over time, they started to understand it. Luckily, there is a lot of knowledge now that we can share on this point. This is why we’re doing the Financial Inclusion Symposium at the ID4Africa Annual General Meeting next year [in Addis Ababa], because these countries are going to share their knowledge and experiences.” “The symposium at the AGM will basically be on digital identity and finance. It’s going to focus on the stages of financial inclusion, and what are the risk-based policies countries must put in place to achieve the desired outcome, which is a low-cost, high-robustness and trustworthy ecosystem that enables anybody to enter the system and to conduct transactions securely.”


2025 Data Outlook: Strategic Insights for the Road Ahead

By embracing localised data processing, companies can turn compliance into an advantage, driving innovations such as data barter markets and sovereignty-specific data products. Data sovereignty isn’t merely a regulatory checkbox—it’s about Citizen Data Rights. With most consumer data being unstructured and often ignored, organisations can no longer afford complacency. Prioritising unstructured data management will be crucial as personal information needs to be identified, cataloged, and protected at a granular level from inception through intelligent, policy-based automation. ... Individuals are gaining more control over their personal information and expect transparency, control, and digital trust from organisations. As a result, businesses will shift to self-service data management, enabling data stewards across departments to actively participate in privacy practices. This evolution moves privacy management out of IT silos, embedding it into daily operations across the organisation. Organisations that embrace this change will implement a “Data Democracy by Design” approach, incorporating self-service privacy dashboards, personalised data management workflows, and Role-Based Access Control (RBAC) for data stewards. 
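
As a rough illustration of what RBAC for data stewards could look like, the sketch below maps steward roles to permissions over data domains; the role and permission names are invented for the example, not drawn from any product.

```python
# Minimal RBAC sketch: each role carries a set of (domain, action) grants.
# Role, domain, and action names here are hypothetical.
ROLE_PERMISSIONS = {
    "hr_steward": {("hr", "read"), ("hr", "classify"), ("hr", "redact")},
    "finance_steward": {("finance", "read"), ("finance", "classify")},
    "auditor": {("hr", "read"), ("finance", "read")},
}

def is_allowed(role: str, domain: str, action: str) -> bool:
    """Check whether a steward role may perform an action on a data domain."""
    return (domain, action) in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("hr_steward", "hr", "redact")
assert not is_allowed("finance_steward", "hr", "read")  # no cross-domain access
```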


Defining & Defying Cybersecurity Staff Burnout

According to the van Dam article, burnout happens when an employee buries their experience of chronic stress for years. The people who burn out are often formerly great performers, perfectionists who exhibit perseverance. But if the person perseveres in a situation where they don't have control, they can experience the kind of morale-killing stress that, left unaddressed for months and years, leads to burnout. In such cases, "perseverance is not adaptive anymore and individuals should shift to other coping strategies like asking for social support and reflecting on one's situation and feelings," the article read. ... Employees sometimes scoff at the wellness programs companies put out as an attempt to keep people healthy. "Most 'corporate' solutions — use this app! attend this webinar! — felt juvenile and unhelpful," Eden says. And it does seem like many solutions fall into the same quick-fix category as home improvement hacks or dump dinner recipes. Christina Maslach's scholarly work attributed work stress to six main sources: workload, values, reward, control, fairness, and community. An even quicker assessment is promised by the Matches Measure from Cindy Muir Zapata. 


Revolutionizing Cloud Security for Future Threats

Is it possible that embracing Non-Human Identities can help us bridge the resource gap in cybersecurity? The answer is a definite yes. The cybersecurity field is chronically understaffed, and for firms to successfully safeguard their digital assets, they must be equipped to handle an infinite number of parallel tasks. This demands a new breed of solutions such as NHIs and Secrets Security Management that offer automation at a scale hitherto unseen. NHIs have the potential to take over tedious tasks like secret rotation, identity lifecycle management, and security compliance management. By automating these tasks, NHIs free up the cybersecurity workforce to concentrate on more strategic initiatives, thereby improving the overall efficiency of your security operations. Moreover, through AI-enhanced NHI Management platforms, we can provide better insights into system vulnerabilities and usage patterns, considerably improving context-aware security. Can the concept of Non-Human Identities extend its relevance beyond the IT sector? ... From healthcare institutions safeguarding sensitive patient data, financial services firms securing transactional data, travel companies protecting customer data, to DevOps teams looking to maintain the integrity of their codebases, the strategic relevance of NHIs is widespread.
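
As an illustration of the kind of tedium this automation can absorb, here is a hedged sketch of scheduled secret rotation; the `vault` object and its methods are assumptions standing in for whatever secrets store an organization actually uses.

```python
import secrets
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of automated secret rotation for a non-human identity.
# `vault` is a stand-in; its get_metadata/put/revoke_previous methods are
# assumed, not a real library's API.

MAX_AGE = timedelta(days=30)

def rotate_if_stale(vault, identity: str) -> bool:
    meta = vault.get_metadata(identity)              # assumed API
    age = datetime.now(timezone.utc) - meta["rotated_at"]
    if age < MAX_AGE:
        return False                                 # still fresh, nothing to do
    new_secret = secrets.token_urlsafe(32)           # generate a replacement
    vault.put(identity, new_secret)                  # store the new version
    vault.revoke_previous(identity)                  # invalidate the old one
    return True
```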


Digital Transformation: Making Information Work for You

Digital transformation is changing the organization from one state to another through the use of electronic devices that leverage information. Oftentimes, this entails process improvement and process reengineering to convert business interactions from human-to-human to human-to-computer-to-human. By introducing the element of the computer into human-to-human transactions, there is a digital breadcrumb left behind. This digital record of the transaction is important in making digital transformations successful and is the key to how analytics can enable more successful digital transformations. In a human-to-human interaction, information is transferred from one party to another, but it generally stops there. With the introduction of the digital element in the middle, the data is captured, stored, and available for analysis, dissemination, and amplification. This is where data analytics shines. If an organization stops at data storage, it is missing the lion’s share of the potential value of a digital transformation initiative. Organizations that focus only on collecting data from all their transactions and sinking this into a data lake often find that their efforts are in vain. They end up with a data swamp where data goes to die, and they never fully realize its potential value.


Secure and Simplify SD-Branch Networks

The traditional WAN relies on expensive MPLS connectivity and a hub-and-spoke architecture that backhauls all traffic through the corporate data centre for centralized security checks. This approach creates bottlenecks that interfere with network performance and reliability. In addition to users demanding fast and reliable access to resources, IoT applications need reliable WAN connections to leverage cloud-based management and big data repositories. ... To reduce complexity and appliance sprawl, SD-Branch consolidates networking and security capabilities into a single solution that provides seamless protection of distributed environments. It covers all critical branch edges, from the WAN edge to the branch access layer to a full spectrum of endpoint devices.


Breaking up is hard to do: Chunking in RAG applications

The most basic is to chunk text into fixed sizes. This works for fairly homogenous datasets that use content of similar formats and sizes, like news articles or blog posts. It’s the cheapest method in terms of the amount of compute you’ll need, but it doesn’t take into account the context of the content that you’re chunking. That might not matter for your use case, but it might end up mattering a lot. You could also use random chunk sizes if your dataset is a non-homogenous collection of multiple document types. This approach can potentially capture a wider variety of semantic contexts and topics without relying on the conventions of any given document type. Random chunks are a gamble, though, as you might end up breaking content across sentences and paragraphs, leading to meaningless chunks of text. For both of these types, you can apply the chunking method over sliding windows; that is, instead of starting new chunks at the end of the previous chunk, new chunks overlap the content of the previous one and contain part of it. This can better capture the context around the edges of each chunk and increase the semantic relevance of your overall system. The tradeoff is that it increases storage requirements and can introduce redundant information.
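
A minimal sketch of fixed-size chunking with a sliding window, assuming plain-text input and character-based sizes (token-based sizes work the same way):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100):
    """Fixed-size chunks over a sliding window: each chunk repeats the last
    `overlap` characters of the previous one to preserve edge context."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = "lorem ipsum dolor sit amet " * 200   # stand-in for a real document
chunks = chunk_text(doc, chunk_size=500, overlap=100)
print(len(chunks), len(chunks[0]))
# Each 500-char chunk shares 100 chars with its neighbour: that duplication
# is exactly the storage cost the sliding window trades for better context.
```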


What is quantum supremacy?

A definitive achievement of quantum supremacy will require either a significant reduction in quantum hardware's error rates or a better theoretical understanding of what kind of noise classical approaches can exploit to help simulate the behavior of error-prone quantum computers, Fefferman said. But this back-and-forth between quantum and classical approaches is helping push the field forwards, he added, creating a virtuous cycle that is helping quantum hardware developers understand where they need to improve. "Because of this cycle, the experiments have improved dramatically," Fefferman said. "And as a theorist coming up with these classical algorithms, I hope that eventually, I'm not able to do it anymore." While it's uncertain whether quantum supremacy has already been reached, it's clear that we are on the cusp of it, Benjamin said. But it's important to remember that reaching this milestone would be a largely academic and symbolic achievement, as the problems being tackled are of no practical use. "We're at that threshold, roughly speaking, but it isn't an interesting threshold, because on the other side of it, nothing magic happens," Benjamin said. ... That's why many in the field are refocusing their efforts on a new goal: demonstrating "quantum utility," or the ability to show a significant speedup over classical computers on a practically useful problem.


Shift left security — Good intentions, poor execution, and ways to fix it

One of the first steps is changing the way security is integrated into development. Instead of focusing on a “gotcha”, after-the-fact approach, we need security to assist us as early as possible in the process: as we write the code. By guiding us as we’re still in ‘work-in-progress’ mode with our code, security can adopt a positive coaching and helping stance, nudging us to correct issues before they become problems and go clutter our backlog. ... The security tools we use need to catch vulnerabilities early enough so that nobody circles back to fix boomerang issues later. Very much in line with my previous point, detecting and fixing vulnerabilities as we code saves time and preserves focus. This also reduces the back-and-forth in peer reviews, making the entire process smoother and more efficient. By embedding security more deeply into the development workflow, we can address security issues without disrupting productivity. ... When it comes to security training, we need a more focused approach. Developers don’t need to become experts in every aspect of code security, but we do need to be equipped with the knowledge that’s directly relevant to the work we’re doing, when we’re doing it — as we code. Instead of broad, one-size-fits-all training programs, let’s focus on addressing specific knowledge gaps we personally have. 
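
One lightweight way to catch issues while code is still work-in-progress is a pre-commit check. The toy hook below scans staged changes for likely hard-coded secrets; the patterns are illustrative only, not a real scanner's rule set.

```python
#!/usr/bin/env python3
# Toy pre-commit hook: block commits whose staged diff looks like it
# contains a hard-coded secret. Patterns are illustrative, not exhaustive.
import re
import subprocess
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID shape
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"]\w+"),  # inline credential
]

staged = subprocess.run(
    ["git", "diff", "--cached", "--unified=0"],
    capture_output=True, text=True, check=True,
).stdout

hits = [p.pattern for p in SECRET_PATTERNS if p.search(staged)]
if hits:
    print(f"possible secrets detected ({hits}); commit blocked")
    sys.exit(1)  # a nonzero exit makes git abort the commit
```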



Quote for the day:

“Whenever you see a successful person, you only see the public glories, never the private sacrifices to reach them.” -- Vaibhav Shah

Daily Tech Digest - December 27, 2024

Software-Defined Vehicles: Onward and Upward

"SDV is about building efficient methodologies to develop, test and deploy software in a scalable way," he said. AWS, through initiatives such as The Connected Vehicle Systems Alliance and standardized protocols such as Vehicle Signal Specification, is helping OEMs standardize vehicle communication. This approach reduces the complexity of vehicle software and enables faster development cycles. BMW's virtualized infotainment system, built using AWS cloud services, is a use case of how standardization and cloud technology enable more efficient development. ... Gen AI, according to Marzani, is the next and most fascinating frontier for automotive innovation. AWS has already begun integrating AI into vehicle design and user experiences. It is helping OEMs develop in-car assistants that can provide real-time, context-aware information, such as interpreting warning signals or offering maintenance advice. But Marzani cautioned against deploying such systems without rigorous testing. "If an assistant misinterprets a warning and gives incorrect advice, the consequences could be severe. That's why we test these models in virtualized environments before deploying them in real-world scenarios." 


The End of Dashboard Frustration: AI Powers New Era of Analytics

Enterprises can tackle the workflow friction challenge by embedding analytics directly into users' existing applications. Most applications these days are delivered on a SaaS basis, which means a web browser is the primary interface for employees' daily workflow. With the assistance of a browser plug-in, keywords can be highlighted to show critical information about any business entity, from customer profiles to product details, making data instantly accessible within the user's natural workflow. There's no need to open another application and lose time on task switching — the data is automatically presented within the natural course of an employee's operations. To address varying levels of data expertise, enterprises can take a hybrid approach that combines the natural language capabilities of large language models (LLMs) with the precision of traditional BI tools. In this way, an AI-powered BI assistant can translate natural language queries into precise data analytics operations. Employees will no longer need to know how to form specific, technical queries to get the data they need. Instead, they can simply ask a bot using ordinary text, just as if they were interacting with a human being. 
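
A hedged sketch of that hybrid pattern: an LLM (stubbed out here) translates the employee's question into SQL, and a conventional query engine executes it behind a simple guardrail. The schema, prompt, and stub are invented for the example.

```python
import sqlite3

SCHEMA = "orders(id, customer, region, total, created_at)"  # illustrative schema

def ask_llm(prompt: str) -> str:
    """Stand-in for any LLM API call; the real call, plus schema grounding
    and SQL validation, is what the hybrid approach layers on top."""
    raise NotImplementedError

def answer_question(question: str, conn: sqlite3.Connection):
    prompt = (
        f"Translate to a single read-only SQL query over {SCHEMA}.\n"
        f"Question: {question}\nSQL:"
    )
    sql = ask_llm(prompt)
    # Guardrail: the LLM supplies convenience, the BI layer supplies precision.
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("only SELECT queries are executed")
    return conn.execute(sql).fetchall()
```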


The Intersection of AI and OSINT: Advanced Threats On The Horizon

Scammers and cybercriminals constantly monitor public information to collect insight on people, businesses and systems. They research social media profiles, public records, company websites, press releases, etc., to identify vulnerabilities and potential targets. What might seem like harmless information, such as a job change, a location-tagged photograph, stories in media, or online interests and affiliations, can be pieced together to build a comprehensive profile of a target, enabling threat actors to launch targeted social engineering attacks. And it’s not just social media that threat actors are tracking and monitoring. They are known to research things like leaked credentials, IP addresses, bitcoin wallet addresses, exploitable assets such as open ports, vulnerabilities in websites, internet-exposed devices such as Internet of Things (IoT), servers and more. A range of OSINT tools are easily available to discover information about a company’s employees, assets and other confidential information. While OSINT offers significant benefits to cybercriminals, there is also a real challenge of collecting and analyzing publicly available data. Sometimes information is easy to find; sometimes an extensive effort is needed to uncover loopholes and buried information.


The Expanding Dark Web Toolkit Using AI to Fuel Modern Phishing Attacks

Phishing is no longer limited to simple social engineering approaches; it has grown into a complex, multi-layered attack vector that employs dark web tools, AI, and undetectable malware. The availability of phishing kits and advanced cyber tools are making it easier than ever for novices to develop their malicious capabilities. Stopping these attacks can be tricky, given how convincing the websites and emails can appear to users. However, organizations and individuals must be vigilant in their efforts and continue to use regular security awareness training to educate users, employees, partners, and clients on the evolving dangers. All users should be reminded to never give out sensitive credentials to emails and never respond to unfamiliar links, phone calls, or messages received. Using a zero-trust architecture for continuous verification is essential while also maintaining vigilance when visiting websites or social media apps. Additionally, modern threat detection tools employing AI and advanced machine learning can help to understand incoming threats and immediately flag them ahead of user involvement. The use of MFA and biometric verification has a critical role to play, as do regular software updates and immediate patching of servers or loopholes/vulnerabilities. 


Infrastructure as Code in 2024: Why It’s Still So Terrible

The problem, Siva wrote, is “when a developer decides to replace a manually managed storage bucket with a third-party service alternative, the corresponding IaC scripts must also be manually updated, which becomes cumbersome and error-prone as projects scale. The desync that occurs between the application and its runtime can lead to serious security implications, where resources are granted far more permissions than they require or are left rogue and forgotten.” He added, “Infrastructure from Code automates the bits that were previously manual in nature. Whenever an application changes, IfC can help provision resources and configurations that accurately reflect its runtime requirements, eliminating much of the manual work typically involved.” ... The open source work around OpenTofu may point the way forward out of this mess. Or at least that is the view of industry observer Kelsey Hightower, who likened the open sourcing of Terraform to the opening of technologies that made the Internet possible, making OpenTofu the "HTTP of the cloud," wrote Ohad Maislish, CEO and co-founder of env0. "For Terraform technology to achieve universal HTTP-like adoption, it had to outgrow its commercial origins," Maislish wrote. "In other words: Before it could belong to everyone, it needed to be owned by no one."


CISA mandates secure cloud baselines for US agencies

The directive prescribes actionable measures such as the adoption of secure baselines, automated compliance tooling, and integration with security monitoring systems. These steps are in line with modern security models aimed at strengthening the security of the new attack surface presented by SaaS applications. Cory Michal highlighted both the practicality and challenges of the directive: "The requirements are reasonable, as the directive focuses on practical, actionable measures like adopting secure baselines, automated compliance tooling, and integration with security monitoring systems. These are foundational steps that align with modern SaaS and cloud security models following the Identify, Protect, Detect and Respond methodology, allowing organizations to embrace and secure this new attack surface." However, Michal also pointed out significant hurdles, including deadlines, funding, and skillset shortages, that agencies may face in complying with the directive. Many agencies may lack the skilled personnel and financial resources necessary to implement and manage these security measures. "Deadlines, lack of funding and lack of adequate skillsets will be the main challenges in meeting these requirements."


Data protection challenges abound as volumes surge and threats evolve

Data security experts say CISOs can cope with these changes by understanding the nature of the shifting landscape, implementing foundational risk management strategies, and reaching for new tools that better protect data and quickly identify when adverse data events are underway. Although the advent of artificial intelligence increases data protection challenges, experts say AI can also help fill in some of the cracks in existing data protection programs. ... Experts say that what most CISOs should consider in running their data protection platforms is a wide range of complex security strategies that involve identifying and classifying information based on its sensitivity, establishing access controls and encryption mechanisms, implementing proper authentication and authorization processes, adopting secure storage and transmission methods and continuously monitoring and detecting potential security incidents. ... However, before considering these highly involved efforts, CISOs must first identify where data exists within their organizations, which is no easy feat. “Discover all your data or discover the data in the important locations,” Benjamin says. “You’ll never be able to discover everything but discover the data in the important locations, whether in your office, in G Suite, in your cloud, in your HR systems, and so on. Discover the important data.”


How to Create an Enterprise-Wide Cybersecurity Culture

Cybersecurity culture planning requires a cross-organizational effort. While the CISO or CSO typically leads, the tone must be set from the top with active board involvement, Sullivan says. "The C-suite should integrate cybersecurity into business strategy, and key stakeholders from IT, legal, HR, finance, and operations must collaborate to address an ever-evolving threat landscape." She adds that engaging employees at all levels through continuous education will ensure that cybersecurity becomes everyone's responsibility. ... A big mistake many organizations make is treating cybersecurity as a separate initiative that's disconnected from the organization’s core mission, Sullivan says. "Cybersecurity should be recognized as a critical business imperative that requires board and C-suite-level attention and strategic oversight." Creating a healthy network security culture is an ongoing process that involves continuous learning, adaptation, and collaboration among teams, Tadmor says. This requires more thought than just setting policies -- it's also about integrating security practices into daily routines and workflows. "Regular training, open communication, and real-time monitoring are essential components to keep the culture alive and responsive to emerging network threats," he says.


What is serverless? Serverless computing explained

Serverless computing is an execution model for the cloud in which a cloud provider dynamically allocates only the compute resources and storage needed to execute a particular piece of code. Naturally, there are still servers involved, but the provider manages the provisioning and maintenance. ... Developers can focus on the business goals of the code they write, rather than on infrastructure questions. This simplifies and speeds up the development process and improves developer productivity. Organizations only pay for the compute resources they use in a very granular fashion, rather than buying physical hardware or renting cloud instances that mostly sit idle. That latter point is of particular benefit to event-driven applications that are idle much of the time but under certain conditions must handle many event requests at once. ... Serverless functions also must be tailored to the specific platform they run on. This can result in vendor lock-in and less flexibility. Although there are open source options available, the serverless market is dominated by the big three commercial cloud providers. Development teams often end up using tooling from their serverless vendor, which makes it hard to switch. 
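
For example, a serverless function is typically nothing more than a handler the provider invokes once per event, in the style of an AWS Lambda Python handler:

```python
# Minimal AWS-Lambda-style handler: the provider provisions compute per
# invocation, so the code expresses only the business logic for one event.
import json

def handler(event, context):
    # `event` carries the trigger payload (an HTTP request, queue message,
    # file upload notification, etc.); `context` carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because billing is per invocation and duration, code like this costs nothing while idle, which is exactly the fit for the bursty, event-driven workloads described above.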


How In-Person Banking Can Survive the Digital Age

Today’s consumer quite rightly expects banks not merely to support environmental and sustainable causes but to actively apply those principles in their work. Pioneers like The Co-operative Bank in the UK have been asking us to help them in this area for more than two decades, and the approach is spreading worldwide: We recently helped Saudi National Bank adopt best sustainability practice. There is much more that banks can do to integrate their digital and physical experiences in branch in the way that retailers and casual dining spaces are now doing. Indeed, banks could look more closely to hospitality for inspiration in many areas. ... There’s a slightly ironic conundrum that banks and credit unions would do well to consider: Banks don’t want branches, but they need them; customers don’t need branches, but they want them. Unlocking the potential and value here is about maintaining physical points of presence but re-inventing their role. They need to become venues not for ‘lower order’ basic transactional activities, as dominated their activity in the past; but for ‘higher order’ financial life support for communities and individuals. It’s the latter that explains why customers want branches even when there’s no apparent functional need.



Quote for the day:

"The only way to discover the limits of the possible is to go beyond them into the impossible." -- Arthur C. Clarke

Daily Tech Digest - December 26, 2024

Best Practices for Managing Hybrid Cloud Data Governance

Kausik Chaudhuri, CIO of Lemongrass, explains that monitoring in hybrid-cloud environments requires a holistic approach that combines strategies, tools, and expertise. “To start, a unified monitoring platform that integrates data from on-premises and multiple cloud environments is essential for seamless visibility,” he says. End-to-end observability enables teams to understand the interactions between applications, infrastructure, and user experience, making troubleshooting more efficient. ... Integrating legacy systems with modern data governance solutions involves several steps. Modern data governance systems, such as data catalogs, work best when fueled with metadata provided by a range of systems. “However, this metadata is often absent or limited in scope within legacy systems,” says Elsberry. Therefore, an effort needs to be made to create and provide the necessary metadata in legacy systems to incorporate them into data catalogs. Elsberry notes a common blocking issue is the lack of REST API integration. Modern data governance and management solutions typically have an API-first approach, so enabling REST API capabilities in legacy systems can facilitate integration. “Gradually updating legacy systems to support modern data governance requirements is also essential,” he says.
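
As a sketch of that API-first integration, the snippet below pushes hand-curated metadata for a legacy table into a catalog over REST; the endpoint, payload fields, and system names are assumptions, since every catalog's API differs.

```python
import requests  # assumes the catalog exposes a REST API

CATALOG_URL = "https://catalog.example.com/api/v1/assets"  # hypothetical endpoint

def register_legacy_table(table: str, owner: str, description: str):
    """Push hand-built metadata for a legacy table into a data catalog,
    compensating for the metadata the legacy system cannot emit itself."""
    payload = {
        "name": table,
        "source": "legacy-erp",      # illustrative source-system label
        "owner": owner,
        "description": description,
    }
    resp = requests.post(CATALOG_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()
```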


These Founders Are Using AI to Expose and Eliminate Security Risks in Smart Contracts

The vulnerabilities lurking in smart contracts are well-known but often underestimated. “Some of the most common issues include Hidden Mint functions, where attackers inflate token supply, or Hidden Balance Updates, which allow arbitrary adjustments to user balances,” O’Connor says. These aren’t isolated risks—they happen far too frequently across the ecosystem. ... “AI allows us to analyze huge datasets, identify patterns, and catch anomalies that might indicate vulnerabilities,” O’Connor explains. Machine learning models, for instance, can flag issues like reentrancy attacks, unchecked external calls, or manipulation of minting functions—and they do it in real-time. “What sets AI apart is its ability to work with bytecode,” he adds. “Almost all smart contracts are deployed as bytecode, not human-readable code. Without advanced tools, you’re essentially flying blind.” ... As blockchain matures, smart contract security is no longer the sole concern of developers. It’s an industry-wide challenge that impacts everyone, from individual users to large enterprises. DeFi platforms increasingly rely on automated tools to monitor contracts and secure user funds. Centralized exchanges like Binance and Coinbase assess token safety before listing new assets. 
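
To give a flavor of bytecode-level analysis, here is a toy feature extractor that tallies EVM opcodes (skipping PUSH immediates) and flags a few opcodes frequently involved in exploits; a real system would feed features like these into trained models rather than a hard-coded list.

```python
# Toy opcode-frequency extractor over EVM bytecode. Opcode values are from
# the EVM spec; the "suspicious" list is illustrative, not an audit rule.
SUSPICIOUS = {0xF1: "CALL", 0xF4: "DELEGATECALL", 0xFF: "SELFDESTRUCT"}

def opcode_counts(bytecode_hex: str) -> dict:
    code = bytes.fromhex(bytecode_hex.removeprefix("0x"))
    counts, i = {}, 0
    while i < len(code):
        op = code[i]
        counts[op] = counts.get(op, 0) + 1
        # PUSH1..PUSH32 (0x60..0x7F) carry 1..32 immediate data bytes: skip them.
        i += 1 + (op - 0x5F if 0x60 <= op <= 0x7F else 0)
    return counts

def flag(bytecode_hex: str):
    counts = opcode_counts(bytecode_hex)
    return [name for op, name in SUSPICIOUS.items() if counts.get(op)]

print(flag("0x6001600101ff"))  # toy bytecode ending in SELFDESTRUCT
```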


Three best change management practices to take on board in 2025

For change management to truly succeed, companies need to move from being change-resistant to change-ready. This means building up "change muscles" -- helping teams become adaptable and comfortable with change over the long term. For Mel Burke, VP of US operations at Grayce, the key to successful change is speaking to both the "head" and the "heart" of your stakeholders. Involve employees in the change process by giving them a voice and the ability to shape it as it happens. ... Change management works best when you focus on the biggest risks first and reduce the chance of major disruptions. Dedman calls this strategy "change enablement," where change initiatives are evaluated and scored on critical factors like team expertise, system dependencies, and potential customer impact. High-scorers get marked red for immediate attention, while lower-risk ones stay green for routine monitoring to keep the process focused and efficient. ... Peter Wood, CTO of Spectrum Search, swears by creating a "success signals framework" that combines data-driven metrics with culture-focused indicators. "System uptime and user adoption rates are crucial," he notes, "but so are team satisfaction surveys and employee retention 12-18 months post-change." 


Corporate Data Governance: The Cornerstone of Successful Digital Transformation

While traditional data governance focuses on the continuous and tactical management of data assets – ensuring data quality, consistency, and security – corporate data governance elevates this practice by integrating it with the organization’s overall governance framework and strategic objectives. It ensures that data management practices are not operating in silos but are harmoniously aligned and integrated with business goals, regulatory requirements, and ethical standards. In essence, corporate data governance acts as a bridge between data management and corporate strategy, ensuring that every data-related activity contributes to the organization’s mission and objectives. ... In the digital age, data is a critical asset that can drive innovation, efficiency, and competitive advantage. However, without proper governance, data initiatives can become disjointed, risky, and misaligned with corporate goals. Corporate data governance ensures that data management practices are strategically integrated with the organization’s mission, enabling businesses to leverage data confidently and effectively. By focusing on alignment, organizations can make better decisions, respond swiftly to market changes, and build stronger relationships with customers. 


What is an IT consultant? Roles, types, salaries, and how to become one

Because technology is continuously changing, IT consultants can provide clients with the latest information about new technologies as they become available, recommending implementation strategies based on their clients’ needs. As a result, for IT consultants, keeping the pulse of the technology market is essential. “Being a successful IT consultant requires knowing how to walk in the shoes of your IT clients and their business leaders,” says Scott Buchholz, CTO of the government and public services sector practice at consulting firm Deloitte. A consultant’s job is to assess the whole situation, the challenges, and the opportunities at an organization, Buchholz says. As an outsider, the consultant can see things clients can’t. ... “We’re seeing the most in-demand types of consultants being those who specialize in cybersecurity and digital transformation, largely due to increased reliance on remote work and increased risk of cyberattacks,” he says. In addition, consultants with program management skills are valuable for supporting technology projects, assessing technology strategies, and helping organizations compare and make informed decisions about their technology investments, Farnsworth says.


Blockchain + AI: Decentralized Machine Learning Platforms Changing the Game

Tech giants with vast computing resources and proprietary datasets have long dominated traditional AI development. Companies like Google, Amazon, and Microsoft have maintained a virtual monopoly on advanced AI capabilities, creating a significant barrier to entry for smaller players and independent researchers. However, the introduction of blockchain technology and cryptocurrency incentives is rapidly changing this paradigm. Decentralized machine learning platforms leverage blockchain's distributed nature to create vast networks of computing power. These networks function like a global supercomputer, where participants can contribute their unused computing resources in exchange for cryptocurrency tokens. ... The technical architecture of these platforms typically consists of several key components. Smart contracts manage the distribution of computational tasks and token rewards, ensuring transparent and automatic execution of agreements between parties. Distributed storage solutions like IPFS (InterPlanetary File System) handle the massive datasets required for AI training, while blockchain networks maintain an immutable record of transactions and model provenance.
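
A toy simulation of that incentive loop, with a Python class standing in for the on-chain contract and ledger (the real logic would live in a smart contract, and verification would be far more involved):

```python
from dataclasses import dataclass, field

@dataclass
class Network:
    """Stand-in for a task/reward smart contract on a decentralized ML network."""
    reward_per_task: int = 10
    balances: dict = field(default_factory=dict)
    ledger: list = field(default_factory=list)   # stands in for the blockchain

    def submit_result(self, worker: str, task_id: str, verified: bool):
        if verified:  # on-chain, verification would be contract logic
            self.balances[worker] = self.balances.get(worker, 0) + self.reward_per_task
            self.ledger.append((task_id, worker, self.reward_per_task))

net = Network()
net.submit_result("worker-1", "train-shard-17", verified=True)
print(net.balances)  # {'worker-1': 10}
```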


DDoS Attacks Surge as Africa Expands Its Digital Footprint

A larger attack surface, however, is not the only reason for the increased DDoS activity in Africa and the Middle East, Hummel says. "Geopolitical tensions in these regions are also fueling a surge in hacktivist activity as real-world political disputes spill over into the digital world," he says. "Unfortunately, hacktivists often target critical infrastructure like government services, utilities, and banks to cause maximum disruption." And DDoS attacks are by no means the only manifestation of the new threats that organizations in Africa are having to contend with as they broaden their digital footprint. ... Attacks on critical infrastructure and financially motivated attacks by organized crime are other looming concerns. In the center's assessment, Africa's government networks and networks belonging to the military, banking, and telecom sectors are all vulnerable to disruptive cyberattacks. Exacerbating the concern is the relatively high potential for cyber incidents resulting from negligence and accidents. Organized crime gangs — the scourge of organizations in the US, Europe, and other parts of the world — present an emerging threat to organizations in Africa, the center has assessed.


Optimizing AI Workflows for Hybrid IT Environments

Hybrid IT offers flexibility by combining the scalability of the cloud with the control of on-premises resources, allowing companies to allocate their resources more precisely. However, this setup also introduces complexity. Managing data flow, ensuring security, and maintaining operational efficiency across such a blended environment can become an overwhelming task if not addressed strategically. To manage AI workflows effectively in this kind of setup, businesses must focus on harmonizing infrastructure and resources. ... Performance optimization is crucial when running AI workloads across hybrid environments. This requires real-time monitoring of both on-premises and cloud systems to identify bottlenecks and inefficiencies. Implementing performance management tools allows for end-to-end visibility of AI workflows, enabling teams to proactively address performance issues before they escalate. ... Scalability also supports agility, which is crucial for businesses that need to grow and iterate on AI models frequently. Cloud-based services, in particular, allow teams to experiment and test AI models without being constrained by on-premises hardware limitations. This flexibility is essential for staying competitive in fields where AI innovation happens rapidly.


The Cloud Back-Flip

Cloud repatriation is driven by various factors, including high cloud bills, hidden costs, complexity, data sovereignty, and the need for greater data control. In markets like India—and globally—these factors are all relevant today, points out Vishal Kamani – Cloud Business Head, Kyndryl India. “Currently, rising cloud costs and complexity are part of the ‘learning curve’ for enterprises transitioning to cloud operations.” ... While cloud repatriation is not an alien concept anymore, such reverse migration back to on-premises data centres is seen happening only in organisations that are technology-driven and have deep tech expertise, observes Gaurang Pandya, Director, Deloitte India. “This involves them focusing back on the basics of IT infrastructure which does need a high number of skilled employees. The major driver for such reverse migration is increasing cloud prices and performance requirements. In an era of edge computing and 5G, each end system has now been equipped with much more computing resources than it ever had. This increases their expectations from various service providers.” Money is a big reason too, especially when you don’t know where it is going.


Why Great Programmers fail at Engineering

Being a good programmer is about mastering the details — syntax, algorithms, and efficiency. But being a great engineer? That’s about seeing the bigger picture: understanding systems, designing for scale, collaborating with teams, and ultimately creating software that not only works but excels in the messy, ever-changing real world. ... Good programmers focus on mastering their tools — languages, libraries, and frameworks — and take pride in crafting solutions that are both functional and beautiful. They are the “builders” who bring ideas to life one line of code at a time. ... Software engineering requires a keen understanding of design principles and system architecture. Great code in a poorly designed system is like building a solid wall in a crumbling house — it doesn’t matter how good it looks if the foundation is flawed. Many programmers struggle to: design systems for scalability and maintainability; think in terms of trade-offs, such as performance vs. development speed; and plan for edge cases and future growth. Software engineering is as much about people as it is about code. Great engineers collaborate with teams, communicate ideas clearly, and balance stakeholder expectations. ... Programming success is often measured by how well the code runs, but engineering success is about how well the system solves a real-world problem.



Quote for the day:

"Ambition is the path to success. Persistence is the vehicle you arrive in." -- Bill Bradley

Daily Tech Digest - December 25, 2024

The promise and perils of synthetic data

Synthetic data is no panacea, however. It suffers from the same “garbage in, garbage out” problem as all AI. Models create synthetic data, and if the data used to train these models has biases and limitations, their outputs will be similarly tainted. For instance, groups poorly represented in the base data will be so in the synthetic data. “The problem is, you can only do so much,” Keyes said. “Say you only have 30 Black people in a dataset. Extrapolating out might help, but if those 30 people are all middle-class, or all light-skinned, that’s what the ‘representative’ data will all look like.” To this point, a 2023 study by researchers at Rice University and Stanford found that over-reliance on synthetic data during training can create models whose “quality or diversity progressively decrease.” Sampling bias — poor representation of the real world — causes a model’s diversity to worsen after a few generations of training, according to the researchers. Keyes sees additional risks in complex models such as OpenAI’s o1, which he thinks could produce harder-to-spot hallucinations in their synthetic data. These, in turn, could reduce the accuracy of models trained on the data — especially if the hallucinations’ sources aren’t easy to identify.
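
The degradation the researchers describe is easy to reproduce in miniature: repeatedly fit a distribution to its own samples and diversity tends to shrink generation after generation. A small simulation, using a Gaussian as a toy "model":

```python
import random
import statistics

# Toy model-collapse demo: each generation is trained (fit) only on the
# previous generation's synthetic samples.
mu, sigma = 0.0, 1.0
for gen in range(1, 9):
    samples = [random.gauss(mu, sigma) for _ in range(25)]  # small sample = strong bias
    mu, sigma = statistics.fmean(samples), statistics.stdev(samples)
    print(f"generation {gen}: sigma = {sigma:.3f}")
# Each generation inherits the sampling error of the last; with small
# samples, sigma tends to drift downward, and lost diversity never returns.
```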


Federal Privacy Is Inevitable in The US (Prepare Now)

The writing’s on the wall for federal privacy. It’s simply not tenable to have almost half the states with varying privacy thresholds and the other half with nothing. Our interconnected business and digital ecosystems need certainty and consistency across the country. Congress can and should stand up for American privacy. The good news? Recent history shows that sweeping reforms are possible. From the CHIPS and Science Act to major pandemic stimulus, lawmakers have shown their ability to meet moments with big regulations. While states deserve credit for filling the privacy void, federal action must follow. For now, there’s no time to waste. Enterprises that build privacy-ready operations today will be better positioned to thrive under future regulations, maintain customer trust, and turn compliance into a competitive advantage. On the other hand, slow-to-move companies risk regulatory penalties and loss of customer confidence in an increasingly privacy-conscious marketplace. Future-forward organizations recognize that investing in privacy isn’t just about compliance; it’s about building a sustainable competitive advantage in the data-driven economy. The choice is clear: invest in privacy now or play catch-up when federal mandates arrive.


AI use cases are going to get even bigger in 2025

Few sectors stand to gain more from AI advancements than defense. “We are witnessing a surge in applications like autonomous drone swarms, electronic spectrum awareness, and real-time battlefield space management, where AI, edge computing, and sensor technologies are integrated to enable faster responses and enhanced precision,” says Meir Friedland, CEO at RF spectrum intelligence company Sensorz. ... “AI is transforming genome sequencing, enabling faster and more accurate analyses of genetic data,” Khalfan Belhoul, CEO at the Dubai Future Foundation, tells Fast Company. “Already, the largest genome banks in the U.K. and the UAE each have over half a million samples, but soon, one genome bank will surpass this with a million samples.” But what does this mean? “It means we are entering an era where healthcare can truly become personalized, where we can anticipate and prevent certain diseases before they even develop,” Belhoul says. ... The potential for AI extends far beyond the use cases dominating today’s headlines. As Friedland notes, “AI’s future lies in multi-domain coordination, edge computing, and autonomous systems.” These advancements are already reshaping industries like manufacturing, agriculture, and finance.


2025 Will Be the Year That AI Agents Transform Crypto

The value of AI agents lies not just in their utility but in their potential to scale human capabilities. Agents are no longer just tools — they are emerging as participants in the on-chain economy, driving innovation across finance, gaming and decentralized social platforms. With protocols such as Virtuals and open-source frameworks like ELIZA, it’s becoming increasingly simple for developers to build, deploy and iterate AI agents that serve an increasingly diverse set of use cases. ... Unlike the core foundational AI models that are developed behind the walled gardens of OpenAI and Anthropic, AI agents are being innovated in the trenches of the crypto world. And for good reason. Blockchains provide the ideal infrastructure as they offer permissionless and frictionless financial rails, enabling agents to seed wallets, transact and send funds autonomously — tasks that would be unfeasible using traditional financial systems. In addition, the open-source nature of crypto allows developers to leverage existing frameworks to launch and iterate on agents faster than ever before. With more no-code platforms like Top Hat gaining traction, it’s only getting easier for anyone to be able to launch an agent in minutes. 


Unpacking OpenAI's Latest Approach to Make AI Safer

OpenAI said it used an internal reasoning model to generate synthetic examples of chain-of-thought responses, each referencing specific elements of the company's safety policy. Another model, referred to as the "judge," evaluated these examples to meet quality standards. The approach looks to address the challenges of scalability and consistency, OpenAI said. Human-labeled datasets are labor-intensive and prone to variability, but properly vetted synthetic data can theoretically offer a scalable solution with uniform quality. The method can potentially optimize training and reduce the latency and computational overhead associated with the models reading lengthy safety documents during inference. OpenAI acknowledged that aligning AI models with human safety values remains a challenge. Users continue to develop jailbreak techniques to bypass safety restrictions, such as framing malicious requests in deceptive or emotionally charged contexts. The o3 series models scored better than its peers Gemini 1.5 Flash, GPT-4o and Claude 3.5 Sonnet on the Pareto benchmark, which measures a model's ability to resist common jailbreak strategies. But the results may be of little consequence, as adversarial attacks evolve alongside improvements in model defenses.
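
In outline, the pipeline OpenAI describes looks something like the sketch below; both model calls are stubs and the quality threshold is invented, since the actual models and rubric are not public.

```python
# Hedged sketch of the generate-then-judge pattern described above.
def generate_example(policy_excerpt: str) -> str:
    """Stub: a reasoning model drafts a chain-of-thought response that
    references the given safety-policy excerpt."""
    raise NotImplementedError

def judge(example: str) -> float:
    """Stub: a second 'judge' model scores the draft against quality standards."""
    raise NotImplementedError

def build_dataset(policy_excerpts, threshold: float = 0.8):
    dataset = []
    for excerpt in policy_excerpts:
        draft = generate_example(excerpt)
        if judge(draft) >= threshold:   # keep only vetted synthetic examples
            dataset.append({"policy": excerpt, "response": draft})
    return dataset
```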


The yellow brick road to agentic AI

Many believe this AI era is the most profound we’ve ever seen in tech. We agree and liken it to mobile’s role in driving on-premises workloads to the cloud and disrupting information technology. But we see this as even more impactful. For AI agents to work, however, we have to reinvent the software stack and break down 50 years of silo building. The emergence of data lakehouses is not the answer as they are just a bigger siloed asset. Rather, software as a service as we know it will be reimagined. Two prominent chief executives agree. At Amazon Web Services Inc.’s recent AWS re:Invent conference, we sat down with Amazon.com Inc. CEO Andy Jassy. ... There is a clear business imperative behind this shift. We believe companies will differentiate themselves by aligning end-to-end operations with a unified set of plans — from three-year strategic assumptions about demand to real-time, minute-by-minute decisions, such as how to pick, pack and ship individual orders to meet long-term goals. The function of management has always involved planning and resource allocation across various timescales and geographies, but previously there was no software capable of executing on these plans seamlessly across every time horizon.


The AI backlash couldn’t have come at a better time

Developers, engineers, operations personnel, enterprise architects, IT managers, and others need AI to be as boring for them as it has become for consumers. They need it not to be a “thing,” but rather something that is managed and integrated seamlessly into — and supported by — the infrastructure stack and the tools they use to do their jobs. They don’t want to endlessly hear about AI; they just want AI to seamlessly work for them so it just works for customers. ... The models themselves are also, rightly, growing more mainstream. A year ago they were anything but, with talk of potentially gazillions of parameters and fears about the legal, privacy, financial, and even environmental challenges such a data abyss would create. Those LLMs are still out there, and still growing, but many organizations are looking for their models to be far less extreme. They don’t need (or want) a model that includes everything anyone ever learned about anything; rather, they need models that are fine-tuned with data that is relevant to the business, that don’t necessarily require state-of-the-art GPUs, and that promote transparency and trust. As Matt Hicks, CEO of Red Hat, put it, “Small models unlock adoption.”


Systems Thinking in Leading Transformation for the Future

The first step is aligning your internal goals with your external insights. Leaders must articulate a clear vision that ties the organization's purpose to broader societal and industry trends. For Nooyi and PepsiCo, that meant “starting from the outside.” Nooyi tasked her senior leaders with identifying external factors that would likely impact the company. She said, “They pointed to several megatrends … including a preoccupation with health and wellness, scarcity of water and other natural resources, constraints created by global climate change … and a talent market characterized by shortages of key people.” ... Systems thinking involves understanding the interdependencies within and outside an organization. For example, if you are embarking on any transformation project, you’ll likely need to explore new partnerships with suppliers and regional authorities and regulators. ... Using frameworks like OKRs (Objectives and Key Results), you can evaluate how each initiative within your transformation program contributes to the overarching objective. For example, a laudable main aim such as a commitment to environmental sustainability would likely involve numerous associated projects: for example, water conservation, waste reduction, and reduced carbon footprint.


The 2024 cyberwar playbook: Tricks used by nation-state actors

While nation-state actors loved zero days for swift break-ins, phishing remained a sly plan B. It let them craft sneaky schemes to worm into systems, proving that 2024 was the year of both bold strikes and artful cons. Russian nation-state actors leaned heavily on phishing in 2024, with other APTs, like Iranian and Pakistani groups, dabbling in the tactic as well. The following are some of the standout campaigns from 2024 where phishing was the go-to for initial access. ... While credential harvesting through malware delivered via phishing was fairly common, nation-state actors rarely resorted to scavenging credentials from hack forums or drop sites as a primary tactic. When asked, Hughes noted, “I’m not familiar with this being the primary MO by the APTs, who instead are targeting devices, products and vendors with vulnerabilities and misconfigurations, but once inside, they do compromise credentials and use those to pivot, move laterally, persist in environments and more.” ... These actors weren’t always about flashy, custom malware. Quite often, they used legit tools like PowerShell, rootkits, RDP, and other off-the-shelf system features to sneak in, stay undetected, and set up long-term access. This made their attacks stealthy, persistent, and ready for future moves. 


Generative AI is now a must-have tool for technology professionals

As part of this trend, "we are witnessing developers shift from writing code to orchestrating AI agents," said Jithin Bhasker, general manager and vice president at ServiceNow. The efficiency gained from gen AI adoption by technologists isn't just about personal productivity; it's urgent "with the projected shortage of half a million developers by 2030 and the need for a billion new apps," he added. ... Still, as gen AI becomes a commonplace tool in technology shops, Berent-Spillson advises caution. "The real game-changer here is speed, but there's a catch," he said. "While AI can dramatically compress cycle time, it will also amplify any existing process constraints. Think of it like adding a supercharger to your car -- if your chassis isn't solid, you're just going to get to the problem faster." Exercise caution "regarding code quality, maintainability, and IP considerations," McDonagh-Smith advises. "While syntactically correct, AI tools have been seen to create code that's logically flawed or inefficient, leading to potential code degradation over time if not reviewed carefully. We should also guard against software sprawl where the ease of creating AI-generated code results in overly complex or unnecessary code that might make projects more difficult to maintain over time."



Quote for the day:

"Difficulties in life are intended to make us better, not bitter." -- Dan Reeves

Daily Tech Digest - December 24, 2024

Concerns over the security of electronic personal health information intensifies

When entities outside HIPAA’s purview experience breaches, the Federal Trade Commission (FTC) Health Breach Notification Rule applies. However, this dual system creates confusion among stakeholders, who must navigate overlapping jurisdictions. The lack of a unified, comprehensive framework exacerbates the problem, leaving patients uncertain about the security of their health data. Another pressing concern is the cybersecurity of medical devices. Many modern medical devices connect to networks or the internet, increasing their susceptibility to cyberattacks. Hospitals often operate thousands of interconnected devices, making it challenging to monitor and secure every endpoint. Insecure devices not only endanger patient privacy but also jeopardize care delivery. For instance, a compromised infusion pump or defibrillator could have life-threatening consequences. The Food and Drug Administration (FDA) has taken steps to address these vulnerabilities through premarket and post-market cybersecurity guidelines. However, the onus of ensuring device security often falls into a gray area between manufacturers and healthcare providers. 


The rise of “soft” skills: How GenAI is reshaping developer roles

The successful developer in this evolving landscape will be one who can effectively combine technical expertise with strong interpersonal skills. This includes not only the ability to work with AI tools but also the capability to collaborate with both technical and non-technical stakeholders. After all, with less of a need for coders to do the low-level, routine work of software development, more emphasis will be placed on coders’ ability to collaborate with business managers to understand their goals and create technology solutions that will advance them. Additionally, the coding that they’ll be doing will be more complex and high-level, often requiring work with other developers to determine the best way forward. The emphasis on soft skills—including adaptability, communication, and collaboration—has become as crucial as technical proficiency. As the software development field continues to evolve, it’s clear that the future belongs to those who embrace AI as a powerful complement to their skills rather than viewing it as a threat. The coding profession isn’t disappearing—it’s transforming into a role that demands a more comprehensive skill set, combining technical mastery with strong interpersonal capabilities.


Top 10 Cybersecurity Trends to Expect in 2025

Zero-day vulnerabilities are still one of the major threats in cybersecurity. By definition, these faults remain unknown to software vendors and the larger security community, thus leaving systems exposed until a fix can be developed. Attackers are using zero-day exploits frequently and effectively, affecting even major companies, hence the need for proactive measures. Advanced threat actors use zero-day attacks to achieve goals including espionage and financial crimes. ... Integrating regional and local data privacy regulations such as GDPR and CCPA into the cybersecurity strategy is no longer optional. Companies need to look out for regulations that will become legally binding for the first time in 2025, such as the EU's AI Act. In 2025, regulators will continue to impose stricter guidelines related to data encryption and incident reporting, including in the realm of AI, showing rising concerns about online data misuse. Decentralized security models, such as blockchain, are being considered by some companies to reduce single points of failure. Such systems offer enhanced transparency to users and allow them much more control over their data. ... Verifying user identities has become more challenging as browsers enforce stricter privacy controls and attackers develop more sophisticated bots. 


Navigating AI in Aviation: A Roadmap for Risk and Security Management Professionals

The Roadmap for Artificial Intelligence Safety Assurance, recently published by the FAA, recognizes the potential of AI in aviation and emphasizes the need for safety assurance, industry collaboration and incremental implementation. Combined with other international frameworks, this roadmap offers a global basis for managing AI risks in aviation. ... While AI demonstrates the potential for enhanced operational efficiency, predictive maintenance and even autonomous flight, these benefits come with significant security and compliance risks. ... Differentiating between learned AI (static) and learning AI (adaptive) poses a significant challenge in AI risk management. The FAA roadmap calls for continuous monitoring and assurance, especially for learning AI, echoing the need for dynamic risk assessment protocols like those recommended in NIST-AI-600-1 for managing generative AI models. ... Incorporating AI in aviation is far from straightforward; because human safety is at stake, it involves navigating a constantly evolving landscape of risks and, at times, overbearing regulatory requirements. For risk and security professionals, the key task is to align AI technologies with operational safety and evolving regulatory requirements.


The Urgent Need for Data Minimization Standards

On one side of the spectrum is the redaction of direct identifiers such as names, or payment card information such as credit card numbers. On the other side of the spectrum lies anonymization, where re-identification of individuals is extremely unlikely. Between the two sits pseudonymization, which, depending on the jurisdiction, often means something like reversible de-identification. Many organizations are keen to anonymize their data because, if anonymization is achieved, the data falls outside the scope of data protection laws, as it is no longer considered personal information. ... We hold that the claim that data anonymization is impossible is based on a lack of clarity around what is required for anonymization, with organizations often either wittingly or unwittingly misusing the term for what is actually a redaction of direct identifiers. Furthermore, another common claim is that data minimization is in irresolvable tension with the use of data at large scale in the machine learning context. This claim is based not only on a lack of clarity around data minimization but also on a lack of understanding of the extremely valuable data that often surrounds identifiable information, such as data about products, conversation flows, document topics, and more.
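
To make that spectrum concrete, here is a minimal Python sketch contrasting redaction of direct identifiers with keyed pseudonymization; the record fields, key handling, and token length are illustrative assumptions, not a prescription from the article.

    import hashlib
    import hmac

    SECRET_KEY = b"rotate-me"  # hypothetical key, held separately from the data

    def redact(record):
        # Redaction: blank out direct identifiers. Quasi-identifiers survive,
        # so on its own this is not anonymization.
        return {**record, "name": "[REDACTED]", "card_number": "[REDACTED]"}

    def pseudonymize(record):
        # Pseudonymization: replace the identifier with a keyed hash. Anyone
        # holding the key can recompute and link the token, so the output is
        # generally still personal data.
        token = hmac.new(SECRET_KEY, record["name"].encode(), hashlib.sha256)
        return {**record, "name": token.hexdigest()[:12], "card_number": "[REDACTED]"}

    record = {"name": "Jane Doe", "card_number": "4111111111111111", "product": "road bike"}
    print(redact(record))
    print(pseudonymize(record))

Note that the non-identifying fields (here, the product) survive both treatments, which is exactly the surrounding data the authors argue retains analytic value under data minimization.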


How CISOs can make smarter risk decisions

Bot detection works by recognizing markers of bad bots, including requests originating from malicious domains and telltale patterns of behavior. Establishing a baseline of normal human web activity and recognizing anomalous behavior in incoming traffic is at the core of effective bot detection. ... Unsurprisingly, for businesses focused on managing users’ money, account takeover and carding attacks are common in the financial industry. In these instances, cybercriminals try to break into accounts and steal information from the payments page. As such, the financial industry has been an early adopter of cybersecurity protocols and tools, building fully comprehensive and well-funded security programs, while the travel and hospitality industries have not yet made that pivot in the same way. ... A good CISO makes balanced risk decisions. A bad CISO gets in the way of helping the company innovate. The combination of industry best practices and regulation forcing the adoption of robust security tooling and methodology pushes companies to create a strong baseline on which to build effective protections. However, CISOs must evaluate carefully which assets they choose to put maximum security measures behind. If you argue that everything needs that high level of security, you become the CISO who cried wolf.
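
As a minimal illustration of that baseline idea, the Python sketch below flags clients whose request volume sits far outside a robust baseline; the traffic figures and the tolerance multiplier are invented for the example, and a real detector would combine many more behavioral signals.

    from statistics import median

    # Hypothetical request counts per client over a one-minute window.
    requests_per_client = {
        "10.0.0.1": 12, "10.0.0.2": 9, "10.0.0.3": 11,
        "10.0.0.4": 10, "203.0.113.7": 480,  # burst typical of a scraper
    }

    counts = list(requests_per_client.values())
    baseline = median(counts)                        # robust to the outlier itself
    mad = median(abs(c - baseline) for c in counts)  # median absolute deviation

    # Flag clients whose volume sits far above the human baseline; the
    # multiplier of 10 is illustrative, not a recommended production value.
    for client, n in requests_per_client.items():
        if n > baseline + 10 * max(mad, 1):
            print(f"possible bot: {client} ({n} requests/min)")

The median-based baseline is deliberately resistant to the very outliers being hunted, which is why it is preferred here over a simple mean.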


Developers Are Key to Stopping Rising API Security Threat

Developers and security teams typically share responsibility for ensuring APIs are secure. “While the security team is ultimately responsible for the overall security posture of an organization, developers play a key role in building and managing secure APIs,” Whaley said. “They need to write secure code and implement security measures during the development phase, such as input validation, authentication, encryption and access control.” The security team defines and enforces security policies, he said. They’re also responsible for establishing governance frameworks and managing tools to monitor, detect and respond to threats. ... Developers also play an important role in remediating API security problems, he said. Their job is to implement fixes and ensure that vulnerabilities are properly addressed. Remediating an incident can include fixing vulnerabilities, deploying patches and addressing any misconfigurations. But it can also sometimes mean hiring external help in the form of security consultants, investing in new security tools and covering any legal and compliance fees, he said. “Additionally, there are intangible factors to consider, like damage to brand reputation and loss of customer confidence, which can have a big impact even if they are harder to quantify,” Whaley added.
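
As a rough sketch of the developer-side measures Whaley describes, the framework-agnostic Python below pairs input validation with authentication at the edge of an API handler; the credential store, field names, and limits are hypothetical, and encryption in transit (TLS) plus finer-grained access control would be layered around it in practice.

    import hmac
    import re

    API_KEYS = {"team-a": "s3cret"}  # hypothetical credential store

    def authenticate(client_id, presented_key):
        # Authentication: constant-time comparison avoids timing side channels.
        expected = API_KEYS.get(client_id, "")
        return hmac.compare_digest(expected, presented_key)

    def validate_transfer(payload):
        # Input validation: never trust client-supplied data.
        errors = []
        if not re.fullmatch(r"[A-Z0-9]{8}", str(payload.get("account_id", ""))):
            errors.append("account_id must be 8 uppercase alphanumerics")
        amount = payload.get("amount_cents")
        if not isinstance(amount, int) or not 0 < amount <= 1_000_000:
            errors.append("amount_cents must be a positive integer up to 1,000,000")
        return errors

    payload = {"account_id": "AB12CD34", "amount_cents": 2500}
    if authenticate("team-a", "s3cret") and not validate_transfer(payload):
        print("request accepted")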


Companies Race to Use AI Security Against AI-Driven Threats

First, securing AI by design is crucial, as our customers increasingly rely on AI in their ecosystems. As a cybersecurity solution provider, our objective is to ensure our customers are protected when using new technologies. The second vector involves combating adversaries who use AI to launch attacks. The rate of these attacks is exponentially faster, and the attacks themselves more sophisticated, than ever before. To counter this, we must utilize AI to protect against AI-driven attacks. The third vector focuses on how AI can benefit security practitioners. By simplifying complex data analysis and enhancing product interactions, AI can significantly improve the efficiency and effectiveness of security operations. Solutions such as AI Access Security, which provides visibility into AI usage within enterprises and ensures AI applications are used securely, illustrate this. With 100 customers already benefiting from our AI security solutions, we see a clear shift in maturity levels. ... Autonomous SOCs are becoming a reality, driven by two key factors. First, adversaries are evolving at a pace that outstrips our ability to scale human resources. Second, there's a shortage of qualified cybersecurity talent. These dual pressures on both supply and demand necessitate technological intervention.


Overcoming modern observability challenges

Observability is crucial for quickly detecting issues and taking corrective actions to ensure that application performance does not negatively impact customer experience. With millions of transactions occurring every second, relying on traditional logic, predefined rules, and human intervention is no longer sufficient. According to a 2023 Gartner report, applied observability has emerged as one of the top 10 strategic technology trends, underscoring the growing need to use AI to build smarter, more automated solutions that keep businesses competitive and optimize operations in real time. Today’s observability solutions must go beyond static monitoring by incorporating AI and machine learning to detect patterns, trends, and anomalies. By automatically identifying outliers and emerging issues, AI-driven systems reduce the mean time to detect (MTTD) and mean time to resolve (MTTR), driving efficiency and helping teams address potential problems before they affect end users. ... Organizations need an observability solution that is comprehensive, cost-effective, and intelligent. The Kloudfuse observability platform is designed to monitor modern cloud-native workloads while optimizing costs, offering insights into model performance and mitigating risks.
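
One common shape for such outlier detection is an exponentially weighted moving average (EWMA) over a metric stream. The toy Python detector below, with invented latency samples and thresholds, sketches the general technique rather than any vendor's implementation.

    # Hypothetical p95 latency samples (ms); the spike at index 6 is the anomaly.
    latencies_ms = [102, 98, 105, 99, 101, 97, 250, 103, 100]

    alpha, k = 0.3, 3.0     # smoothing factor and tolerance multiplier (illustrative)
    ewma = latencies_ms[0]  # running baseline
    ewmvar = 0.0            # running variance estimate

    for t, x in enumerate(latencies_ms[1:], start=1):
        deviation = x - ewma
        if abs(deviation) > k * max(ewmvar ** 0.5, 5.0):  # 5 ms floor damps cold-start noise
            print(f"sample {t}: {x} ms deviates from baseline {ewma:.1f} ms")
        # Update the baseline after the check so one anomaly doesn't absorb itself.
        ewma += alpha * deviation
        ewmvar = (1 - alpha) * (ewmvar + alpha * deviation ** 2)

Because the baseline adapts continuously, this style of detector needs no predefined rules, which is precisely the shift away from static monitoring described above.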


Managing Software Engineering Teams of Artificial Intelligence Developers

Regardless of its industry, every organization has an AI solution, is working on AI integration, or has a plan for it in its roadmap. While developers are being trained in the various technological skills needed for development, senior leadership must focus on strategies to integrate and align these efforts with the broader organization. ... Investing in AI alone will not guarantee success for the company. Avoid making investment decisions solely based on the Fear of Missing Out. For the business to thrive in the long run, it must focus on value creation through AI integration. Follow standard processes and conduct thorough due diligence to identify where AI can effectively drive value for your product. Collaborate closely with the product, business, and engineering teams to define the scope of work and develop a strategic vision that ensures alignment within the team. It is also crucial to achieve stakeholder alignment, especially given the complexity of the projects, while setting realistic expectations. ... As an engineering leader, invest in the right skills required for the project. Empower the team to make the best decisions. Build strong expertise within the team, and provide learning opportunities by supporting attendance at learning sessions, conferences, hackathons, and the like.



Quote for the day:

“It's failure that gives you the proper perspective on success.” -- Ellen DeGeneres