Daily Tech Digest - January 21, 2025

AI comes alive: From bartenders to surgical aides to puppies, tomorrow’s robots are on their way

The current generation of robots faces three key challenges: processing visual information quickly enough to react in real time; understanding the subtle cues in human behavior; and adapting to unexpected changes in their environment. Most humanoid robots today depend on cloud computing, and the resulting network latency can make simple tasks like picking up an object difficult. ... Gen AI powers spatial intelligence by helping robots map their surroundings in real time, much like humans do, predicting how objects might move or change. Such advancements are crucial for creating autonomous humanoid robots capable of navigating complex, real-world scenarios with the adaptability and decision-making skills needed for success. While spatial intelligence relies on real-time data to build mental maps of the environment, another approach is to help the humanoid robot infer the real world from a single still image. As explained in a preprint paper, Generative World Explorer (GenEx) uses AI to create a detailed virtual world from a single image, mimicking how humans make inferences about their surroundings. ... Beyond the purely technical obstacles, potential societal objections must be overcome.


Why some companies are backing away from the public cloud

Technical debt may be the root of many moves back to on-premises environments. "Normally this is a self-inflicted thing," Linthicum said. "They didn't refactor the applications to make them more efficient in running on the public cloud providers. So the public cloud providers, much like if we're pulling too much electricity off the grid, just hit them with huge bills to support the computational and storage needs of those under-optimized applications." Rather than spending more money to optimize or refactor applications, these same enterprises moved them back on-premises, said Linthicum. Security and compliance are also issues. Enterprises "realize that it's too expensive to remain compliant in the cloud, with data and sovereignty rules. So, they just make a decision to push it back on-premise." The perceived high costs of cloud operations "often stem from lift-and-shift migrations that in some cases didn't optimize applications for cloud environments," said Miha Kralj, global senior partner for hybrid cloud service at IBM Consulting. "These direct transfers typically maintain existing architectures that don't leverage cloud-native capabilities, resulting in inefficient resource utilization and unexpectedly high expenses." However, the solution to this problem "isn't necessarily repatriation to on-premises infrastructure," said Kralj.


7 Common Pitfalls in Data Science Projects — and How to Avoid Them

It's worth noting, too, that just because data is of low quality at the start of a project doesn't mean the project is bound to fail. There are many effective techniques for improving data quality, such as data cleansing and standardization. When projects fail, it's typically because the team failed to assess data quality and improve it as needed, not because the data was so poor in quality that there was no saving it. ... There are two key stakeholders in any data science project — the IT department, which is responsible for managing data assets, and business users, who determine what the data science project should achieve. Unfortunately, poor collaboration between these groups can cause projects to fail. For example, IT departments might decide to impose access restrictions on data without consulting business users, leading to situations where the business can't actually use the data in the way it intends. Or a lack of input from business stakeholders about what they want to achieve may leave the IT team struggling to determine how to deliver the data resources necessary to support a project. ... A final key challenge that can thwart data science project success is the failure to understand what the goals of data science are, and which methodologies and resources data science requires.
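
To make the data-quality point concrete, here is a minimal sketch of the kind of cleansing and standardization pass the excerpt describes, using pandas; the column names and rules are hypothetical illustrations, not prescriptions.

```python
import pandas as pd

# Hypothetical raw customer data with common quality problems:
# duplicates, inconsistent casing/whitespace, and missing values.
df = pd.DataFrame({
    "customer_id": [101, 101, 102, 103],
    "email": [" Ann@Example.com", " Ann@Example.com", "bob@example.com ", None],
    "country": ["usa", "USA", "U.S.A.", "canada"],
})

# 1. Remove exact duplicate records.
df = df.drop_duplicates()

# 2. Standardize string fields: trim whitespace, normalize case.
df["email"] = df["email"].str.strip().str.lower()

# 3. Map known variants to a canonical value.
country_map = {"usa": "US", "u.s.a.": "US", "canada": "CA"}
df["country"] = df["country"].str.lower().map(country_map)

# 4. Flag (rather than silently drop) records that still need attention.
df["needs_review"] = df["email"].isna() | df["country"].isna()

print(df)
```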


Facial recognition for borders and travel: 2025 trends and insights

Seamless and secure border crossings are crucial for a thriving travel industry. However, border control processes that still rely on traditional manual checks pose unnecessary risks to both national security and traveler satisfaction. Slow and cumbersome identity verification conducted by humans leads to long lines and frustrated travelers. This is where biometrics come in. Biometric technologies, particularly facial recognition, are revolutionizing border security by providing a faster, more secure and more efficient approach to verifying traveler identities. As passenger volumes continue to rise globally, transportation authorities and immigration agencies quickly realize the value of onboarding facial recognition technology to streamline busy and mission-critical border crossings — helping improve throughput, reduce wait times and enhance the overall traveler experience. ... By adopting advanced facial recognition technologies, immigration authorities can: improve the traveler experience, as self-service authentication shortens wait times and delivers a satisfying, hassle-free journey; deliver fast and reliable authentication, with the entire process of authenticating an individual now accomplished in seconds; and enhance border security.


AI-Driven Microservices: The Future of Cloud Scalability

Even with modern auto-scaling in cloud platforms, the limitations are clear. Scaling remains largely reactive, with additional servers spinning up only after demand spikes are detected. This lag leads to temporary throttling and performance degradation. Meanwhile, capacity over-provisioned for peak times sits idle during subsequent low-traffic periods, wasting CPU and server resources. The inadequacy of threshold-based auto-scaling becomes particularly apparent during high-traffic events like holiday sales. Engineers often find themselves on call to handle performance issues manually, adding operational overhead and delaying service recovery. These systems lack predictive capabilities and struggle to optimize cost and performance simultaneously. ... AI offers a solution to these challenges. Through my experience with cloud-native platforms, I have seen how AI can transform scaling capabilities by incorporating predictive analytics. Instead of waiting for problems to occur, AI-driven systems can analyze historical patterns, current trends and multiple data points to anticipate resource needs in advance. This innovation has particular significance for smaller enterprises, enabling them to compete effectively with larger organizations that have traditionally dominated due to superior infrastructure capabilities.
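
As a rough illustration of the predictive approach described above, the sketch below fits a simple linear trend to recent request rates and sizes capacity ahead of demand; the traffic numbers and per-instance capacity are hypothetical, and a production system would use a proper forecasting model.

```python
import math
import numpy as np

# Hypothetical requests-per-second samples from the last 10 minutes.
history = np.array([220, 240, 265, 300, 340, 385, 430, 480, 540, 600])

# Fit a linear trend and forecast the next interval (predictive, not reactive).
t = np.arange(len(history))
slope, intercept = np.polyfit(t, history, 1)
predicted_rps = slope * len(history) + intercept

# Size the fleet ahead of the spike, with headroom for forecast error.
CAPACITY_PER_INSTANCE = 100  # hypothetical RPS one instance can serve
HEADROOM = 1.2
instances = math.ceil(predicted_rps * HEADROOM / CAPACITY_PER_INSTANCE)

print(f"forecast: {predicted_rps:.0f} rps -> scale to {instances} instances")
```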


More AI, More Problems for Software Developers in 2025

Using AI to generate code can leave users — especially more junior developers — without the context the code was written with and who it was written for, making it harder to figure out what’s gone wrong. “The risk is generally higher for junior developers. Senior developers tend to have a much better awareness and quicker understanding of the code that’s generated,” Reynolds observed. “Junior developers are under a lot of pressure to get the job done. They want to move fast, and they don’t necessarily have that contextual awareness of the code change.” Without quality and governance controls — like security scans and dependency checks, and unit, systems and integration testing — deployed throughout the software development lifecycle, he warned, the wrong thing is often merged. ... Shadow IT has developers looking to engineer their way out of a problem by adopting — and often even paying for — tools that aren’t among those officially approved by their employers. Shadow AI is an extension of this trend: the report found that 52% of developers use AI tools that aren’t provided by or explicitly approved by IT. It’s not that developers are behaving insubordinately. The reality is, three years into widespread adoption of generative AI, most organizations still don’t have GenAI policies.


7 top cybersecurity projects for 2025

To effectively secure AI workloads, security teams should first gain an understanding of AI use within their enterprise, as well as the data and models used to power their business. “Next, assemble a cross-functional team to assess risks and develop a comprehensive security strategy,” Ramamoorthy advises. “Following best practices and adopting a secure AI framework will help to enable a strong security foundation and ensure that when AI models are implemented, they are secure by default.” ... With a successful TPRM project, your enterprise will have a better security posture, with fewer vulnerabilities and proactive control over outside hazards, Saine says. TPRM, backed by real-time monitoring and the ability to quickly respond to developing hazards, can also ensure compliance with pertinent laws, reducing the risk of fines and legal headaches. “Compliance will also help your enterprise project credibility and dependability to clients and partners,” he says. ... When implementing trust-by-design principles with AI-powered systems, security leaders should align their goals with overall enterprise objectives while obtaining buy-in from key executives and stakeholders. Additionally, conducting thorough assessments of the development processes can help identify vulnerabilities while prioritizing remediation and controls. 


The Tech Blanket: Building a Seamless Tech Ecosystem

Traditionally, organizations have built their technology strategies around “tech stacks”—discrete tools for solving specific problems. While effective in the short term, this approach often creates silos, with each department operating within its own set of platforms. Knowledge and data are trapped, preventing the organization from realizing its full potential. In 2024, many companies recognized the limitations of this approach and began prioritizing integration. This trend will deepen in 2025 as businesses build interconnected ecosystems where tools work together harmoniously. According to Deloitte, 58% of companies are shifting their focus toward integrating their platforms into unified ecosystems rather than continuing to invest in standalone tools.  ... One of the biggest challenges in building a seamless tech ecosystem is ensuring that tools communicate effectively. Selecting platforms that support open APIs is essential for facilitating easy integration. Open APIs allow different systems to share data and work together, eliminating friction and enabling better collaboration. In practical terms, this means teams can pull insights from a centralized knowledge management platform into other tools, such as CRM systems or analytics dashboards, without additional manual effort. The result? A more connected organization that can move at the speed of business.
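
As a concrete illustration of the open-API integration pattern described above, here is a minimal sketch that pulls an insight from a knowledge management platform's REST API and pushes it into a CRM via its API; both endpoints, the field names, and the token are hypothetical stand-ins, since the excerpt names no specific products.

```python
import requests

KM_API = "https://km.example.com/api/v1"       # hypothetical knowledge platform
CRM_API = "https://crm.example.com/api/v2"     # hypothetical CRM
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credentials

# 1. Pull the latest insight for an account from the knowledge platform.
resp = requests.get(f"{KM_API}/insights", params={"account": "acme"},
                    headers=HEADERS, timeout=10)
resp.raise_for_status()
insight = resp.json()["items"][0]

# 2. Push it into the CRM as a note on the matching account record,
#    with no manual copy-paste between silos.
note = {"account_id": "acme", "body": insight["summary"]}
resp = requests.post(f"{CRM_API}/notes", json=note, headers=HEADERS, timeout=10)
resp.raise_for_status()
print("synced insight", insight["id"], "to CRM")
```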


AI Poised to Deliver Value, Innovation to Software Industry in 2025

“IoT technology has created a new level of visibility into complex, live systems and enables vital insights. By providing real-time data streams for millions of devices, IoT enables them to be monitored for issues and controlled from a distance. This will lead to ever-increasing safety, security, and efficiency in their operation. Smart buildings, transportation systems, logistics networks, and countless other applications all benefit from using IoT to provide essential services at reasonable cost.” ... “The demand for faster software development has become a serious industry threat, increasing code vulnerabilities and leading to avoidable security risks. This relentless development pace is unsustainable and is only being accelerated by Generative AI. The more we speed up development and release cycles with GenAI and otherwise, the more code vulnerabilities are introduced, giving attackers more opportunities to execute their missions.” ... “AI is poised to become a foundational business tool, joining virtualization, cloud computing, and containerization as essential layers of modern infrastructure. By 2025, startups and enterprises will routinely leverage AI for tasks like security, audits, and cost management.”


AI and cybersecurity: A double-edged sword

How exactly is AI tipping the scales in favor of cybersecurity professionals? For starters, it’s revolutionizing threat detection and response. AI systems can analyze vast amounts of data in real time, identifying potential threats with speed and accuracy. Companies like CrowdStrike have documented that their AI-driven systems can detect threats in under one second. But AI’s capabilities don’t stop at detection. When it comes to incident response, AI is proving to be a game-changer. Imagine a security system that doesn’t just alert you to a threat but takes immediate action to neutralize it. That’s the potential of AI-driven automated incident response. From isolating compromised systems to blocking malicious IP addresses, AI can execute these critical tasks swiftly and without human input, dramatically reducing response times and minimizing potential damage. ... AI is not just changing the skill set required of cybersecurity professionals; it’s augmenting it for the better. The ability to work alongside AI systems, interpret their outputs, and make strategic decisions based on AI-generated insights will be paramount for both users and experts. And while AI’s cybersecurity capabilities keep improving, a human paired with an AI tool will still outperform AI by itself ten-fold.
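
A minimal sketch of the kind of real-time anomaly detection the excerpt describes, using scikit-learn's IsolationForest on login telemetry; the features and data are hypothetical, and real platforms combine many more signals.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical login telemetry: [hour_of_day, failed_attempts, MB_downloaded]
normal_activity = np.array([
    [9, 0, 12], [10, 1, 8], [11, 0, 15], [14, 0, 10],
    [15, 1, 9], [16, 0, 14], [10, 0, 11], [13, 1, 13],
])

# Train on baseline behavior, then score new events as they stream in.
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(normal_activity)

new_events = np.array([
    [10, 0, 12],    # looks like business as usual
    [3, 25, 900],   # 3 a.m., 25 failed logins, bulk download: suspicious
])
for event, label in zip(new_events, detector.predict(new_events)):
    status = "ALERT" if label == -1 else "ok"
    print(status, event)
```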



Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein

Daily Tech Digest - January 20, 2025

Robots get their ‘ChatGPT moment’

Nvidia implies that Cosmos will usher in a “ChatGPT moment” for robotics. The company means that, just as Google’s Transformer model enabled radically accelerated training of neural networks (a technology that had existed for many years) and led to LLM chatbots like ChatGPT, Cosmos could do the same for robotics. In the more familiar world of LLMs, we’ve come to understand the relationship between the size of the data sets used for training these models and the speed of that training and their resulting performance and accuracy. ... Driving in the real world with a person as backup is time-consuming, expensive, and sometimes dangerous — especially when you consider that autonomous vehicles need to be trained to respond to dangerous situations. Using Cosmos to train autonomous vehicles would involve the rapid creation of huge numbers of simulated scenarios. For example, imagine the simulation of every kind of animal that could conceivably cross a road — bears, deer, dogs, cats, lizards, etc. — in tens of thousands of different weather and lighting conditions. By the end of all this training, the car’s digital twin in Omniverse would be able to recognize and navigate scenarios of animals on the road regardless of the animal and the weather or time of day. That learning would then be transferred to thousands of real cars, which would also know how to navigate those situations.
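
To give a feel for how quickly such simulated scenario spaces grow, here is a toy sketch that enumerates animal, weather, lighting, and speed combinations; the category lists are hypothetical, and a real pipeline like Cosmos would generate far richer variations.

```python
from itertools import product

# Hypothetical scenario dimensions for road-crossing simulations.
animals = ["bear", "deer", "dog", "cat", "lizard"]
weather = ["clear", "rain", "snow", "fog", "wind"]
lighting = ["dawn", "noon", "dusk", "night", "headlights-only"]
speeds = range(10, 130, 10)  # vehicle speed in km/h

scenarios = list(product(animals, weather, lighting, speeds))
print(f"{len(scenarios)} scenarios from just four coarse dimensions")
# e.g., ('deer', 'fog', 'night', 60) -> render, simulate, label, train

# Even these coarse lists yield 5 * 5 * 5 * 12 = 1,500 cases; adding
# continuous variation per dimension pushes this into the tens of thousands.
```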


How to Use AI in Cyber Deception

Adaptation is one of the most significant ways AI improves honey-potting strategies. Machine learning subsets can evolve alongside bad actors, enabling defenders to anticipate novel techniques. Conventional signature-based detection methods are less effective because they can only flag known attack patterns. Algorithms, on the other hand, use a behavior-based approach. Synthetic data generation is another of AI’s strengths. This technology can produce honeytokens — digital artifacts purpose-built for deceiving would-be attackers. For example, it could create bogus credentials and a fake database. Any attempt to use those during login can be categorized as malicious, because it means the user relied on illegitimate means to gain access and exfiltrate the imitation data. While algorithms can produce an entirely synthetic dataset, they can also add certain characters or symbols to existing, legitimate information to make its copy more convincing. Depending on the sham credentials’ uniqueness, there’s little to no chance of false positives. Minimizing false positives is essential, since most of the tens of thousands of security alerts professionals receive daily are inaccurate. This figure may be even higher for medium- to large-sized enterprises using conventional behavior-based scanners or intrusion detection systems, because they’re often inaccurate.
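
A minimal sketch of the honeytoken idea described above: plant unique fake credentials, then treat any authentication attempt with them as a high-confidence alert. The token format and alerting are hypothetical illustrations.

```python
import secrets

# Plant unique, never-legitimately-used fake credentials (honeytokens).
HONEYTOKENS = {
    f"svc-backup-{secrets.token_hex(4)}": secrets.token_urlsafe(16)
    for _ in range(3)
}

def check_login(username: str, password: str) -> str:
    """Hypothetical login hook: honeytoken use is malicious by definition,
    so it yields a near-zero-false-positive alert."""
    if username in HONEYTOKENS:
        # A real deployment would page the SOC and capture source IP, etc.
        return "ALERT: honeytoken used - treat as active intrusion"
    return "proceed with normal authentication"

# The attacker found a planted credential in a fake config file:
planted_user = next(iter(HONEYTOKENS))
print(check_login(planted_user, "whatever"))
print(check_login("alice", "her-real-password"))
```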


How organizations can secure their AI code

Organizations also expose themselves to risks when developers download machine learning (ML) models or datasets from platforms like Hugging Face. “In spite of security checks on both ends, it may still happen that the model contains a backdoor that becomes active once the model is integrated,” says Alex Ștefănescu, open-source developer at the Organized Crime and Corruption Reporting Project (OCCRP). “This could ultimately lead to data being leaked from the company that used the malicious models.” ... Not all AI-based tools are coming from teams full of software engineers. “We see a lot of adoption being driven by data analysts, marketing teams, researchers, etc. within organizations,” Meyer says. These teams aren’t traditionally developing their own software but are increasingly writing simple tools that adopt AI libraries and models, so they’re often not aware of the risks involved. “This combination of shadow engineering with lower-than-average application security awareness can be a breeding ground for risk,” he adds. ... When it comes to securing enough resources to protect AI systems, some stakeholders might hesitate, viewing it as an optional expense rather than a critical investment. “AI adoption is a divisive topic in many organizations, with some leaders and teams being ‘all-in’ on adoption and some being strongly resistant,” Meyer says. 


AI-driven insights transform security preparedness and recovery

IT security teams everywhere are struggling to meet the scale of actions required to ensure IT operational risk remediation from continually evolving threats. Recovering digital operations after an incident requires a proactive system of IT observability, intelligence, and automation. Organizations should first unify visibility across their IT environments so they can quickly identify and respond to incidents. Additionally, teams need to eliminate data silos to prevent monitoring overload and resolve issues. ... Unfortunately, many companies still lack the foundational elements needed for successful and secure AI adoption. Common challenges include fragmented or low-quality data dispersed across multiple silos, a lack of coordination, a shortage of specialized talent such as data and AI engineers, and a company culture resistant to change. Fostering a culture of security awareness starts with making security a visible and integral part of everyday operations. IT leaders should focus on equipping employees with actionable insights through tools that simplify complex security issues. Training programs, tailored to different roles, help ensure that teams understand specific threats relevant to their responsibilities. Providing real-time feedback, such as simulated scenarios, builds practical awareness.


AI Is Quietly Steering Your Decisions - Before You Make Them

Agentic AI here is a critical enabler. These systems analyze user data across various modalities, including text, voice and behavioral patterns, to predict intentions and influence outcomes. They are more than a handy assistant helping you cross off a to-do list. OpenAI CEO Sam Altman called these agents "AI's killer function," comparing them to "super competent colleagues that know absolutely everything about my whole life - every email, every conversation I've ever had - but don't feel like an extension." And they are everywhere. Microsoft and Google spearheaded chatbot integration into everyday tools, with Microsoft embedding its Bing Chat and AI assistants into Office software and Google enhancing productivity tools such as Workspace with Gemini capabilities. The study cited the example of Meta, which has claimed to achieve human-level play in the game Diplomacy using its AI agent CICERO. The research team behind CICERO, it says, cautions against "the potential danger for conversational AI agents" that "may learn to nudge its conversational partner to achieve a particular objective." Apple's App Intents framework, it explained, has protocols to "predict actions someone might take in the future" and "to suggest the app intent to someone in the future using predictions you [the developer] provide."


Why digital brands investing in AI to replace humans will fail

Despite its strengths, AI cannot (yet) accurately replicate core human qualities such as emotional intelligence, critical thinking, and nuanced judgment. What it can do is automate time-consuming, repetitive operations. Rather than attempting to replace human workers, forward-thinking organisations should harness the power of human-AI collaboration. By approaching AI this way, brands can respond to customers' digital problems faster, meaning employees can use the time gained to direct their efforts to complex problem-solving, strategic planning and customer relations. Those that adopt a hybrid approach, finding the optimal balance between AI and human insight, will be most successful. The collaboration between AI-powered tools and human intelligence creates a powerful combination that can strengthen performance, drive innovation, and help deliver a better overall customer experience. ... On the other hand, businesses that are looking to replace workers, and eventually rely solely on AI-generated operations, risk losing the genuine human touch. This loss of authenticity has the potential to alienate customers, leaving them feeling that their experiences with digital brands are insincere and mechanical.


From devops to CTO: 5 things to start doing now

If you want to be recognized for promotions and greater responsibilities, the first place to start is in your areas of expertise and with your team, peers, and technology leaders. However, shift your focus from getting something done to a practice leadership mindset. Develop a practice or platform your team and colleagues want to use and demonstrate its benefits to the organization. ... One of the bigger challenges for engineers when taking on larger technical responsibilities is shifting their mindset from getting work done today to deciding what work to prioritize and influencing longer-term implementation decisions. Instead of developing immediate solutions, the path to CTO requires planning architecture, establishing governance, and influencing teams to adopt self-organizing standards. ... “If devops professionals want to be considered for the role of CTO, they need to take the time to master a wide range of skills,” says Alok Uniyal, SVP and head of IT process consulting practice at Infosys. “You cannot become a CTO without understanding areas such as enterprise architecture, core software engineering and operations, fostering tech innovation, the company’s business, and technology’s role in driving business value. Showing leadership that you understand all technology workstreams at a company as well as key tech trends and innovations in the industry is critical for CTO consideration.”


The Human Touch in Tech: Why Local IT Support Remains Essential

While AI can handle common issues, complex or unforeseen problems often require creative solutions and in-depth technical expertise. Call center agents, with limited access to resources — and often operating under strict protocols — may be unable to depart from standardized procedures, even when doing so might be beneficial. The collaborative, adaptable problem-solving approach of a skilled, experienced IT technician is often the key to resolving these intricate challenges. Many IT issues require physical intervention and hands-on troubleshooting. Remote support, though helpful, can't always address hardware problems, network configurations, or security breaches that require on-site assessment and repair. Local IT support companies offering on-site visits have a clear advantage in addressing these types of issues efficiently and effectively. ... Local providers often possess a wide range of skills and experience, allowing them to handle a broader spectrum of issues. Their ability to think creatively and collaboratively enables them to address complex problems that may stump call center agents or AI systems. Furthermore, their local presence allows for swift on-site responses to critical situations.


Six ways to reduce cloud database costs without sacrificing performance

Automate data archiving or deletion for unused or outdated records. Use lifecycle policies to move logs older than a specified number of days to cheaper storage, or delete them. TTL (Time to Live) is an easier way to implement such a data lifecycle. TTL refers to a setting that defines the lifespan of a piece of data (e.g., a record or document) in the database. After the specified TTL expires, the data is automatically deleted or marked for deletion by the database. ... Consolidating multiple applications onto a single database results in fewer instances, reducing costs for compute and storage and enabling efficient resource utilisation when workloads have similar usage patterns. The implementation can follow schema-based isolation, where a separate schema is created for each tenant, or row-level isolation, where a tenant ID column is used to segment data within shared tables. One example is to host a SaaS platform for multiple customers on a single database instance with logical partitions. ... Creating copies of specific data items can enhance read performance by reducing costly operations. In an e-commerce store example, you’d typically have separate tables for customers, products, and orders. Retrieving one customer’s order history would involve a query that joins the order table with the customer table and product table.
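
For instance, here is what a TTL-based lifecycle looks like in MongoDB via pymongo; this is a minimal sketch, assuming a hypothetical `logs` collection with a `createdAt` timestamp, and other databases (e.g., DynamoDB, Redis, Cassandra) expose equivalent TTL settings.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical connection
logs = client["app"]["logs"]

# A TTL index: MongoDB's background task automatically deletes documents
# roughly 30 days after their createdAt timestamp, with no application code.
logs.create_index("createdAt", expireAfterSeconds=30 * 24 * 3600)

# Every write just stamps the document; expiry is handled by the database.
logs.insert_one({
    "level": "info",
    "message": "checkout completed",
    "createdAt": datetime.now(timezone.utc),
})
```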


AI, IoT, and cybersecurity are at the heart of our innovation: Sharat Sinha, Airtel Business

At Airtel Business, we understand that cybersecurity is a growing concern for Indian enterprises. With cyberattacks in India projected to reach one trillion per year by 2033, businesses need robust solutions to safeguard their digital assets. That’s where Airtel Secure Internet and Airtel Secure Digital Internet come in. Airtel Secure Internet, in collaboration with Fortinet, provides comprehensive end-to-end protection by integrating Fortinet’s advanced firewall with Airtel’s high-speed Internet Leased Line (ILL). This solution offers 24/7 monitoring, real-time threat detection, and automated mitigation, all powered by Airtel’s Security Operations Centre (SOC) and Fortinet’s SOAR platform. It ensures businesses are protected from a range of cyberthreats while optimising operational efficiency, without the need for large capital investments in security infrastructure. In addition, Airtel Secure Digital Internet, in partnership with Zscaler, uses Zero Trust Architecture (ZTA) to continuously validate user, device, and network interactions. Combining Zscaler’s cloud security with Security Service Edge (SSE) technology, this solution ensures secure cloud access, SSL inspection, and centralised policy enforcement, helping businesses reduce attack surfaces and simplify security management. 



Quote for the day:

"The greatest leader is not necessarily the one who does the greatest things. He is the one that gets the people to do the greatest things." -- Ronald Reagan

Daily Tech Digest - January 19, 2025

Service as Software: How AI Agents Are Transforming SaaS

SaaS empowered users across industries by providing the tools and intelligence to make informed decisions. But it has always stopped short of execution. Lawyers, radiologists, tax consultants, and other service providers rely on SaaS to make decisions, but they remain responsible for the last-mile activity. Service as Software closes this gap. Agents powered by capable LLMs and integrated with existing APIs — and even SaaS platforms — don’t just inform users, they take action on their behalf. Instead of providing tools for human service providers, Service as Software directly delivers outcomes. This transformation is more than technological — it’s economic. ... Enterprises considering transitioning from SaaS to Service as Software often begin by examining which tasks would yield the most value from automation. These tasks are typically repetitive, time-sensitive, or error-prone when conducted manually. Introducing an intelligent agent that can monitor data streams, evaluate decision rules and initiate final actions may require augmenting existing infrastructure — for instance, adding webhooks, implementing new API endpoints, or integrating a rules engine.
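
A toy sketch of the pattern the excerpt describes: an agent that monitors a data stream, evaluates a decision rule, and initiates the final action itself (here via a webhook). The stream, rule, and endpoint are hypothetical.

```python
import requests

WEBHOOK = "https://hooks.example.com/refund"  # hypothetical action endpoint

def decision_rule(event: dict) -> bool:
    """Hypothetical rule: auto-approve small refunds for trusted customers."""
    return event["amount"] < 50 and event["customer_tier"] == "gold"

def agent_step(event: dict) -> str:
    # Service as Software: the agent executes the outcome, not just a report.
    if decision_rule(event):
        requests.post(WEBHOOK, json={"order": event["order_id"]}, timeout=10)
        return f"refund issued for {event['order_id']}"
    return f"escalated {event['order_id']} to a human reviewer"

# Simulated events from an order data stream.
for event in [
    {"order_id": "A1", "amount": 25, "customer_tier": "gold"},
    {"order_id": "B2", "amount": 480, "customer_tier": "silver"},
]:
    print(agent_step(event))
```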


Anthropomorphizing AI: Dire consequences of mistaking human-like for human have already emerged

Perhaps the most dangerous aspect of anthropomorphizing AI is how it masks the fundamental differences between human and machine intelligence. While some AI systems excel at specific types of reasoning and analytical tasks, the large language models (LLMs) that dominate today’s AI discourse — and that we focus on here — operate through sophisticated pattern recognition. These systems process vast amounts of data, identifying and learning statistical relationships between words, phrases, images and other inputs to predict what should come next in a sequence. When we say they “learn,” we’re describing a process of mathematical optimization that helps them make increasingly accurate predictions based on their training data. ... One critical area where anthropomorphizing creates risk is content generation and copyright compliance. When businesses view AI as capable of “learning” like humans, they might incorrectly assume that AI-generated content is automatically free from copyright concerns. ... One of the most concerning costs is the emotional toll of anthropomorphizing AI. We see increasing instances of people forming emotional attachments to AI chatbots, treating them as friends or confidants.


Building Secure Software - Integrating Security in Every Phase of the SDLC

A common problem in software development is that security-related activities are left out or deferred until the final testing phase, which is too late in the SDLC, after most of the critical design and implementation has been completed. Moreover, the security checks performed during the testing phase can be superficial, limited to scanning and penetration testing, which might not reveal more complex security issues. By adopting the shift-left principle, teams are able to detect and fix security flaws early on, save money that would otherwise be spent on costly rework, and have a better chance of avoiding delays going into production. Integrating security into the SDLC should look like weaving rather than stacking. There is no “security phase,” but rather a set of best practices and tools that should be included within the existing phases of the SDLC. A Secure SDLC requires adding security review and testing at each software development stage, from design, to development, to deployment and beyond. From initial planning to deployment and maintenance, embedding security practices ensures the creation of robust and resilient software.
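
One way this weaving shows up in practice is a pipeline gate that runs security checks alongside ordinary tests on every commit. The sketch below assumes the open-source scanners bandit (static analysis) and pip-audit (dependency checks) are installed; the exact toolchain is an illustrative choice, not a prescription.

```python
import subprocess
import sys

# Security checks run with the tests, not bolted on at the end (shift-left).
CHECKS = [
    ["bandit", "-r", "src"],   # static analysis for common security flaws
    ["pip-audit"],             # known-vulnerable dependency check
    ["pytest", "tests"],       # unit/integration tests in the same gate
]

def main() -> int:
    for cmd in CHECKS:
        print(f"==> {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print("gate failed - blocking merge")
            return 1
    print("all checks passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```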


Making AI greener starts with smarter data center design

There’s been a lot of talk about the off-grid energy investments of hyperscalers. But the energy efficiency of AI infrastructure also has a big role to play. Nokia provides networking connectivity inside and between data centers, as well as between end users and data center applications. Understanding this intricate web is important, as it’s not just about making the processes inside a data center faster and more efficient. It’s about making the entire journey between somebody making an AI request—and getting back a response—quick, secure, and more energy efficient. ... Energy, performance, and cost considerations may prompt some cloud providers to build their data centers in remote locations with access to clean energy, passive cooling, and cheaper and more plentiful real estate. However, data sovereignty laws, security concerns, and the ultra-low latency requirements of industrial applications may see a move toward more distributed cloud computing, with AI workloads moving closer to the end user. This would likely lead to more regional, metropolitan, and edge data centers, with some businesses and organizations opting for on-site data centers for mission-critical functions. We may, in fact, see both trends at the same time.


Employees Enter Sensitive Data Into GenAI Prompts Far Too Often

"Utilizing AI for the sake of using AI is destined to fail," said Kris Bondi, CEO and co-founder of Mimoto, in an emailed statement to Dark Reading. "Even if it gets fully implemented, if it isn't serving an established need, it will lose support when budgets are eventually cut or reappropriated." Though Kowski believes that not incorporating GenAI is risky, success can still be achieved, he notes. "Success without AI is still achievable if a company has a compelling value proposition and strong business model, particularly in sectors like engineering, agriculture, healthcare, or local services where non-AI solutions often have greater impact," he said. If organizations do want to pursue incorporating GenAI tools but want to mitigate the high risks that come along with it, the researchers at Harmonic have recommendations on how to best approach this. The first is to move beyond "block strategies" and implement effective AI governance, including deploying systems to track input into GenAI tools in real time, identifying what plans are in use and ensuring that employees are using paid plans for their work and not plans that use inputted data to train systems, gaining full visibility over these tools, sensitive data classification, creating and enforcing workflows, and training employees on best practices and risks of responsible GenAI use.


What is Blue Ocean Strategy? 3 Key Ways to Build a Business in an Uncontested Market

One of the biggest surprises in tackling a neglected market segment is realizing that your future customers might not even know they need you. They may sense a vague discomfort or carry a subconscious worry, but they haven't articulated the problem in a way that translates into action. In my field, most people didn't fully appreciate how complex certain end-of-life tasks could become — until they found themselves in the middle of a crisis they never prepared for. Simply presenting a solution and hoping people will connect the dots doesn't work when the underlying problem is hidden or poorly understood. Education became my most potent tool. ... Building momentum in a market with no clear precedent means learning to paddle in still waters. I needed to constantly fine-tune the product based on authentic customer feedback, invest the time and effort to educate potential users so they could recognize the value of what I was offering, and craft a holistic experience that viewed their challenges from multiple angles. These three strategies became the bedrock of my approach to Blue Ocean markets. 


Secure AI? Dream on, says AI red team

The first step in an AI red teaming operation is to determine which vulnerabilities to target, they said. They suggest: “starting from potential downstream impacts, rather than attack strategies, makes it more likely that an operation will produce useful findings tied to real world risks. After these impacts have been identified, red teams can work backwards and outline the various paths that an adversary could take to achieve them.” ... The two, the authors said, are distinct yet “both useful and can even be complementary. In particular, benchmarks make it easy to compare the performance of multiple models on a common dataset. AI red teaming requires much more human effort but can discover novel categories of harm and probe for contextualized risks.” ... The bottom line here: RAI harms are more ambiguous than security vulnerabilities, and it all has to do with “fundamental differences between AI systems and traditional software.” Most AI safety research, the authors noted, focuses on adversarial users who deliberately break guardrails, when in truth, they maintained, benign users who accidentally generate harmful content are just as important, if not more so.


New AI Architectures Could Revolutionize Large Language Models

For context, transformer architecture, the technology which gave ChatGPT the 'T' in its name, is designed for sequence-to-sequence tasks such as language modeling, translation, and image processing. Transformers rely on “attention mechanisms,” tools for weighing how important a concept is depending on its context, to model dependencies between input tokens, enabling them to process data in parallel rather than sequentially like so-called recurrent neural networks, the dominant technology in AI before transformers appeared. This technology gave models context understanding and marked a before-and-after moment in AI development. ... Google Research's Titans architecture takes a different approach to improving AI adaptability. Instead of modifying how models process information, Titans focuses on changing how they store and access it. The architecture introduces a neural long-term memory module that learns to memorize at test time, similar to how human memory works. ... Overall, the era of AI companies bragging about the sheer size of their models may soon be a relic of the past. If this new generation of neural networks gains traction, then future models won’t need to rely on massive scale to achieve greater versatility and performance.
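
To ground the idea, here is a minimal numpy sketch of scaled dot-product attention, the core computation behind those attention mechanisms: each token's output becomes a context-weighted mix of every other token's value. Dimensions are toy-sized.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # how relevant each token is to each other
    weights = softmax(scores)       # normalized relevance per token
    return weights @ V              # context-weighted mix of values

rng = np.random.default_rng(0)
seq_len, d = 4, 8                   # 4 toy tokens, 8-dim representations
x = rng.normal(size=(seq_len, d))

# In a real transformer, Q, K, V come from learned projections of x.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (4, 8): every token now carries context from all tokens
```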


How to Leverage Network Segmentation for Hospitality Sector PCI SSF Compliance

Network segmentation is the process of dividing a computer network into isolated segments or subnetworks, with each segment protected by security controls like firewalls and access restrictions that restrict traffic flow between segments. This isolation helps contain potential security breaches, preventing them from spreading across the entire network. ... In the context of PCI SSF compliance, network segmentation can help hospitality businesses protect sensitive payment card data by limiting access to it. By isolating the Cardholder Data Environment (CDE) from the rest of the network, organizations can reduce the scope of PCI SSF compliance while also enhancing their overall security posture. ... By isolating sensitive data, network segmentation reduces the risk of unauthorized access and data breaches. It creates multiple layers of defense, making it more difficult for attackers to reach critical systems. This approach also limits the lateral movement of threats, ensuring that a compromised system does not jeopardize the entire network.
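
A quick way to sanity-check such isolation is to verify, from a host outside the CDE, that CDE services are unreachable. Below is a minimal sketch using raw TCP connection attempts; the addresses and ports are hypothetical, and real validation would follow a formal segmentation test plan.

```python
import socket

# Hypothetical CDE hosts/ports that must NOT be reachable from this segment.
CDE_TARGETS = [("10.10.1.5", 5432), ("10.10.1.6", 443)]

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False  # refused, filtered, or timed out: segmentation holding

for host, port in CDE_TARGETS:
    status = "FAIL: reachable" if is_reachable(host, port) else "ok: blocked"
    print(f"{host}:{port} -> {status}")
```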


Overcoming Key Challenges in an AI-Centric Future

Much has been made of AI and its potential dangers in the hands of attackers. It’s true—with the help of AI, launching an attack has never been easier, and it’s likely just a matter of time until we witness a significant AI-driven breach. That said, all is not lost. AI-specific security controls are already beginning to emerge, and as AI becomes more commonplace, newer and more advanced solutions will continue to emerge in the near future. ... Regulations almost always lag behind innovation, and AI is no exception. While a handful of AI regulations have begun to emerge around the world, most organizations are currently taking matters into their own hands by implementing dedicated AI policies to evaluate and control the AI services they use. Right now, those initiatives are focused primarily on maintaining data privacy and preventing AI from making critical errors. These AI safety standards will continue to evolve and will likely be integrated into existing security frameworks, including those put out by independent advisory bodies. Regulators will almost certainly maintain a strong focus on ethical considerations, creating guidelines that help define acceptable and responsible use cases for AI capabilities.



Quote for the day:

“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki

Daily Tech Digest - January 18, 2025

Beyond RAG: How cache-augmented generation reduces latency, complexity for smaller workloads

RAG is an effective method for handling open-domain questions and specialized tasks. It uses retrieval algorithms to gather documents that are relevant to the request and adds context to enable the LLM to craft more accurate responses. ... First, advanced caching techniques are making it faster and cheaper to process prompt templates. The premise of CAG is that the knowledge documents will be included in every prompt sent to the model. Therefore, you can compute the attention values of their tokens in advance instead of doing so when receiving requests. This upfront computation reduces the time it takes to process user requests. Leading LLM providers such as OpenAI, Anthropic and Google provide prompt caching features for the repetitive parts of your prompt, which can include the knowledge documents and instructions that you insert at the beginning of your prompt. ... And finally, advanced training methods are enabling models to do better retrieval, reasoning and question-answering on very long sequences. In the past year, researchers have developed several LLM benchmarks for long-sequence tasks, including BABILong, LongICLBench, and RULER. These benchmarks test LLMs on hard problems such as multiple retrieval and multi-hop question-answering. 
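
A minimal, vendor-neutral sketch of the CAG premise, using a hypothetical client: the knowledge documents form a fixed prompt prefix that is sent identically on every request, so the provider's prompt cache (or a local KV cache) can reuse the precomputed attention state, and only the short question is processed fresh.

```python
# Hypothetical LLM client; real providers (OpenAI, Anthropic, Google) expose
# prompt caching that keys on an identical, repeated prompt prefix.
class CachedContextLLM:
    def __init__(self, client, knowledge_docs: list[str]):
        # The static prefix: instructions plus the full knowledge base.
        # Because it never changes, its attention/KV state is cacheable.
        self.prefix = (
            "Answer strictly from the documents below.\n\n"
            + "\n---\n".join(knowledge_docs)
        )
        self.client = client

    def ask(self, question: str) -> str:
        # Every request: identical prefix (cache hit) plus a tiny fresh suffix.
        # No retrieval step, no vector database: the whole corpus rides along.
        return self.client.complete(self.prefix + f"\n\nQ: {question}\nA:")

# Usage sketch (client and documents are placeholders):
# llm = CachedContextLLM(client=my_client, knowledge_docs=policy_docs)
# print(llm.ask("What is the refund window for enterprise customers?"))
```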


Turning Curiosity into a Career: The Power of OSINT

The beauty of OSINT is that you can start learning and practicing right now, even without a formal background in cybersecurity. Begin by familiarizing yourself with publicly available tools and resources. Social media platforms, search engines and public record databases are great starting points. From there, you can explore specialized tools like Google Dorking for advanced searches, reverse image search for photo analysis, and platforms like Maltego or SpiderFoot for more in-depth investigations. The OSINT Framework provides an extensive list of tools. If you're interested in pursuing OSINT as a career, consider taking advantage of free and paid online courses. Certifications such as GIAC Open Source Intelligence (GOSI) or Certified Ethical Hacker (CEH) can help build your credibility in the field. Participating in OSINT challenges or contributing to community projects is also a great way to hone your skills and showcase your abilities to potential employers. The demand for OSINT skills is growing as technology evolves and data becomes more accessible. Artificial intelligence and machine learning are enhancing OSINT capabilities, making it easier to analyze massive datasets and detect patterns.


Five Trends That Will Drive Software Development in 2025

While organizations worldwide have quickly adopted AI for software development, many still struggle to measure its impact across diverse teams and business functions. Next year, organizations will become more sophisticated about measuring the return on their AI investments and better understand the value this technology can provide. This starts with looking more closely at specific outcomes. Instead of asking a broad question like, ‘How is AI helping my organization?’ leaders should study the impact of AI on tasks, such as test generation, documentation or language translation, and measure the gains in efficiency and productivity for these activities. ... While developers already work at breakneck speed today, technical debt is a persistent issue. The most worrying consequence of this debt is vulnerabilities that can creep into code and go unnoticed or unfixed. Next year, developers will expand their use of AI in software development to significantly reduce technical debt and increase the security of their code. Technical debt often occurs when developers choose an easy or quick solution instead of a better approach that takes longer. Vulnerabilities result when the code is poorly structured, not sufficiently reviewed or when testing is rushed or incomplete.


A Cloud Architect’s Guide to E-Commerce Data Storage

Latency, measured in microseconds, is the enemy of e-commerce storage systems, as slow-performing systems can mean hundreds of thousands of dollars in lost transactions and abandoned shopping carts. Your data platform must be reliable and highly performant even during fluctuating demand; events like Black Friday or unexpected social media trends can put a heavy load on your systems. Infrastructure that supports real-time data processing can be the deciding factor in staying competitive. These challenges necessitate a modern approach to storage — one that is software-defined, scalable and cloud-ready. ... Foundational elements of a modern e-commerce infrastructure consist of software-defined storage often combined with open-source environments like OpenStack, OpenShift, KVM and Kubernetes. The challenge for platform architects, whether building their e-commerce storage platform on premises or in the cloud, is to achieve scale and flexibility without compromising application and site performance. Many legacy storage systems, especially those architected for spinning disks, have performance limitations, resulting in data silos and expensive and time-consuming scaling strategies.


Demand and Supply Issues May Impact AI in 2025

Executives are asking for ROI numbers on analytics, data governance, and data quality programs, and they are demanding dollar values as opposed to “improving customer experience” or “increasing operational efficiency.” ... Organizations have expected quick returns but not realized them because the initial expectations were unrealistic. Later comes the realization that the proper foundation has not been put in place. “Folks are saying they expect ROI in at least three years and more than 30% or so are saying that it would take three to five years when we’ve got two years of generative AI. [H]ow can you expect it to perform so quickly when you think it will take at least three years to realize the ROI? Some companies, some leadership, might be freaking out at this moment,” says Chaurasia. “I think the majority of them have spent half a million on generative AI in the last two years and haven’t gotten anything in return. That’s where the panic is setting in.” Explaining ROI in terms of dollars is difficult, because it’s not as easy as multiplying time savings by individual salaries. Some companies are working to develop frameworks, however. ... If enterprises are reducing AI investments because the anticipated benefits aren’t being realized, vendors will pull back.


4 Strategies To Thrive In A Manager-Less Workplace

One of the most important skills you can build is emotional regulation. Work can be intense, often frustrating. It’s easy to get caught up in your own emotions and—since emotions are catching—other people’s as well. Staying even-keeled pays off in maintaining good relationships with peers and also keeping yourself clear-headed so you can problem-solve when things go wrong. You can work on your emotional self-control by learning the tools of journaling and mindfulness. ... When you communicate powerfully, you navigate more easily. You get what you need more efficiently, you sell your ideas, and you build better relationships. All of these outcomes are useful when you’re on your own to build a case for getting promoted. The best way to build these skills is to practice. Volunteer to give large presentations and ask for feedback. Craft your emails and Slack messages with an understanding of the receiver and ask them if they have suggestions for you. ... Your network inside your company can also provide the emotional support you would have gotten from your manager. And, when it comes time for you to be promoted, in most companies you need your colleagues to support you. Look around at your coworkers to see who are the most interesting, plugged-in, or effective.


Dark Data: Recovering the Lost Opportunities

Dark data is the data collected and stored by an organization but is not analyzed or used for any essential purpose. It is frequently referred to as "data that lies in the shadows" because it is not actively used or essential in decision-making processes. ... Dark data can be highly beneficial to businesses as it offers insights and business intelligence that wouldn't be available otherwise. Companies that analyze dark data can better understand their customers, operations, and market trends. This enables them to make the best decisions and improve overall performance. Dark data can help organizations recoup lost opportunities by uncovering previously unknown patterns and trends. ... Once the dark data has been collected, it must be cleansed before further analysis. This may include deleting duplicate data, correcting errors, and formatting information to make it easier to work with. After the data has been cleansed and categorized, it can be examined to reveal patterns and insights that will aid decision-making. ... Collaborating with cross-functional teams, such as IT, data science, and business divisions, can assist in guaranteeing that dark data is studied in light of the organization's broader goals and objectives. ... The difference between “data deletion” and “data destruction” is critical to understand. “Data deletion” simply means removing a file from a system, making it appear inaccessible, while “data destruction” is a more thorough process that permanently erases data from a storage device, making it completely irretrievable. Deleting data isn’t enough. Without proper destruction protocols, “deleted” data remains vulnerable to breaches, regulatory compliance, and data recovery tools. ... A well-defined data destruction policy is your organization’s first line of defense. It outlines when, how, and under what circumstances data should be destroyed. Without a formal policy, data is often overlooked, forgotten, or destroyed haphazardly, creating compliance and security risks. To implement this, start by identifying the types of data your organization collects and classifies, such as PII or proprietary records. Define clear retention periods based on regulatory requirements like GDPR or CCPA and document the necessary steps, tools, and roles for secure destruction. Assign accountability to ensure oversight and follow-through. A formal policy isn’t just a “nice-to-have.” It’s a compliance requirement for many regulations, including GDPR and CCPA.


Can GenAI Restore the ‘Humanity’ in Banking that Digital Has Removed?

Abbott is not arguing for turning customers directly over to GenAI — not yet. Even the most-advanced pioneers his firm works with aren’t risking that. ... Abbott believes GenAI, as it becomes a standard part of banking, will play out in a similar way. Employees will adapt, often more slowly than anticipated, but they will change. This will lead to shifts in the role of management vis-à-vis employees empowered by GenAI. Abbott says this will likely take a similar path to that seen as banks adopted agile development. Young people came into the bank using the tools, just as many are already experimenting with GenAI. Banking leaders liked the idea of their organizations "doing agile." But what Abbott calls "the frozen middle" management tier had to grin and plunge into unfamiliar turf. "That frozen middle will have to thaw out and find a new way of working," says Abbott. Bank leadership must help by providing tools and opportunities for trying it out. One of the biggest early challenges will be tempering the GenAI tech to the task. Abbott explains that GenAI can be tuned to be "low temperature" or "high temperature," or somewhere in between. The former refers to GenAI working with tight guardrails, such as in sensitive areas like dispute management. 
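
For readers unfamiliar with the "temperature" knob, here is a minimal sketch using the OpenAI Python SDK; the model name is an illustrative assumption. Low temperature keeps outputs tight and predictable for guardrailed tasks like dispute management, while high temperature allows more creative variation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_reply(prompt: str, temperature: float) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return resp.choices[0].message.content

prompt = "Draft a response to a customer disputing a $32 card charge."
# "Low temperature": deterministic, tightly guardrailed wording.
print(draft_reply(prompt, temperature=0.1))
# "High temperature": looser, more varied phrasing (riskier in banking).
print(draft_reply(prompt, temperature=0.9))
```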


Federated learning: The killer use case for generative AI

Federated learning is emerging as a game-changing approach for enterprises looking to leverage the power of LLMs while maintaining data privacy and security. Rather than moving sensitive data to LLM providers or building isolated small language models (SLMs), federated learning enables organizations to train LLMs using their private data where it resides. Everyone who worries about moving private enterprise data to a public space, such as uploading it to an LLM, can continue to have “private data.” Private data may exist on a public cloud provider or in your data center. The real power of federation comes from the tight integration between private enterprise data and sophisticated LLM capabilities. This integration allows companies to leverage their proprietary information and broader knowledge in models like GPT-4 or Google Gemini without compromising security. ... As enterprises struggle to balance AI capabilities against data privacy concerns, federated learning provides the best of both worlds. Also, it allows for a choice of LLMs. You can leverage LLMs that are not a current part of your ecosystem but may be a better fit for your specific application. For instance, LLMs that focus on specific verticals are becoming more popular. 
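
A toy numpy sketch of the core federated learning loop (federated averaging): each site computes a model update on data that never leaves it, and only the parameter updates are aggregated centrally. Real deployments add secure aggregation, and applying this to LLMs typically means federating the fine-tuning of adapter weights rather than full models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical private datasets at three sites (features X, targets y).
# The raw data never leaves its owner; only weight updates are shared.
sites = []
true_w = np.array([2.0, -1.0])
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    sites.append((X, y))

def local_update(w, X, y, lr=0.1, steps=10):
    for _ in range(steps):  # local SGD on private data
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for round_ in range(5):
    # Each site trains locally; the server only sees the resulting weights.
    local_ws = [local_update(w_global.copy(), X, y) for X, y in sites]
    w_global = np.mean(local_ws, axis=0)  # federated averaging

print("learned weights:", w_global)  # approaches [2, -1] without pooling data
```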



Quote for the day:

"Too many of us are not living our dreams because we are living our fears." -- Les Brown

Daily Tech Digest - January 17, 2025

The Architect’s Guide to Understanding Agentic AI

All business processes can be broken down into two planes: a control plane and a tools plane. The tools plane is a collection of APIs, stored procedures and external web calls to business partners. However, for organizations that have started their AI journey, it could also include calls to traditional machine learning models (wave No. 1) and LLMs (wave No. 2) operating in “one-shot” mode. ... The promise of agentic AI is to use LLMs with full knowledge of an organization’s tools plane and allow them to build and execute the logic needed for the control plane. This can be done by providing a “few-shot” prompt to an LLM that has been fine-tuned on an organization’s tools plane: for example, a prompt that lists the available tools and worked examples before posing the question (see the sketch after this excerpt). This is also known as letting the LLM think slowly. ... If agentic AI still seems to be made up of too much magic, consider a simple example: every developer who has to write code daily probably asks an LLM questions that work in exactly this way. ... Agentic AI is the next logical evolution of AI. It is based on capabilities with a solid footing in AI’s first and second waves. The promise is the use of AI to solve more complex problems by allowing them to plan, execute tasks and revise — in other words, allowing them to think slowly. This also promises to produce more accurate responses.
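
Here is a hypothetical illustration of such a few-shot prompt, expressed as a Python string; the tool names and the worked example are invented for the sketch, since the article's own example is not included in this excerpt.

```python
# A hypothetical few-shot prompt: the LLM sees the tools plane plus worked
# examples, then plans and sequences the control-plane logic itself.
FEW_SHOT_PROMPT = """
You can call these tools (the organization's tools plane):
- get_customer(id) -> customer record
- get_orders(customer_id) -> list of orders
- issue_credit(order_id, amount) -> confirmation

Example:
Question: Refund order O-17 for customer C-3 if it shipped late.
Plan: 1) get_customer("C-3") 2) get_orders("C-3") 3) check O-17 ship date
      4) if late, issue_credit("O-17", full amount)

Question: Credit customer C-9 for any order delivered damaged.
Plan:
"""

# response = llm.complete(FEW_SHOT_PROMPT)  # hypothetical client call
```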


AI datacenters putting zero emissions promises out of reach

Datacenters' use of water and land are other bones of contention, which, in combination with their reliance on tax breaks and the limited number of local jobs they deliver, will see them face growing opposition from local residents and environmental groups. Uptime highlights that many governments have set targets for GHG emissions to become net-zero by a set date, but warns that because the AI boom looks set to test power availability, it will almost certainly put these pledges out of reach. ... Many governments seem convinced of the economic benefits promised by AI at the expense of other concerns, the report notes. The UK is a prime example, this week publishing the AI Opportunities Action Plan and vowing to relax planning rules to prioritize datacenter builds. ... Increasing rack power presents several challenges, the report warns, including the sheer space taken up by power distribution infrastructure such as switchboards, UPS systems, distribution boards, and batteries. Without changes to the power architecture, many datacenters risk becoming an electrical plant built around a relatively small IT room. Solving this will call for changes such as medium-voltage (over 1 kV) distribution to the IT space and novel power distribution topologies. However, this overhaul will take time to unfold, with 2025 potentially a pivotal year for investment to make this possible.


State of passkeys 2025: passkeys move to mainstream

One of the critical factors driving passkeys into the mainstream is the full passkey-readiness of devices, operating systems and browsers. Apple (iOS, macOS, Safari), Google (Android, Chrome) and Microsoft (Windows, Edge) have fully integrated passkey support across their platforms: over 95 percent of all iOS and Android devices are passkey-ready, and over 90 percent have passkey functionality enabled. With Windows soon supporting synced passkeys, all major operating systems ensure users can securely and effortlessly access their credentials across devices. ... With full device support, a polished UX, growing user familiarity, and a proven track record among early adopter implementations, there’s no reason for businesses to delay adopting passkeys. The business advantages of passkeys are compelling. Companies that previously relied on SMS-based authentication can save considerably on SMS costs. Beyond that, enterprises adopting passkeys benefit from reduced support overhead (since fewer password resets are needed), lower risk of breaches (thanks to phishing-resistance), and optimized user flows that improve conversion rates. Collectively, these perks make a convincing business case for passkeys.


Balancing usability and security in the fight against identity-based attacks

AI and ML are a double-edged sword in cybersecurity. On one hand, cybercriminals are using these technologies to make their attacks faster and smarter. They can create highly convincing phishing emails, generate deepfake content, and even find ways to bypass traditional security measures. For example, generative AI can craft emails or videos that look almost real, tricking people into falling for scams. On the flip side, AI and ML are also helping defenders. These technologies allow security systems to quickly analyze vast amounts of data, spotting unusual behavior that might indicate compromised credentials. ... Targeted security training can be useful, but generally you want to reduce the dependency on humans as much as possible. This is why controls that meet users where they are become critical. If you can deliver point-in-time guidance, or outright prevent something like a user entering their password into a phishing site, you significantly reduce the dependency on the human to make the right decision unassisted every time. When you consider how hard it can be for even security professionals to spot the more sophisticated phishing sites, it’s essential that we help people out as much as possible with technical controls.
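As a rough sketch of the kind of technical control described above, a client could refuse password entry when the destination host does not match the domain the credential was enrolled for, which is the check that defeats lookalike phishing domains. The user list, domains, and function name below are all hypothetical; this is the pattern, not any product's implementation.

```python
from urllib.parse import urlparse

# Hypothetical enrollment record: which domain each credential belongs to.
ENROLLED = {"alice@example.com": "accounts.example.com"}

def allow_password_entry(user, submit_url):
    """Permit password entry only on the enrolled domain or its subdomains."""
    host = urlparse(submit_url).hostname or ""
    expected = ENROLLED.get(user)
    return expected is not None and (
        host == expected or host.endswith("." + expected)
    )

print(allow_password_entry("alice@example.com",
                           "https://accounts.example.com/login"))    # True
print(allow_password_entry("alice@example.com",
                           "https://accounts-example.com.evil.io"))  # False
```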


Understanding Leaderless Replication for Distributed Data

Leaderless replication is another fundamental replication approach for distributed systems. It alleviates problems of multi-leader replication while, at the same time, introducing its own problems. Write conflicts in multi-leader replication are tackled in leaderless replication with quorum-based writes and systematic conflict resolution. Cascading failures, synchronization overhead, and operational complexity can be handled in leaderless replication via its decentralized architecture. Removing leaders can simplify cluster management, failure handling, and recovery mechanisms. Any replica can handle writes and reads. ... Direct writes and coordination-based replication are the most common approaches in leaderless replication. In the first approach, clients write directly to node replicas, while in the second, a coordinator mediates writes. It is worth mentioning that, unlike the leader-follower concept, coordinators in leaderless replication do not enforce a particular ordering of writes. ... Failure handling is one of the most challenging aspects of both approaches. While direct writes provide better theoretical availability, they can be problematic during failure scenarios. Coordinator-based systems can provide clearer failure semantics, but at the cost of potential coordinator bottlenecks.
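A minimal sketch of the quorum mechanics: a write succeeds once w of n replicas acknowledge it, a read consults r replicas, and choosing w + r > n guarantees the read set overlaps the latest write set. Versioning here is a simple timestamp and replicas are in-process dicts; a real client would contact replicas in parallel, use any r of n, and run read repair, none of which this toy attempts.

```python
import time

class LeaderlessStore:
    def __init__(self, n=3, w=2, r=2):
        assert w + r > n, "quorum overlap requires w + r > n"
        self.replicas = [dict() for _ in range(n)]
        self.w, self.r = w, r

    def write(self, key, value):
        """Write to replicas until w acknowledge; stamp with a version."""
        stamped = (time.monotonic_ns(), value)
        acks = 0
        for replica in self.replicas:
            replica[key] = stamped
            acks += 1
            if acks >= self.w:
                break  # remaining replicas would be repaired later
        return acks >= self.w

    def read(self, key):
        """Consult r replicas and return the newest version seen."""
        versions = [rep[key] for rep in self.replicas[: self.r] if key in rep]
        return max(versions)[1] if versions else None

store = LeaderlessStore()
store.write("user:1", "v1")
print(store.read("user:1"))  # "v1"
```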


Blockchain in Banking: Use Cases and Examples

Bitcoin has entered a space usually reserved for gold and sovereign bonds: national reserves. While the U.S. Federal Reserve maintains that it cannot hold Bitcoin under current regulations, other financial systems are paying close attention to its potential role as a store of value. On the global stage, Bitcoin is being viewed not just as a speculative asset but as a hedge against inflation and currency volatility. Governments are now debating whether digital assets can sit alongside gold bars in their vaults. Behind all this activity lies blockchain - providing transparency, security, and a framework for something as ambitious as a digital reserve currency. ... Financial assets like real estate, investment funds, or fine art are traditionally expensive, hard to divide, and slow to transfer. Blockchain changes this by converting these assets into digital tokens, enabling fractional ownership and simplifying transactions. UBS launched its first tokenized fund on the Ethereum blockchain, allowing investors to trade fund shares as digital assets. This approach reduces administrative costs, accelerates settlements, and improves accessibility for investors. Additionally, one of Central and Eastern Europe’s largest banks has tokenized fine art on Aleph Zero blockchain. This enables fractional ownership of valuable art pieces while maintaining verifiable proof of ownership and authenticity.
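To illustrate the fractional-ownership idea behind tokenization, here is a toy ledger in the style of an ERC-20 balance map: the asset is split into shares, and ownership transfers are just balance updates on a shared record. The asset, share count, and party names are invented for the sketch; real tokenized funds run on chain infrastructure with signatures and consensus.

```python
class TokenizedAsset:
    """A toy share ledger for one tokenized asset."""

    def __init__(self, name, total_shares, issuer):
        self.name = name
        self.balances = {issuer: total_shares}  # issuer holds all shares at mint

    def transfer(self, sender, recipient, shares):
        if self.balances.get(sender, 0) < shares:
            raise ValueError("insufficient shares")
        self.balances[sender] -= shares
        self.balances[recipient] = self.balances.get(recipient, 0) + shares

art = TokenizedAsset("Fine Art Piece #7", total_shares=1_000, issuer="bank")
art.transfer("bank", "investor_a", 150)  # investor_a now owns 15%
print(art.balances)
```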


Decentralized AI in Edge Computing: Expanding Possibilities

Federated learning enables decentralized training of AI models directly across multiple edge devices. This approach eliminates the need to transfer raw data to a central server, preserving privacy and reducing bandwidth consumption. Models are trained locally, with only aggregated updates shared to improve the global system. ... Localized data processing empowers edge devices to conduct real-time analytics, facilitating faster decision-making and minimizing reliance on central infrastructure. This capability is fundamental for applications such as autonomous vehicles and industrial automation, where even milliseconds can be vital. ... Blockchain technology is pivotal in decentralized AI for edge computing by providing a secure, immutable ledger for data sharing and task execution across edge nodes. It ensures transparency and trust in resource allocation, model updates, and data verification processes. ... By processing data directly at the edge, decentralized AI removes the delays in sending data to and from centralized servers. This capability ensures faster response times, enabling near-instantaneous decision-making in critical real-time applications. ... Decentralized AI improves privacy by enabling sensitive information to be processed locally on the device rather than sent to external servers.
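The localized-processing point can be shown in a few lines: the device acts on raw readings immediately and ships only a compact summary upstream, so neither latency nor privacy depends on a round trip to a central server. The sensor values, threshold, and field names are illustrative assumptions.

```python
from statistics import mean

def process_locally(readings, alert_threshold=80.0):
    """Decide on-device in real time; emit only an aggregate upstream."""
    action = "brake" if max(readings) > alert_threshold else "continue"
    summary = {"avg": round(mean(readings), 1), "n": len(readings)}
    return action, summary  # raw readings never leave the device

action, summary = process_locally([42.0, 55.5, 91.2])
print(action, summary)  # immediate local decision; only the summary is uploaded
```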


The Myth of Machine Learning Reproducibility and Randomness

The nature of ML systems contributes to the challenge of reproducibility. ML components implement statistical models that provide predictions about some input, such as whether an image shows a tank or a car. But it is difficult to provide guarantees about these predictions. As a result, guarantees about the resulting probabilistic distributions are often given only in the limit, that is, as distributions across a growing sample. These outputs can also be described by calibration scores and statistical coverage, such as: “We expect the true value of the parameter to be in the range [0.81, 0.85] 95 percent of the time.” ... There are two basic techniques we can use to manage reproducibility. First, we control the seeds for every randomizer used; in practice there may be many. Second, we need a way to tell the system to serialize the training process executed across concurrent and distributed resources. Both approaches require support from the platform provider. ... Despite the importance of these exact reproducibility modes, they should not be enabled in production. Engineering and testing should use these configurations for setup, debugging and reference tests, but not during final development or operational testing.
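As a concrete sketch of both techniques, assuming a PyTorch-based stack (the article names no framework): pin every randomizer's seed, then force deterministic algorithm choices so concurrent kernels cannot reorder results. These are real PyTorch knobs, and they trade speed for reproducibility, which is why, as noted above, they belong in debugging and reference runs rather than production.

```python
import os
import random

import numpy as np
import torch

def make_reproducible(seed=1234):
    """Pin all RNG seeds and force deterministic execution."""
    random.seed(seed)        # Python's built-in RNG
    np.random.seed(seed)     # NumPy RNG
    torch.manual_seed(seed)  # seeds CPU and all CUDA RNGs
    # Required by cuBLAS for deterministic kernels on CUDA.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    # Error out if an op has no deterministic implementation.
    torch.use_deterministic_algorithms(True)
    # Disable cuDNN autotuning, which can pick different kernels per run.
    torch.backends.cudnn.benchmark = False
```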


The High-Stakes Disconnect For ICS/OT Security

ICS technologies, crucial to modern infrastructure, are increasingly targeted in sophisticated cyber-attacks. These attacks, often aimed at causing irreversible physical damage to critical engineering assets, highlight the risks of interconnected and digitized systems. Recent incidents like TRISIS, CRASHOVERRIDE, Pipedream, and Fuxnet demonstrate the evolution of cyber threats from mere nuisances to potentially catastrophic events, orchestrated by state-sponsored groups and cybercriminals. These actors pursue not just financial gain but also disruptive outcomes and acts of warfare, blending cyber and physical attacks. Additionally, human-operated ransomware and targeted ICS/OT ransomware have become a growing concern in recent times. ... Traditional IT security measures, when applied to ICS/OT environments, can provide a false sense of security and disrupt engineering operations and safety. Thus, it is important to consider and prioritize the SANS Five ICS Cybersecurity Critical Controls. This freely available whitepaper sets forth the five most relevant critical controls for an ICS/OT cybersecurity strategy that can flex to an organization's risk model and provides guidance for implementing them.


Execs are prioritizing skills over degrees — and hiring freelancers to fill gaps

Companies are adopting more advanced approaches to assessing potential and current employee skills, blending AI tools with hands-on evaluations, according to Monahan. AI-powered platforms are being used to match candidates with roles based on their skills, certifications, and experience. “Our platform has done this for years, and our new UMA (Upwork’s Mindful AI) enhances this process,” she said. Gartner, however, warned that “rapid skills evolutions can threaten quality of hire, as recruiters struggle to ensure their assessment processes are keeping pace with changing skills. Meanwhile, skills shortages place more weight on new hires being the right hires, as finding replacement talent becomes increasingly challenging. Robust appraisal of candidate skills is therefore imperative, but too many assessments can lead to candidate fatigue.” ... The shift toward skills-based hiring is further driven by a readiness gap in today’s workforce. Upwork’s research found that only 25% of employees feel prepared to work effectively alongside AI, and even fewer (19%) can proactively leverage AI to solve problems. “As companies navigate these challenges, they’re focusing on hiring based on practical, demonstrated capabilities, ensuring their workforce is agile and equipped to meet the demands of a rapidly evolving business landscape,” Monahan said.



Quote for the day:

“If you set your goals ridiculously high and it’s a failure, you will fail above everyone else’s success.” -- James Cameron