
Daily Tech Digest - February 28, 2025


Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel


Microservice Integration Testing a Pain? Try Shadow Testing

Shadow testing is especially useful for microservices with frequent deployments, helping services evolve without breaking dependencies. It validates schema and API changes early, reducing risk before consumer impact. It also assesses performance under real conditions and ensures proper compatibility with third-party services. ... Shadow testing doesn’t replace traditional testing but rather complements it by reducing reliance on fragile integration tests. While unit tests remain essential for validating logic and end-to-end tests catch high-level failures, shadow testing fills the gap of real-world validation without disrupting users. Shadow testing follows a common pattern regardless of environment and has been implemented by tools like Diffy from Twitter/X, which introduced automated-response comparisons to detect discrepancies effectively. ... The environment where shadow testing is performed may vary, providing different benefits. More realistic environments are obviously better:
Staging shadow testing — Easier to set up, avoids compliance and data isolation issues, and can use synthetic or anonymized production traffic to validate changes safely.
Production shadow testing — Provides the most accurate validation using live traffic but requires safeguards for data handling, compliance and test workload isolation.
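
To make the pattern concrete, here is a minimal Python sketch of the shadow-testing loop: mirror each request to the candidate service, always serve the primary response, and log any divergence for offline review. The call_primary and call_candidate functions are hypothetical stand-ins for real service clients, not Diffy's actual API.

```python
# Minimal sketch of the shadow-testing pattern: serve the primary
# response, mirror the request to the candidate, diff the results.
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("shadow")

def call_primary(request: dict) -> dict:      # stand-in for the live service
    return {"status": 200, "total": request["qty"] * 10}

def call_candidate(request: dict) -> dict:    # stand-in for the new version
    return {"status": 200, "total": request["qty"] * 10 + 1}  # a regression

def handle(request: dict) -> dict:
    primary = call_primary(request)
    try:
        candidate = call_candidate(request)   # shadow call; user never sees it
        if candidate != primary:
            log.warning("divergence on %s: primary=%s candidate=%s",
                        json.dumps(request), primary, candidate)
    except Exception:
        log.exception("candidate failed (user unaffected)")
    return primary                            # always serve the primary result

handle({"qty": 3})
```

Because the candidate's failures and divergences never reach the user, the comparison logs can be reviewed at leisure before any cutover.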


The rising threat of shadow AI

Creating an Office of Responsible AI can play a vital role in a governance model. This office should include representatives from IT, security, legal, compliance, and human resources to ensure that all facets of the organization have input in decision-making regarding AI tools. This collaborative approach can help mitigate the risks associated with shadow AI applications. You want to ensure that employees have secure and sanctioned tools. Don’t forbid AI—teach people how to use it safely. Indeed, the “ban all tools” approach never works; it lowers morale, causes turnover, and may even create legal or HR issues. The call to action is clear: Cloud security administrators must proactively address the shadow AI challenge. This involves auditing current AI usage within the organization and continuously monitoring network traffic and data flows for any signs of unauthorized tool deployment. Yes, we’re creating AI cops. However, don’t think they get to run around and point fingers at people or let your cloud providers point fingers at you. This is one of those problems that can only be solved with a proactive education program aimed at making employees more productive and not afraid of getting fired. Shadow AI is yet another buzzword to track, but it’s also undeniably a growing problem for cloud computing security administrators.


Can AI live up to its promise?

The debate about truly transformative AI may not be about whether it can think or be conscious like a human, but rather about its ability to perform complex tasks across different domains autonomously and effectively. It is important to recognize that the value and usefulness of machines do not depend on their ability to exactly replicate human thought and cognitive abilities, but rather on their ability to achieve similar or better results through different methods. Although the human brain has inspired much of the development of contemporary AI, it need not be the definitive model for the design of superior AI. Perhaps by freeing the development of AI from strict neural emulation, researchers can explore novel architectures and approaches that optimize different objectives, constraints, and capabilities, potentially overcoming the limitations of human cognition in certain contexts. ... Some human factors that could be stumbling blocks on the road to transformative AI include: the information overload we receive, the possible misalignment with our human values, the possible negative perception we may be acquiring, the view of AI as our competitor, the excessive dependence on human experience, the possible perception of futility of ethics in AI, the loss of trust, overregulation, diluted efforts in research and application, the idea of human obsolescence, or the possibility of an “AI-cracy”, for example.


The end of net neutrality: A wake-up call for a decentralized internet

We live in a time when the true ideals of a free and open internet are under attack. The most recent repeal of net neutrality regulations is taking us toward a more centralized, controlled version of the internet. In this scenario, a decentralized, permissionless internet offers a powerful alternative to today’s reality. Decentralized systems can address the threat of censorship by distributing content across a network of nodes, ensuring that no single entity can block or suppress information. Decentralized physical infrastructure networks (DePIN) demonstrate how decentralized storage can keep data accessible even when network parts are disrupted or taken offline. This censorship resistance is crucial in regions where governments or corporations try to limit free expression online. Decentralization can also cultivate economic democracy by eliminating intermediaries like ISPs and related fees. Blockchain-based platforms allow smaller, newer players to compete with incumbent services and content companies on a level playing field. The Helium network, for example, uses a decentralized model to challenge traditional telecom monopolies with community-driven wireless infrastructure. In a decentralized system, developers don’t need approval from ISPs to launch new services.


Steering by insights: A C-Suite guide to make data work for everyone

With massive volumes of data to make sense of, having reliable and scalable modern data architectures that can organise and store data in a structured, secure, and governed manner while ensuring data reliability and integrity is critical. This is especially true in the hybrid, multi-cloud environment in which companies operate today. Furthermore, as we face a new “AI summer”, executives are experiencing increased pressure to respond to the tsunami of hype around AI and its promise to enhance efficiency and competitive differentiation. This means companies will need to rely on high-quality, verifiable data to implement AI-powered technologies such as Generative AI and Large Language Models (LLMs) at an enterprise scale. ... Beyond infrastructure, companies in India need to look at ways to create a culture of data. In today’s digital-first organisations, many businesses require real-time analytics to operate efficiently. To enable this, organisations need to create data platforms that are easy to use and equipped with the latest tools and controls so that employees at every level can get their hands on the right data to unlock productivity, saving them valuable time for other strategic priorities. Building a data culture also needs to come from the top; it is imperative to ensure that data is valued and used strategically and consistently to drive decision-making.


The Hidden Cost of Compliance: When Regulations Weaken Security

What might be a bit surprising, however, is one particular pain point that customers in this vertical bring up repeatedly. What is this mysterious pain point? I’m not sure if it has an official name or not, but many people I meet with share with me that they are spending so much time responding to regulatory findings that they hardly have time for anything else. This is troubling to say the least. It may be an uncomfortable discussion to have, but I’d argue that it is long past time we as a security community had this discussion. ... The threats enterprises face change and evolve quickly – even rapidly I might say. Regulations often have trouble keeping up with the pace of that change. This means that enterprises are often forced to solve last year’s or even last decade’s problems, rather than the problems that might actually pose a far greater threat to the enterprise. In my opinion, regulatory agencies need to move more quickly to keep pace with the changing threat landscape. ... Regulations are often produced by large, bureaucratic bodies that do not move particularly quickly. This means that if some part of the regulation is ineffective, overly burdensome, impractical, or otherwise needs adjusting, it may take some time before this change happens. In the interim, enterprises have no choice but to comply with something that the regulatory body has already acknowledged needs adjusting.


Why the future of privileged access must include IoT – securing the unseen

The application of PAM to IoT devices brings unique complexities. IoT devices vary enormously, and many that have been operational for years lack built-in security, user interfaces, or associated users. Unlike traditional identity management, which revolves around human credentials, IoT devices rely on keys and certificates, with each device undergoing a complex identity lifecycle over its operational lifespan. Managing these identities across thousands of devices is a resource-intensive task, exacerbated by constrained IT budgets and staff shortages. ... Implementing a PAM solution for IoT involves several steps. Before anything else, organisations need to achieve visibility of their network. Many currently lack this crucial insight, making it difficult to identify vulnerabilities or manage device access effectively. Once this visibility is achieved, organisations must then identify and secure high-risk privileged accounts to prevent them from becoming entry points for attackers. Automated credential management is essential to replace manual password processes, ensuring consistency and reducing oversight. Policies must be enforced to authorise access based on pre-defined rules, guaranteeing secure connections from the outset. Default credentials – a common exploit for attackers – should be updated regularly, and automation can handle this efficiently.
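
As a sketch of the automated credential management step, the snippet below rotates known default passwords across a discovered device inventory and stores the replacements in a vault. The inventory, the vault, and set_device_password are all hypothetical placeholders; a real deployment would go through the PAM vendor's SDK and each device's management API.

```python
# Sketch: automated rotation of default IoT credentials into a vault.
import secrets

DEVICE_INVENTORY = ["cam-001", "cam-002", "plc-17"]   # from network discovery
KNOWN_DEFAULTS = {"admin", "password", "12345"}       # vendor default passwords

vault = {}  # stand-in for an encrypted credential vault

def set_device_password(device_id: str, new_password: str) -> None:
    # Would call the device's management API in a real deployment.
    print(f"[{device_id}] credential updated")

def rotate(device_id: str, current: str) -> None:
    # Rotate if the device still uses a default, or has never been vaulted.
    if current in KNOWN_DEFAULTS or device_id not in vault:
        new_password = secrets.token_urlsafe(24)      # strong random secret
        set_device_password(device_id, new_password)
        vault[device_id] = new_password               # only the vault knows it

for dev in DEVICE_INVENTORY:
    rotate(dev, current="admin")
```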


Understanding the AI Act and its compliance challenges

There is a clear tension between the transparency obligations imposed on providers of certain AI systems under the AI Act and some of their rights and business interests, such as the protection of trade secrets and intellectual property. The EU legislator has expressly recognized this tension, as multiple provisions of the AI Act state that transparency obligations are without prejudice to intellectual property rights. For example, Article 53 of the AI Act, which requires providers of general-purpose AI models to provide certain information to organizations that wish to integrate the model downstream, explicitly calls out the need to observe and protect intellectual property rights and confidential business information or trade secrets. In practice, a good faith effort from all parties will be required to find the appropriate balance between the need for transparency to ensure safe, reliable and trustworthy AI, while protecting the interests of providers that invest significant resources in AI development. ... The AI Act imposes a number of obligations on AI system vendors that will help in-house lawyers in carrying out this diligence. Under Article 13 of the AI Act, vendors of high-risk AI systems are, for example, required to provide sufficient information to (business) deployers to allow them to understand the high-risk AI system’s operation and interpret its output.


Why fast-learning robots are wearing Meta glasses

The technology acts as a sophisticated translator between human and robotic movement. Using mathematical techniques called Gaussian normalization, the system maps the rotations of a human wrist to the precise joint angles of a robot arm, ensuring natural motions get converted into mechanical actions without dangerous exaggerations. This movement translation works alongside a shared visual understanding — both the human demonstrator’s smartglasses and the robot’s cameras feed into the same artificial intelligence program, creating common ground for interpreting objects and environments. ... The EgoMimic researchers didn’t invent the concept of using consumer electronics to train robots. One pioneer in the field, a former healthcare-robot researcher named Dr. Sarah Zhang, has demonstrated 40% improvements in the speed of training healthcare robots using smartphones and digital cameras; they enable nurses to teach robots through gestures, voice commands, and real-time demonstrations instead of complicated programming. This improved robot training is made possible by AI that can learn from fewer examples. A nurse might show a robot how to deliver medications twice, and the robot generalizes the task to handle variations like avoiding obstacles or adjusting schedules. 


Targeted by Ransomware, Middle East Banks Shore Up Security

The financial services industry in the UAE — and the Middle East at large — sees cyber wargaming as an important way to identify weaknesses and develop defenses to the latest threats, Jamal Saleh, director general of the UAE Banks Federation, said in a statement announcing the completion of the event. "The rapid adoption and deployment of advanced technologies in the banking and financial sector have increased risks related to transaction security and digital infrastructure," he said in the statement, adding that the sector is increasingly aware "of the importance of such initiatives to enhance cybersecurity systems and ensure a secure and advanced environment for customers, especially with the rapid developments in modern technology and the rise of cybersecurity threats using advanced artificial intelligence (AI) techniques." ... Ransomware remains a major threat to the financial industry, but attackers have shifted from distributed denial-of-service (DDoS) attacks to phishing, data breaches, and identity-focused attacks, according to Shilpi Handa, associate research director for the Middle East, Turkey, and Africa at business intelligence firm IDC. "We see trends such as increased investment in identity and data security, the adoption of integrated security platforms, and a focus on operational technology security in the finance sector," she says.

Daily Tech Digest - February 05, 2025


Quote for the day:

"You may only succeed if you desire succeeding; you may only fail if you do not mind failing." --Philippos


Neural Networks – Intuitively and Exhaustively Explained

The process of thinking within the human brain is the result of communication between neurons. You might receive stimulus in the form of something you saw, then that information is propagated to neurons in the brain via electrochemical signals. The first neurons in the brain receive that stimulus, then each neuron may choose whether or not to "fire" based on how much stimulus it received. "Firing", in this case, is a neuron’s decision to send signals to the neurons it’s connected to. ... Neural networks are, essentially, a mathematically convenient and simplified version of neurons within the brain. A neural network is made up of elements called "perceptrons", which are directly inspired by neurons. ... In AI there are many popular activation functions, but the industry has largely converged on three popular ones: ReLU, Sigmoid, and Softmax, which are used in a variety of different applications. Out of all of them, ReLU is the most common due to its simplicity and ability to generalize to mimic almost any other function. ... One of the fundamental ideas of AI is that you can "train" a model. This is done by asking a neural network (which starts its life as a big pile of random data) to do some task. Then, you somehow update the model based on how the model’s output compares to a known good answer.
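
These ideas reduce to very little code. Below is a toy perceptron in Python: a weighted sum of inputs passed through each of the three activation functions named above. The random weights stand in for the "big pile of random data" an untrained network starts from.

```python
# A single perceptron (weighted sum + activation) with ReLU, Sigmoid, Softmax.
import numpy as np

def relu(x):
    return np.maximum(0, x)                   # fires only if input is positive

def sigmoid(x):
    return 1 / (1 + np.exp(-x))               # squashes to (0, 1)

def softmax(x):
    e = np.exp(x - np.max(x))                 # shift for numerical stability
    return e / e.sum()                        # normalizes to probabilities

rng = np.random.default_rng(0)
inputs = np.array([0.5, -1.2, 3.0])           # the "stimulus" the perceptron receives
weights = rng.normal(size=3)                  # random, i.e. untrained, weights
bias = rng.normal()

pre_activation = inputs @ weights + bias      # the weighted sum of stimulus
print("ReLU:   ", relu(pre_activation))
print("Sigmoid:", sigmoid(pre_activation))
print("Softmax:", softmax(np.array([pre_activation, 0.0])))
```

Training then amounts to nudging weights and bias so the output moves closer to the known good answer.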


Why honeypots deserve a spot in your cybersecurity arsenal

In addition to providing critical threat intelligence for defenders, honeypots can often serve as helpful deception techniques to ensure attackers focus on decoys instead of valuable and critical organizational data and systems. Once malicious activity is identified, defenders can use the findings from the honeypots to look for indicators of compromise (IoC) in other areas of their systems and environments, potentially catching further malicious activity and minimizing the dwell time of attackers. In addition to threat intelligence and attack detection value, honeytokens often have the benefit of having minimal false positives, given they are highly customized decoy resources deployed with the intent of not being accessed. This contrasts with broader security tooling, which often suffers from high rates of false positives from low-fidelity alerts and findings that burden security teams and developers. ... Enterprises need to put some thought into the placement of the honeypots. It is common for them to be used in environments and systems that may be potentially easier for attackers to access, such as publicly exposed endpoints and systems that are internet accessible, as well as internal network environments and systems. The former, of course, is likely to get more interaction and provide broader generic insights. 
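
A honeytoken can be as simple as a credential that no legitimate process ever uses, plus a hook that fires when it is touched. The sketch below assumes a hypothetical auth-layer callback; the key format is made up for illustration.

```python
# Minimal honeytoken sketch: any use of the decoy key is a high-fidelity alert.
import logging

logging.basicConfig(level=logging.WARNING)

HONEYTOKENS = {"AKIAFAKEDECOY123456"}   # planted in configs, repos, file shares

def on_auth_attempt(access_key: str, source_ip: str) -> None:
    """Hook called by the auth layer on every credential use."""
    if access_key in HONEYTOKENS:
        # No legitimate caller ever uses this key, so false positives are rare.
        logging.warning("HONEYTOKEN USED: key=%s source=%s, investigate now",
                        access_key, source_ip)

on_auth_attempt("AKIAFAKEDECOY123456", "203.0.113.7")
```

The low false-positive property the excerpt describes falls directly out of the design: the token exists only to be misused.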


IoT Technology: Emerging Trends Impacting Industry And Consumers

An emerging IoT trend is the rise of emotion-aware devices that use sensors and artificial intelligence to detect human emotions through voice, facial expressions or physiological data. For businesses, this opens doors to hyper-personalized customer experiences in industries like retail and healthcare. For consumers, it means more empathetic tech—think stress-relieving smart homes or wearables that detect and respond to anxiety. ... The increasing prevalence of IoT tech means that it is being increasingly deployed into “less connected” environments. As a result, the user experience needs to be adapted so that it’s not wholly dependent on good connectivity—instead, priorities must include how to gracefully handle data gaps and robust fallbacks with missing control instructions. ... IoT systems can now learn user preferences, optimizing everything from home automation to healthcare. For businesses, this means deeper customer engagement and loyalty; for consumers, it translates to more intuitive, seamless interactions that enhance daily life. ... While not a newly emerging trend, the Industrial Internet of Things is an area of focus for manufacturers seeking greater efficiency, productivity and safety. Connecting machines and systems with a centralized work management platform gives manufacturers access to real-time data. 


When digital literacy fails, IT gets the blame

By insisting that requisite digital skills and system education are mastered before a system cutover occurs, the CIO assumes a leadership role in the educational portion of each digital project, even though IT itself may not be doing the training. Where IT should be inserting itself is in the area of system skills training and testing before the system goes live. The dual goals of a successful digital project should be two-fold: a system that’s complete and ready to use; and a workforce that’s skilled and ready to use it. ... IT business analysts, help desk personnel, IT trainers, and technical support personnel all have people-helping and support skills that can contribute to digital education efforts throughout the company. The more support that users have, the more confidence they will gain in new digital systems and business processes — and the more successful the company’s digital initiatives will be. ... Eventually, most of the technical glitches were resolved, and doctors, patients, and support medical personnel learned how to integrate virtual visits with regular physical visits and with the medical record system. By the time the pandemic hit in 2020, telehealth visits were already well under way. These visits worked because the IT was there, the pandemic created an emergency scenario, and, most importantly, doctors, patients, and medical support personnel were already trained on using these systems to best advantage.


What you need to know about developing AI agents

“The success of AI agents requires a foundational platform to handle data integration, effective process automation, and unstructured data management,” says Rich Waldron, co-founder and CEO of Tray.ai. “AI agents can be architected to align with strict data policies and security protocols, which makes them effective for IT teams to drive productivity gains while ensuring compliance.” ... One option for AI agent development comes directly as a service from platform vendors that use your data to enable agent analysis, then provide the APIs to perform transactions. A second option is from low-code or no-code, automation, and data fabric platforms that can offer general-purpose tools for agent development. “A mix of low-code and pro-code tools will be used to build agents, but low-code will dominate since business analysts will be empowered to build their own solutions,” says David Brooks, SVP of Evangelism at Copado. “This will benefit the business through rapid iteration of agents that address critical business needs. Pro coders will use AI agents to build services and integrations that provide agency.” ... Organizations looking to be early adopters in developing AI agents will likely need to review their data management platforms, development tools, and smarter devops processes to enable developing and deploying agents at scale.


The Path of Least Resistance to Privileged Access Management

While PAM allows organizations to segment accounts, providing a barrier between the user’s standard access and needed privileged access and restricting access to information that is not needed, it also adds a layer of internal and organizational complexity. This is because users perceive it as removing their access to files and accounts they have typically had the right to use, and they do not always understand why. It can bring changes to their established processes. They don’t see the security benefit and often resist the approach, seeing it as an obstacle to doing their jobs and causing frustration amongst teams. As such, PAM is perceived to be difficult to introduce because of this friction. ... A significant gap in the PAM implementation process lies in the lack of comprehensive awareness among administrators. They often do not have a complete inventory of all accounts, the associated access levels, their purposes, ownership, or the extent of the security issues they face. ... Consider a scenario where a company has a privileged Windows account with access to 100 servers. If PAM is instructed to discover the scope of this Windows account, it might only identify the servers that have been accessed previously by the account, without revealing the full extent of its access or the actions performed.


Quantum networking advances on Earth and in space

“The most established use case of quantum networking to date is quantum key distribution — QKD — a technology first commercialized around 2003,” says Monga. “Since then, substantial advancements have been achieved globally in the development and production deployment of QKD, which leverages secure quantum channels to exchange encryption keys, ensuring data transfer security over conventional networks.” Quantum key distribution networks are already up and running, and are being used by companies, he says, in the U.S., in Europe, and in China. “Many commercial companies and startups now offer QKD products, providing secure quantum channels for the exchange of encryption keys, which ensures the safe transfer of data over traditional networks,” he says. Companies offering QKD include Toshiba, ID Quantique, LuxQuanta, HEQA Security, Think Quantum, and others. One enterprise already using a quantum network to secure communications is JPMorgan Chase, which is connecting two data centers with a high-speed quantum network over fiber. It also has a third quantum node set up to test next-generation quantum technologies. Meanwhile, the need for secure quantum networks is higher than ever, as quantum computers get closer to prime time.


What are the Key Challenges in Mobile App Testing?

One of the major issues in mobile app testing is the sheer variety of devices in the market. With numerous models, each having different screen sizes, pixel densities, operating system (OS) versions and hardware specifications, ensuring the app is responsive across all devices becomes a task. Testing for compatibility on every device and OS can be tiresome and expensive. While tools like emulators and cloud-based testing platforms can help, it remains essential to conduct tests on real devices to ensure accurate results. ... In addition to device fragmentation, another key challenge is the wide range of OS versions. A device may run one version of an OS while another runs on a different version, leading to inconsistencies in app performance. Just like any other software, mobile apps need to function seamlessly across multiple OS versions, including Android, iPhone Operating System (iOS) and other platforms. Furthermore, operating systems are updated frequently, which can cause apps to break or stop functioning. ... Mobile app users interact with apps under various network conditions, including Wi-Fi, 4G, 5G or limited connectivity. Testing how an app performs in different network conditions is crucial to ensure it does not hang or load slowly when the connection is weak.


Reimagining KYC to Meet Regulatory Scrutiny

Implementing AI and ML allows KYC to run in the background rather than having staff manually review information as they can, said Jennifer Pitt, senior analyst for fraud and cybersecurity with Javelin Strategy & Research. “This allows the KYC team to shift to other business areas that require more human interaction like investigations,” Pitt said. Yet use of AI and ML remains low at many banks. Currently, fraudsters and cybercriminals are using generative adversarial networks - machine learning models that create new data that mirrors a training set - to make fraud less detectable. Fraud professionals should leverage generative adversarial networks to create large datasets that closely mirror actual fraudulent behavior. This process involves using a generator to create synthetic transaction data and a discriminator to distinguish between real and synthetic data. By training these models iteratively, the generator improves its ability to produce realistic fraudulent transactions, allowing fraud professionals to simulate emerging fraud types and account takeovers, and enhance detection models’ sensitivity to these evolving threats. Instead of waiting to gather sufficient historical data from known fraudulent behaviors, GANs enable a more proactive approach, helping fraud teams quickly understand new fraud trends and patterns, Pitt said.
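
A compressed sketch of that generator/discriminator loop is below, using PyTorch. The "real" data here is a stand-in Gaussian; a fraud team would substitute engineered transaction features, and the tiny networks and step count are illustrative only.

```python
# Minimal GAN skeleton: generator fakes transaction features, discriminator
# learns to tell them from real samples, and the two improve in tandem.
import torch
import torch.nn as nn

torch.manual_seed(0)
FEATURES, NOISE, BATCH = 4, 8, 64

gen = nn.Sequential(nn.Linear(NOISE, 32), nn.ReLU(), nn.Linear(32, FEATURES))
disc = nn.Sequential(nn.Linear(FEATURES, 32), nn.ReLU(), nn.Linear(32, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)

def real_batch():
    # Stand-in for sampled fraudulent transactions.
    return torch.randn(BATCH, FEATURES) * 2.0 + 5.0

for step in range(500):
    # Discriminator step: separate real from synthetic.
    fake = gen(torch.randn(BATCH, NOISE)).detach()
    d_loss = (loss_fn(disc(real_batch()), torch.ones(BATCH, 1)) +
              loss_fn(disc(fake), torch.zeros(BATCH, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: produce samples the discriminator labels as real.
    fake = gen(torch.randn(BATCH, NOISE))
    g_loss = loss_fn(disc(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print("synthetic sample:", gen(torch.randn(1, NOISE)).detach().numpy())
```

Iterating this loop is exactly the "training iteratively" step the excerpt describes: the generator's synthetic fraud becomes harder to distinguish, which in turn sharpens detection models trained against it.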


How Agentic AI Will Transform Banking (and Banks)

Agentic AI has two intertwined vectors. For banks, one path is internal, and focused on operational efficiency for tasks including the automation of routine data entry and compliance and regulatory checks, summaries of email and reports, and the construction of predictive models for trading and risk management to bolster insights into market dynamics, fraud and credit and liquidity risk. The other path is consumer facing, and revolves around managing customer relationships, from automated help desks staffed by chatbots to personalized investment portfolio recommendations. Both trajectories aim to improve efficiency and reduce costs. Agentic AI "could have a bigger impact on the economy and finance than the internet era," Citigroup wrote in a January 2025 report that calls the technology the "Do It For Me" Economy. ... Meanwhile, automated AI decisions could inadvertently violate laws and regulations on consumer protection, anti-money laundering or fair lending laws. Agentic AI that can instruct an agent to make a trade based on bad data or assumptions could lead to financial losses and create systemic risk within the banking system. "Human oversight is still needed to oversee inputs and review the decisioning process," Davis says. 

Daily Tech Digest - November 21, 2024

Building Resilient Cloud Architectures for Post-Disaster IT Recovery

A resilient cloud architecture is designed to maintain functionality and service quality during disruptive events. These architectures ensure that critical business applications remain accessible, data remains secure, and recovery times are minimized, allowing organizations to maintain operations even under adverse conditions. To achieve resilience, cloud architectures must be built with redundancy, reliability, and scalability in mind. This involves a combination of technologies, strategies, and architectural patterns that, when applied collectively ... Cloud-based DRaaS solutions allow organizations to recover critical workloads quickly by replicating environments in a secondary cloud region. This ensures that essential services can be restored promptly in the event of a disruption. Automated backups, on the other hand, ensure that all extracted data is continually saved and stored in a secure environment. Using regular snapshots can also provide rapid restoration points, giving teams the ability to revert systems to a pre-disaster state efficiently. ... Infrastructure as code (IaC) allows for the automated setup and configuration of cloud resources, providing a faster recovery process after an incident.
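
As a hedged illustration of the snapshot idea, the sketch below takes and locates EBS restore points with boto3 (create_snapshot and describe_snapshots are real boto3 calls; the volume ID, tags, and retention approach are assumptions). A DRaaS product wraps equivalent primitives and adds replication to a secondary region.

```python
# Sketch: regular snapshots as rapid restoration points, via the AWS API.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
VOLUME_ID = "vol-0123456789abcdef0"   # hypothetical volume

def take_snapshot() -> str:
    """Create a tagged restore point for the volume."""
    snap = ec2.create_snapshot(
        VolumeId=VOLUME_ID,
        Description="nightly restore point",
        TagSpecifications=[{
            "ResourceType": "snapshot",
            "Tags": [{"Key": "purpose", "Value": "dr"}],
        }],
    )
    return snap["SnapshotId"]

def latest_restore_point() -> str:
    """Find the newest snapshot to revert to after an incident."""
    snaps = ec2.describe_snapshots(
        Filters=[{"Name": "volume-id", "Values": [VOLUME_ID]}])["Snapshots"]
    return max(snaps, key=lambda s: s["StartTime"])["SnapshotId"]
```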


Agile Security Sprints: Baking Security into the SDLC

Making agile security sprints effective requires organizations to embrace security as a continuous, collaborative effort. The first step? Integrating security tasks into the product backlog right alongside functional requirements. This approach ensures that security considerations are tackled within the same sprint, allowing teams to address potential vulnerabilities as they arise — not after the fact when they're harder and more expensive to fix. ... By addressing security iteratively, teams can continuously improve their security posture, reducing the risk of vulnerabilities becoming unmanageable. Catching security issues early in the development lifecycle minimizes delays, enabling faster, more secure releases, which is critical in a competitive development landscape. The emphasis on collaboration between development and security teams breaks down silos, fostering a culture of shared responsibility and enhancing the overall security-consciousness of the organization. Quickly addressing security issues is often far more cost-effective than dealing with them post-deployment, making agile security sprints a necessary choice for organizations looking to balance speed with security.


The new paradigm: Architecting the data stack for AI agents

With the semantic layer and historical data-based reinforcement loop in place, organizations can power strong agentic AI systems. However, it’s important to note that building a data stack this way does not mean downplaying the usual best practices. This essentially means that the platform being used should ingest and process data in real-time from all major sources, have systems in place for ensuring the quality/richness of the data and then have robust access, governance and security policies in place to ensure responsible agent use. “Governance, access control, and data quality actually become more important in the age of AI agents. The tools to determine what services have access to what data become the method for ensuring that AI systems behave in compliance with the rules of data privacy. Data quality, meanwhile, determines how well an agent can perform a task,” Naveen Rao, VP of AI at Databricks, told VentureBeat. ... “No agent, no matter how high the quality or impressive the results, should see the light of day if the developers don’t have confidence that only the right people can access the right information/AI capability. This is why we started with the governance layer with Unity Catalog and have built our AI stack on top of that,” Rao emphasized.


Enhancing visibility for better security in multi-cloud and hybrid environments

The number one challenge for infrastructure and cloud security teams is visibility into their overall risk–especially in complex environments like cloud, hybrid cloud, containers, and Kubernetes. Kubernetes is now the tool of choice for orchestrating and running microservices in containers, but it has also been one of the last areas to catch speed from a security perspective, leaving many security teams feeling caught on their heels. This is true even if they have deployed admission control or have other container security measures in place. Teams need a security tool in place that can show them who is accessing their workloads and what is happening in them at any given moment, as these environments have an ephemeral nature to them. A lot of legacy tooling just has not kept up with this demand. The best visibility is achieved with tooling that allows for real-time visibility and real-time detection, not point-in-time snapshotting, which does not keep up with the ever-changing nature of modern cloud environments. To achieve better visibility in the cloud, automate security monitoring and alerting to reduce manual effort and ensure comprehensive coverage. Centralize security data using dashboards or log aggregation tools to consolidate insights from across your cloud platforms.


How Augmented Reality is Shaping EV Development and Design

Traditionally, prototyping has been a costly and time-consuming stage in vehicle development, often requiring multiple physical models and extensive trial and error. AR is disrupting this process by enabling engineers to create and test virtual prototypes before building physical ones. Through immersive visualizations, teams can virtually assess design aspects like fit, function, and aesthetics, streamlining modifications and significantly shortening development cycles. ... One of the key shifts in EV manufacturing is the emphasis on consumer-centric design. EV buyers today expect not just efficiency but also vehicles that reflect their lifestyle choices, from customizable interiors to cutting-edge tech features. AR offers manufacturers a way to directly engage consumers in the design process, offering a virtual showroom experience that enhances the customization journey. ... AR-assisted training is one frontier seeing a lot of adoption. By removing humans from dangerous scenarios while still allowing them to interact with those same scenarios, companies can increase safety while still offering practical training. In one example from Volvo, augmented reality is allowing first responders to assess damage on EVs and proceed with caution.


Digital twins: The key to unlocking end-to-end supply chain growth

Digital twins can be used to model the interaction between physical and digital processes all along the supply chain—from product ideation and manufacturing to warehousing and distribution, from in-store or online purchases to shipping and returns. Thus, digital twins paint a clear picture of an optimal end-to-end supply chain process. What’s more, paired with today’s advances in predictive AI, digital twins can become both predictive and prescriptive. They can predict future scenarios to suggest areas for improvement or growth, ultimately leading to a self-monitoring and self-healing supply chain. In other words, digital twins empower the switch from heuristic-based supply chain management to dynamic and granular optimization, providing a 360-degree view of value and performance leakage. To understand how a self-healing supply chain might work in practice, let’s look at one example: using digital twins, a retailer sets dynamic SKU-level safety stock targets for each fulfillment center that dynamically evolve with localized and seasonal demand patterns. Moreover, this granular optimization is applied not just to inventory management but also to every part of the end-to-end supply chain—from procurement and product design to manufacturing and demand forecasting. 
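
The SKU-level safety-stock idea has a classic closed form: safety stock = z * sigma_demand * sqrt(lead time), recomputed as local demand statistics drift. Below is a small worked sketch; the service level, demand figures, and fulfillment centers are illustrative only.

```python
# Dynamic SKU-level safety stock from the standard z * sigma * sqrt(LT) formula.
import math

def safety_stock(z: float, demand_std: float, lead_time_days: float) -> float:
    """Units held beyond expected demand to hit a target service level."""
    return z * demand_std * math.sqrt(lead_time_days)

# z = 1.65 is roughly a 95% service level; demand std deviation and lead
# time differ per fulfillment center, so each gets its own target.
centers = {"Austin": (12.0, 4), "Leeds": (30.0, 9)}
for center, (std, lt) in centers.items():
    print(center, round(safety_stock(1.65, std, lt)), "units")
```

In a digital-twin setting, the per-center demand statistics feeding this formula would be re-estimated continuously from the twin, which is what makes the targets "dynamically evolve with localized and seasonal demand patterns."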


Illegal Crypto Mining: How Businesses Can Prevent Themselves From Being ‘Cryptojacked’

Business leaders might believe that illegal crypto mining programs pose no risks to their operations. Considering the number of resources most businesses dedicate to cybersecurity, it might seem like a low priority in comparison to other risks. However, the successful deployment of malicious crypto mining software can lead to even more risks for businesses, putting their cybersecurity posture in jeopardy. Malware and other forms of malicious software can drain computing resources, cutting the life expectancy of computer hardware. This can decrease the long-term performance and productivity of all infected computers and devices. Additionally, the large amount of energy required to support the high computing power of crypto mining can drain electricity across the organization. But one of the most severe risks associated with malicious crypto mining software is that it can include other code that exploits existing vulnerabilities. ... While powerful cybersecurity tools are certainly important, there’s no single solution to combat illegal crypto mining. But there are different strategies that business leaders can implement to reduce the likelihood of a breach, and mitigating human error is among the most important. 


10 Most Impactful PAM Use Cases for Enhancing Organizational Security

Security extends beyond internal employees as collaborations with third parties also introduce vulnerabilities. PAM solutions allow you to provide vendors with time-limited, task-specific access to your systems and monitor their activity in real time. With PAM, you can also promptly revoke third-party access when a project is completed, ensuring no dormant accounts remain unattended. Suppose you engage third-party administrators to manage your database. In this case, PAM enables you to restrict their access based on a "need-to-know" basis, track their activities within your systems, and automatically remove their access once they complete the job. ... Reused or weak passwords are easy targets for attackers. Relying on manual password management adds another layer of risk, as it is both tedious and prone to human error. That's where PAM solutions with password management capabilities can make a difference. Such solutions can help you secure passwords throughout their entire lifecycle — from creation and storage to automatic rotation. By handling credentials with such PAM solutions and setting permissions according to user roles, you can make sure all the passwords are accessible only to authorized users. 
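
The time-boxed, need-to-know grant logic is easy to picture in code. Below is a minimal sketch with an in-memory grant store and automatic expiry; a PAM product enforces the same checks at a session gateway rather than in application code.

```python
# Sketch: time-limited, task-scoped third-party access with auto-revocation.
from datetime import datetime, timedelta, timezone

grants = {}  # (vendor, resource) -> expiry timestamp

def grant_access(vendor: str, resource: str, hours: int) -> None:
    """Grant scoped access for a fixed window."""
    grants[(vendor, resource)] = datetime.now(timezone.utc) + timedelta(hours=hours)

def is_allowed(vendor: str, resource: str) -> bool:
    """Check a grant; expired grants are revoked on the spot."""
    expiry = grants.get((vendor, resource))
    if expiry is None or datetime.now(timezone.utc) >= expiry:
        grants.pop((vendor, resource), None)   # no dormant accounts remain
        return False
    return True

grant_access("dba-contractor", "orders-db", hours=8)   # need-to-know scope
print(is_allowed("dba-contractor", "orders-db"))       # True during the window
print(is_allowed("dba-contractor", "billing-db"))      # False: never granted
```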


The Information Value Chain as a Framework for Tackling Disinformation

The information value chain has three stages: production, distribution, and consumption. Claire Wardle proposed an early version of this framework in 2017. Since then, scholars have suggested tackling disinformation through an economics lens. Using this approach, we can understand production as supply, consumption as demand, and distribution as a marketplace. In so doing, we can single out key stakeholders at each stage and determine how best to engage them to combat disinformation. By seeing disinformation as a commodity, we can better identify and address the underlying motivations ... When it comes to the disinformation marketplace, disinformation experts mostly agree it is appropriate to point the finger at Big Tech. Profit-driven social media platforms have understood for years that our attention is the ultimate gold mine and that inflammatory content is what attracts the most attention. There is, therefore, a direct correlation between how much disinformation circulates on a platform and how much money it makes from advertising. ... To tackle disinformation, we must think like economists, not just like fact-checkers, technologists, or investigators. We must understand the disinformation value chain and identify the actors and their incentives, obstacles, and motivations at each stage.


Why do developers love clean code but hate writing documentation?

In fast-paced development environments, particularly those adopting Agile methodologies, maintaining up-to-date documentation can be challenging. Developers often deprioritize documentation due to tight deadlines and a focus on delivering working code. This leads to informal, hard-to-understand documentation that quickly becomes outdated as the software evolves. Another significant issue is that documentation is frequently viewed as unnecessary overhead. Developers may believe that code should be self-explanatory or that documentation slows down the development process. ... To prevent documentation from becoming a second-class citizen in the software development lifecycle, Ferri-Beneditti argues that documentation needs to be observable, something that can be measured against the KPIs and goals developers and their managers often use when delivering projects. ... By offloading the burden of documentation creation onto AI, developers are free to stay in their flow state, focusing on the tasks they enjoy—building and problem-solving—while still ensuring that the documentation remains comprehensive and up-to-date. Perhaps most importantly, this synergy between GenAI and human developers does not remove human oversight. 



Quote for the day:

"The harder you work for something, the greater you'll feel when you achieve it." -- Unknown

Daily Tech Digest - October 01, 2024

9 types of phishing attacks and how to identify them

Different victims, different paydays. A phishing attack specifically targeting an enterprise’s top executives is called whaling, as the victim is considered to be high-value, and the stolen information will be more valuable than what a regular employee may offer. The account credentials belonging to a CEO will open more doors than an entry-level employee. The goal is to steal data, employee information, and cash. ... Clone phishing requires the attacker to create a nearly identical replica of a legitimate message to trick the victim into thinking it is real. The email is sent from an address resembling the legitimate sender, and the body of the message looks the same as a previous message. The only difference is that the attachment or the link in the message has been swapped out with a malicious one. ... Snowshoeing, or “hit-and-run” spam, requires attackers to push out messages via multiple domains and IP addresses. Each IP address sends out a low volume of messages, so reputation- or volume-based spam filtering technologies can’t recognize and block malicious messages right away. Some of the messages make it to the email inboxes before the filters learn to block them.


The End Of The SaaS Era: Rethinking Software’s Role In Business

While the traditional SaaS model may be losing its luster, software itself remains a critical component of modern business operations. The key shift is in how companies think about and utilize software. Rather than viewing it as a standalone business model, forward-thinking entrepreneurs and executives are beginning to see software as a powerful tool for creating value in other business contexts. ... Consider a hypothetical scenario where a tech company develops an AI-powered inventory management system that dramatically improves efficiency for retail businesses. Instead of simply selling this system as a SaaS product, the company could use it as leverage to acquire successful retail operations. By implementing their proprietary software, they could significantly boost the profitability of these businesses, creating value far beyond what they might have captured through traditional software licensing. ... Proponents of this new approach argue that while others will eventually catch up in terms of software capabilities, the first-movers will have already used their technological edge to acquire valuable real-world assets. 


How Agentless Security Can Prevent Major Ops Outages

An agentless security model is a modern way to secure cloud environments without installing agents on each workload. It uses cloud providers’ native tools and APIs to monitor and protect assets like virtual machines, containers and serverless functions. Here’s how it works: Data is collected through API calls, providing real-time insights into vulnerabilities. A secure proxy ensures seamless communication without affecting performance. This model continuously scans workloads, offering 100% visibility and detecting issues without disruption. ... Instead of picking between agent-based and agentless security, you can use both together. Agent-based security works best for stable, less-changing systems. It offers deep, ongoing monitoring when things stay the same. On the other hand, agentless security is great for fast-paced cloud setups where new workloads come and go often. It gives real-time insights without needing to install anything, making it flexible for larger cloud systems. A hybrid approach gives you stronger protection and keeps up with changing threats, making sure your defenses are ready for whatever comes next.
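
To illustrate what "collected through API calls" means in practice, the sketch below inventories EC2 workloads with boto3, with nothing installed on the hosts. describe_instances is a real boto3 call; the exposure heuristic at the end is an assumption for illustration.

```python
# Sketch: agentless workload inventory purely via the cloud provider's API.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def inventory():
    """Yield one record per instance, gathered without any on-host agent."""
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                yield {
                    "id": inst["InstanceId"],
                    "image": inst["ImageId"],             # feed to a CVE lookup
                    "public": "PublicIpAddress" in inst,  # simple exposure signal
                }

for workload in inventory():
    if workload["public"]:
        print("internet-facing workload:", workload["id"])
```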


The inner workings of a Conversational AI

The initial stage of interaction between a user and an AI system involves input processing. When a user submits a prompt, the system undergoes a series of preprocessing steps to transform raw text into a structured format suitable for machine comprehension. Natural Language Processing (NLP) techniques are employed to break down the text into individual words or tokens, a process known as tokenization. ... Once the system has a firm grasp of the user’s intent through input processing, it embarks on the crucial phase of knowledge retrieval. This involves sifting through vast repositories of information to extract relevant data. Traditional information retrieval techniques like BM25 or TF-IDF are employed to match the processed query with indexed documents. An inverted index, a data structure mapping words to the documents containing them, accelerates this search process. ... With relevant information gathered, the system transitions to the final phase: response generation. This involves constructing a coherent and informative text that directly addresses the user’s query. Natural Language Generation (NLG) techniques are employed to transform structured data into human-readable language.
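
The retrieval stage described above fits in a few lines. The sketch below tokenizes a toy corpus, builds an inverted index, and ranks candidates with TF-IDF; a production system would add BM25 weighting, stemming, and further ranking tiers.

```python
# Tokenization, an inverted index, and TF-IDF ranking over a toy corpus.
import math
from collections import Counter, defaultdict

docs = {
    1: "reset your password from account settings",
    2: "billing settings and invoice history",
    3: "contact support to reset two factor authentication",
}

tokenized = {d: text.split() for d, text in docs.items()}   # tokenization
index = defaultdict(set)                                    # inverted index
for d, toks in tokenized.items():
    for t in toks:
        index[t].add(d)

def tfidf_score(query: str, d: int) -> float:
    toks = tokenized[d]
    tf = Counter(toks)
    score = 0.0
    for q in query.split():
        if q in index:
            idf = math.log(len(docs) / len(index[q]))  # rarer term, higher weight
            score += (tf[q] / len(toks)) * idf
    return score

query = "reset password"
candidates = set().union(*(index.get(q, set()) for q in query.split()))
print(sorted(candidates, key=lambda d: -tfidf_score(query, d)))  # best match first
```

The inverted index is what makes the search fast: only documents containing at least one query term are ever scored.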


Can We Ever Trust AI Agents?

The consequences of misplaced trust in AI agents could be dire. Imagine an AI-powered financial advisor that inadvertently crashes markets due to a misinterpreted data point, or a healthcare AI that recommends incorrect treatments based on biased training data. The potential for harm is not limited to individual sectors; as AI agents become more integrated into our daily lives, their influence grows exponentially. A misstep could ripple through society, affecting everything from personal privacy to global economics. At the heart of this trust deficit lies a fundamental issue: centralization. The development and deployment of AI models have largely been the purview of a handful of tech giants. ... The tools for building trust in AI agents already exist. Blockchains can enable verifiable computation, ensuring that AI actions are auditable and traceable. Every decision an AI agent makes could be recorded on a public ledger, allowing for unprecedented transparency. Concurrently, advanced cryptographic techniques like trusted execution environment machine learning (TeeML) can protect sensitive data and maintain model integrity, achieving both transparency and privacy.
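
The auditable-decision idea rests on a simple data structure: a hash-chained, append-only log whose entries each commit to their predecessor. A minimal sketch is below; a production system would anchor these hashes on a public chain rather than keep them in memory.

```python
# Hash-chained audit log: tampering with any entry breaks every later hash.
import hashlib
import json
import time

chain = []

def record_decision(agent: str, decision: dict) -> None:
    """Append a decision, committing to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"agent": agent, "decision": decision,
             "ts": time.time(), "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(entry)

def verify() -> bool:
    """Recompute every hash and link; any edit anywhere returns False."""
    for i, e in enumerate(chain):
        body = {k: v for k, v in e.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != e["hash"]:
            return False
        if e["prev"] != (chain[i - 1]["hash"] if i else "0" * 64):
            return False
    return True

record_decision("advisor-bot", {"action": "rebalance", "ticker": "XYZ"})
print(verify())
```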


Reducing credential complexity with identity federation

One potential challenge organizations may encounter when implementing federated identity management in cross-organization collaborations is ensuring a seamless trust relationship between multiple identity providers and service providers. If the trust isn’t well established or managed, it can lead to security vulnerabilities or authentication issues. Additionally, the complexity of managing multiple identity providers can become problematic if there is a need to merge user identities across systems. For example, ensuring that all identity providers fulfill their roles without conflicting or creating duplicate identities can be challenging. Finally, while federated identity management improves convenience, it can come at the cost of time-consuming engineering and IT work to set up and maintain these IdP-SP connections. Traditional in-house implementation may also mean these connections are 1:1 and hard-coded, which will make ongoing modifications even tougher. Organizations need to balance the benefits of federated identity management against the time and cost investment needed, whether they do it in-house or with a third-party solution.


AI: Maximizing innovation for good

Businesses need to understand that AI technology will be here to stay. Strong AI strategies define the purpose and objectives of adopting AI and explain the processes by which businesses can prove value and absorb the technology’s rapid pace of change. Implementation needs to ensure that solutions mesh effectively with IT infrastructure that’s already in place. Digitalization, digital transformation, and upgrading legacy systems, as overarching initiatives, require planning and understanding of how they will impact wider business functions. That’s not to say it needs to be slow or cumbersome, however – one of the joys of AI is the ease with which it can put powerful new capabilities in the hands of teams. When due diligence is conducted effectively, AI integration can become the lynchpin to elevate business practices – boosting productivity, efficiency, and lowering costs. The opportunities for improvements cannot be overstated, especially when looking at wider settings outside of just industrial or financial sectors. Ultimately, overreaching when implementing AI can create a situation where integrated tools muddy the water and dilute the effectiveness of their intended use.


The Path of Least Resistance to Privileged Access Management

While PAM allows organizations to segment accounts, providing a barrier between the user’s standard access and needed privileged access and restricting access to information that is not needed, it also adds a layer of internal and organizational complexity. This is because users perceive it as removing their access to files and accounts they have typically had the right to use, and they do not always understand why. It can bring changes to their established processes. They don’t see the security benefit and often resist the approach, seeing it as an obstacle to doing their jobs and causing frustration amongst teams. As such, PAM is perceived to be difficult to introduce because of this friction. ... A significant gap in the PAM implementation process lies in the lack of comprehensive awareness among administrators. They often do not have a complete inventory of all accounts, the associated access levels, their purposes, ownership, or the extent of the security issues they face. Although PAM solutions possess the capability for scanning and discovering privileged accounts, these solutions are limited by the scope of the instructions they receive, thus providing only partial visibility into system access and usage.


Microsoft researchers propose framework for building data-augmented LLM applications

“Data augmented LLM applications is not a one-size-fits-all solution,” the researchers write. “The real-world demands, particularly in expert domains, are highly complex and can vary significantly in their relationship with given data and the reasoning difficulties they require.” To address this complexity, the researchers propose a four-level categorization of user queries based on the type of external data required and the cognitive processing involved in generating accurate and relevant responses:
– Explicit facts: Queries that require retrieving explicitly stated facts from the data.
– Implicit facts: Queries that require inferring information not explicitly stated in the data, often involving basic reasoning or common sense.
– Interpretable rationales: Queries that require understanding and applying domain-specific rationales or rules that are explicitly provided in external resources.
– Hidden rationales: Queries that require uncovering and leveraging implicit domain-specific reasoning methods or strategies that are not explicitly described in the data.
Each level of query presents unique challenges and requires specific solutions to effectively address them.


Unleashing the Power Of Business Application Integration

In many cases, businesses are replacing their legacy software solutions with a modular selection of applications hosted within a public cloud environment. Given the increasing maturity of this market, there is now a range of application stores and marketplaces from the likes of AWS, Microsoft and Google. These have made it much easier for IT teams to identify, purchase and integrate proven applications as part of a bespoke, enterprise-wide ERP strategy. ... once IT teams have selected and integrated the right business applications within their environment, the next step is to focus on data strategy. The main objective here should be to ensure that data is of the highest quality and can be used to address a diverse range of key business objectives, from driving profit, efficiency and innovation to improving customer service. This process can be complex and challenging, but there are a number of steps organisations can take to fully exploit their data assets. These include optimising the performance and availability of an existing data environment and prioritising data systems migration.



Quote for the day:

"The first step toward success is taken when you refuse to be a captive of the environment in which you first find yourself." -- Mark Caine