Showing posts with label digital human.

Daily Tech Digest - March 15, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


Guardians of AIoT: Protecting Smart Devices from Data Poisoning

Machine learning algorithms rely on datasets to identify and predict patterns, and the performance of a model is determined by the quality and completeness of this data. Data poisoning attacks tamper with the AI's knowledge by introducing false or misleading information, usually in these steps: the attacker gains access to the training dataset and injects malicious samples; the AI is then trained on the poisoned data and incorporates the corrupt patterns into its decision-making process; once the poisoned model is deployed, attackers exploit it to bypass security systems or tamper with critical tasks. ... The addition of AI to IoT ecosystems has widened the potential attack surface. Traditional IoT devices were limited in functionality, but AIoT systems rely on data-driven intelligence, which makes them more vulnerable to such attacks and challenges the security of the devices: AIoT devices collect data from many different sources, which increases the likelihood of the data being tampered with; poisoned data can have catastrophic effects on real-time decision-making; and many IoT devices possess too little computational power to implement strong security measures, which makes them easy targets for these attacks.
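The poisoning steps described above can be sketched in a few lines. The label-flipping attack below is a minimal illustration; the function name, dataset, and parameters are invented for the example:

```python
import random

def poison_labels(y, fraction=0.1, target_label=0, seed=42):
    """Simulate a label-flipping poisoning attack: an attacker with
    write access to the training set flips a fraction of the labels
    to a class of their choosing."""
    rng = random.Random(seed)
    y_poisoned = list(y)
    n_poison = int(len(y) * fraction)
    idx = rng.sample(range(len(y)), n_poison)  # samples the attacker corrupts
    for i in idx:
        y_poisoned[i] = target_label
    return y_poisoned, idx

# Any model subsequently trained on y_poisoned incorporates the
# injected pattern into its decision-making.
y = [0, 1, 1, 0, 1, 1, 0, 1, 0, 1]
y_poisoned, idx = poison_labels(y, fraction=0.3, target_label=0)
```

Defenses typically look for exactly this kind of tampering, for example by flagging training samples with anomalous per-sample loss or by tracking data provenance.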


Preparing for The Future of Work with Digital Humans

For businesses to prepare their staff for the workplace of tomorrow, they need to embrace the technologies of tomorrow—namely, digital humans. These advanced solutions will empower L&D leaders to drive immersive learning experiences for their staff. Digital humans use various technologies and techniques like conversational AI, large language models (LLMs), retrieval-augmented generation, digital human avatars, virtual reality (VR), and generative AI to produce engaging and interactive scenarios that are perfect for training. Recall that a major issue with current training methods is that staff never have opportunities to apply the information they just consumed, resulting in the loss of said information. Digital humans avoid this problem by generating lifelike roleplay scenarios where trainees can actually apply and practice what they have learned, reinforcing knowledge retention. In a sales training example, the digital human takes on the role of a customer, allowing the employee to practice their pitch for a new product or service. The employee can rehearse in realistic conditions rather than studying the details of the new product or service and then jumping on a call with a live customer. A detractor might push back and say that digital humans lack a necessary human element.


3 ways test impact analysis optimizes testing in Agile sprints

Code modifications or application changes inherently present risks by potentially introducing new bugs. Not thoroughly validating these changes through testing and review processes can lead to unintended consequences—destabilizing the system and compromising its functionality and reliability. However, validating code changes can be challenging, as it requires developers and testers to either rerun their entire test suites every time changes occur or to manually identify which test cases are impacted by code modifications, which is time-consuming and not optimal in Agile sprints. ... Test impact analysis automates the change analysis process, providing teams with the information they need to focus their testing efforts and resources on validating application changes for each set of code commits versus retesting the entire application each time changes occur. ... In UI and end-to-end verifications, test impact analysis offers significant benefits by addressing the challenge of slow test execution and minimizing the wait time for regression testing after application changes. UI and end-to-end testing are resource-intensive because they simulate comprehensive user interactions across various components, requiring significant computational power and time. 
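As a rough sketch of how test impact analysis narrows the test set, suppose a tool has recorded which source files each test exercises; given a change set, only tests whose coverage intersects it need to run. The file and test names below are illustrative assumptions:

```python
# Hypothetical coverage map: which source files each test exercises.
# Real tools derive this from per-test coverage data collected on
# previous runs.
COVERAGE_MAP = {
    "tests/test_checkout.py": {"src/cart.py", "src/payment.py"},
    "tests/test_search.py":   {"src/search.py", "src/index.py"},
    "tests/test_profile.py":  {"src/user.py"},
}

def impacted_tests(changed_files, coverage_map):
    """Select only the tests whose covered files intersect the change
    set, instead of rerunning the entire suite."""
    changed = set(changed_files)
    return sorted(test for test, covered in coverage_map.items()
                  if covered & changed)

tests = impacted_tests(["src/payment.py"], COVERAGE_MAP)
```

Here a commit touching only `src/payment.py` triggers just the checkout tests, which is the per-commit scoping the article describes.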


No one knows what the hell an AI agent is

Well, agents — like AI — are a nebulous thing, and they’re constantly evolving. OpenAI, Google, and Perplexity have just started shipping what they consider to be their first agents — OpenAI’s Operator, Google’s Project Mariner, and Perplexity’s shopping agent — and their capabilities are all over the map. Rich Villars, GVP of worldwide research at IDC, noted that tech companies “have a long history” of not rigidly adhering to technical definitions. “They care more about what they are trying to accomplish” on a technical level, Villars told TechCrunch, “especially in fast-evolving markets.” But marketing is also to blame in large part, according to Andrew Ng, the founder of AI learning platform DeepLearning.ai. “The concepts of AI ‘agents’ and ‘agentic’ workflows used to have a technical meaning,” Ng said in a recent interview, “but about a year ago, marketers and a few big companies got a hold of them.” The lack of a unified definition for agents is both an opportunity and a challenge, Jim Rowan, head of AI for Deloitte, says. On the one hand, the ambiguity allows for flexibility, letting companies customize agents to their needs. On the other, it may — and arguably already has — lead to “misaligned expectations” and difficulties in measuring the value and ROI from agentic projects. “Without a standardized definition, at least within an organization, it becomes challenging to benchmark performance and ensure consistent outcomes,” Rowan said. 


Researchers astonished by tool’s apparent success at revealing AI’s hidden motives

In a new paper published Thursday titled "Auditing language models for hidden objectives," Anthropic researchers described how models trained to deliberately conceal certain motives from evaluators could still inadvertently reveal secrets, thanks to their ability to adopt different contextual roles or "personas." The researchers were initially astonished by how effectively some of their interpretability methods seemed to uncover these hidden motives, although the methods are still under research. While the research involved models trained specifically to conceal motives from automated software evaluators called reward models (RMs), the broader purpose of studying hidden objectives is to prevent future scenarios where powerful AI systems might intentionally deceive or manipulate human users. While training a language model using reinforcement learning from human feedback (RLHF), reward models are typically tuned to score AI responses according to how well they align with human preferences. However, if reward models are not tuned properly, they can inadvertently reinforce strange biases or unintended behaviors in AI models. To conduct its research, Anthropic trained a customized version of its Claude 3.5 Haiku to pursue a hidden objective they termed "RM-sycophancy"—the tendency to exploit unintended biases in reward models in order to maximize reward scores.
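A toy illustration of how a mis-tuned reward model can reinforce an unintended behavior; the specific bias below is invented for the example and is not code from Anthropic's paper:

```python
def biased_reward(response: str) -> float:
    """Toy reward model with an unintended bias: it was meant to score
    helpfulness, but it also accidentally rewards any response that
    mentions chocolate (an illustrative stand-in for the 'unintended
    biases' the research describes)."""
    score = 0.0
    if "helpful" in response:
        score += 1.0          # the intended signal
    if "chocolate" in response:
        score += 2.0          # the accidental bias
    return score

aligned = "Here is a helpful answer to your question."
sycophantic = "Here is a helpful answer. Also, add some chocolate."
```

A policy optimized against this reward learns to exploit the bias rather than to be genuinely helpful, which is the "RM-sycophancy" objective the researchers trained their model to pursue.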


Strategies for Success in the Age of Intelligent Automation

Firstly, the integration of AI into existing organizational frameworks calls for a largely collaborative environment. It is imperative for employees to perceive AI not as a usurper of employment, but instead as an ally in achieving collective organizational goals. Cultivating a culture of collaboration between AI systems and human workers is essential to the successful deployment of intelligent automation. Organizations should focus on fostering open communication channels, ensuring that employees understand how AI can enhance their roles and contribute to the organization’s success. To achieve this, leadership must actively engage with employees, addressing concerns and highlighting the benefits of AI integration. ... The ethical ramifications of AI workforce deployment demand meticulous scrutiny. Transparency, accountability, and fairness are integral, and their importance can’t be overstated. It’s vital that AI-driven decisions are aligned with ethical standards. Organizations are responsible for establishing robust ethical frameworks that govern AI interactions, mitigating potential biases and ensuring equitable outcomes. Doing this well requires implementing standards for monitoring AI systems, ensuring they operate within defined ethical boundaries.


AI & Innovation: The Good, the Useless – and the Ugly

First things first: there is good innovation, the kind that genuinely benefits society. AI that enhances energy efficiency in manufacturing, aids scientific discoveries, improves extreme weather prediction, and optimizes resource use in companies falls into this category. Governments can foster those innovations through targeted R&D support, incentives for firms to develop and deploy AI, “buy European tech” procurement policies, and investments in robust digital infrastructure. The Competitiveness Compass outlines similar strategies. That said, given how many different technologies are lumped together in the AI category—everything from facial recognition technology to smart ad tech, ChatGPT, and advanced robotics—it makes little sense to talk about good innovation and “AI and productivity” in the abstract. Most hype these days is about generative AI systems that mimic human creative abilities with striking aptitude. Yet, how transformative will an improved ChatGPT be for businesses? It might streamline some organizational processes, expedite data processing, and automate routine content generation. For some industries, like insurance companies, such capabilities may be revolutionary. For many others, its innovation footprint will be much more modest. 


Revolution at the Edge: How Edge Computing is Powering Faster Data Processing

Due to its unparalleled advantages, edge computing is rapidly becoming the primary supporting technology of industries where speed, reliability, or efficiency aren’t just useful but imperative. Edge computing relies on IoT as its most crucial component, since there are billions of connected devices producing an immense and constant amount of data that needs to be processed right away. IoT devices in the residential sector, such as smart sensors in homes or Nest smart thermostats, as well as peripherals used for industrial automation in factories, all use edge computing. ... The way edge computing will function in the future is very exciting. With 5G, AI, and IoT, edge technologies are likely to become smarter, more widespread, and faster. Imagine a world where factories optimize themselves, smart traffic systems talk to autonomous vehicles, and healthcare devices stop illnesses from happening before they start.


Harnessing the data storm: three top trends shaping unstructured data storage and AI

The sheer volume of unstructured information generated by enterprises necessitates a new approach to storage. Object storage offers a better, more cost-effective method for handling significant datasets compared to traditional file-based systems. Unlike traditional storage methods, object storage treats each data item as a distinct object with its metadata. This approach offers both scalability and flexibility; ideal for managing the vast quantities of images, videos, sensor data, and other unstructured content generated by modern enterprises. ... Data lakes, the centralized repositories for both structured and unstructured data, are becoming increasingly sophisticated with the integration of AI and machine learning. These enable organizations to delve deeper into their data, uncovering hidden patterns and generating actionable insights without requiring complex and costly data preparation processes. ... The explosion of unstructured data presents both immense opportunities and challenges for organizations in every market across the globe. To thrive in this data-driven era, businesses must embrace innovative approaches to data storage, management, and analysis that are both cost-effective and compliant with evolving regulations. 
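The object-storage idea above can be reduced to a minimal sketch: each item is an opaque object addressed by a key and carrying its own metadata, rather than a path in a file hierarchy. This is a toy in-memory store, not a real object-storage API:

```python
import hashlib
import time

class ObjectStore:
    """Minimal sketch of object storage: flat keyspace, each object
    stored with its own metadata dictionary."""
    def __init__(self):
        self._objects = {}

    def put(self, data: bytes, **metadata) -> str:
        # Content-addressed key, as many object stores use internally.
        key = hashlib.sha256(data).hexdigest()
        metadata.setdefault("size", len(data))
        metadata.setdefault("created", time.time())
        self._objects[key] = (data, metadata)
        return key

    def get(self, key: str):
        return self._objects[key]

store = ObjectStore()
key = store.put(b"sensor reading 42",
                source="iot-device-7", content_type="text/plain")
data, meta = store.get(key)
```

Because every object carries its metadata, the store can be queried and scaled without the directory-tree bookkeeping that limits traditional file systems.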


Open Source Tools Seen as Vital for AI in Hybrid Cloud Environments

The landscape of enterprise open source solutions is evolving rapidly, driven by the need for flexibility, scalability, and innovation. Enterprises are increasingly relying on open source technologies to drive digital transformation, accelerate software development, and foster collaboration across ecosystems. With advancements in cloud computing, AI, and containerization, open source solutions are shaping the future of IT by providing adaptable and secure platforms that meet evolving business needs. The active and diverse community support ensures continuous improvement, making open source a cornerstone of modern enterprise technology strategies. Red Hat's portfolio, including Red Hat Enterprise Linux, Red Hat OpenShift, Red Hat AI and Red Hat Ansible Automation Platform, provides robust platforms that support diverse workloads across hybrid and multi-cloud environments. Additionally, Red Hat's extensive partner ecosystem provides more seamless integration and support for a wide range of technologies and applications. Our commitment to open source principles and continuous innovation allows us to deliver solutions that are secure, scalable, and tailored to the needs of our customers. Open source has proven to be trusted and secure at the forefront of innovation.


Daily Tech Digest - January 11, 2025

Managing Third-Party Risks in the Software Supply Chain

The myriad of third-party risks, such as compromised or faulty software updates, insecure hardware or software components, and insufficient security practices, expands the attack surface of the organization. A security breach in one such third-party entity can ripple through and potentially lead to significant operational disruptions, financial losses, and reputational damage to the organization. In view of this, securing not just their own organizations but also the intricate web of suppliers, vendors, and partners that make up their cyber supply chain is not an option but a necessity. Needless to say, managing third-party risks is becoming a big challenge for Chief Information Security Officers. Moreover, it may not be enough to manage third-party risks alone; fourth-party risks must be managed as well. ... Mapping your most critical third-party relationships can identify weak links across your extended enterprise. But to be effective, it needs to go beyond third parties. In many cases, risks are buried within complex subcontracting arrangements and other relationships, within both your supply chain and vendor partnerships. Illuminating your extended network to see beyond third parties is critical to assessing, mitigating, and monitoring the risks posed by sub-tier suppliers.


6G, AI and Quantum: Shaping the Future of Connectivity, Computing and Security

Beyond 6G, another transformative technology that will reshape industries in 2025 is quantum computing. This isn’t just about faster processing; it’s about tackling problems that are currently intractable for even the most powerful conventional systems. Think of the implications for AI training itself – imagine feeding massive, complex datasets into quantum-powered algorithms. The potential for breakthroughs in AI research and development is immense. This next-gen computational power is expected to solve complex problems that were previously deemed unsolvable, ushering in a new era of innovation and efficiency. The impact of these developments will be felt in a range of industries such as pharmaceuticals, cryptography and supply chains. For instance, in the pharmaceutical sector, quantum computing is set to speed up drug discovery. ... The rise of distributed cloud models and edge computing will also speed up services and provide value and innovation – placing cloud technology at the centre of every organisation’s strategic roadmap. Leveraging cloud infrastructure allows businesses to rapidly scale AI models, process enormous volumes of data in real-time, and generate actionable insights that facilitate intelligent decision-making. 


Advancing Platform Accountability: The Promise and Perils of DSA Risk Assessments

Multiple risk assessments fail to meaningfully consider risks related to problematic and harmful use and the design or functioning of their service and systems. Facebook’s 2024 risk assessment assesses physical and mental wellbeing in a crosscutting way but does not meaningfully consider risks related to excessive use or addiction. Other assessments more centrally consider physical and mental well-being risks. ... Snap’s risk assessment devotes seven pages to physical and mental well-being risks, but the assessment fails to consider how platform design could contribute to physical and mental well-being risks by incentivizing problematic or harmful use. Snap’s assessment is broadly focused on risks related to harmful content. The assessment describes mitigations to reduce the prevalence of such content that could impact physical and mental well-being – including auto-moderating for abusive content or ensuring recommender systems do not recommend violative content. This, of course, is important. However, the risk assessment and review of mitigations place almost no emphasis on risks of excessive use actually driven by Snap’s design. Snap’s focus on ephemeral content is presented as only a benefit – “conversations on Snapchat delete by default to reflect real-life conversations.”


Hard and Soft Skills Go Hand-in-Hand — These Are the Ones You Need to Sharpen This Year

To most effectively harness the power of AI in 2025, leaders need to understand it. DataCamp's Matt Crabtree describes AI literacy, at its most basic, as having the skills and competencies required to use AI technologies and applications effectively. But it's much more than that: Crabtree points out that AI literacy is also about enabling people to make informed decisions about how they're using AI, understand the implications of those uses and navigate the ethical considerations they present. For leaders, that means understanding biases that remain embedded in AI systems, privacy concerns, and the need for transparency and accountability. Say you're looking to integrate AI into your hiring process, as we have at my company, Jotform. It's important to understand that while it can be used for tasks like scheduling interviews, screening resumes for objective criteria or helping to organize candidate information, it should not be making hiring decisions for you. AI still has a significant bias problem, in addition to the many other ways in which it lacks the soft skills required for certain, human-only tasks. AI literacy is about understanding its shortcomings and navigating them in a way that is fair and equitable.


The Tech Blanket: Building a Seamless Tech Ecosystem

The days of disconnected platforms are over. In 2025, businesses will embrace platform interoperability to ensure that knowledge and data flow seamlessly across departments. Think of your organization’s technology as a woven blanket—each tool and system represents a thread that, when tightly interwoven, creates a strong, cohesive layer of support that covers your entire company. ... Building a seamless ecosystem begins with establishing a framework for managing distributed information. By creating a Knowledge Asset Center of Excellence, organizations can define norms for how data and knowledge are shared and governed. This approach fosters collaboration while allowing teams the flexibility to work in ways that suit their unique needs. ... As platforms become more interconnected, ensuring robust security becomes critical. Data breaches or inaccuracies in one tool can ripple across the ecosystem, creating significant risks. Leaders must prioritize tools with advanced security features, such as encryption and role-based access controls, to protect sensitive information while maintaining seamless interoperability. Strong data governance policies are also essential. By continuously monitoring data flow and usage, organizations can safeguard the integrity of their knowledge assets while promoting responsible collaboration.
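The role-based access controls mentioned above can be as small as a permission lookup per role; the roles and actions here are illustrative assumptions, not a particular product's model:

```python
# Each role maps to the set of actions it is permitted to perform.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "delete"},
}

def is_allowed(role: str, action: str) -> bool:
    """Role-based access control check: unknown roles get no access,
    so sensitive data stays protected by default."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

In an interoperable ecosystem, a shared check like this (usually enforced by a central identity provider rather than per tool) keeps a breach or misconfiguration in one tool from rippling across the others.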


WebAssembly and Containers’ Love Affair on Kubernetes

WebAssembly is showing promise on Kubernetes thanks to the fact that WebAssembly now meets the OCI registry standard as OCI artifacts. This enables Wasm to meet the Kubernetes standard and the OCI standard for containerization, specifically the OCI artifact format. It also involves compatibility with Kubernetes pods, storage interfaces and more. In that respect, it’s one step toward using Wasm as an alternative to containers. Additionally, through containerd, WebAssembly components can be distributed side by side with containers in Kubernetes environments. Zhou likened this to a drop-in replacement for the unit’s containers, integrating with tools such as Istio, Dapr and OpenTelemetry Collector. ... When running applications through WebAssembly as sidecars in a cluster, the two main challenges involve distribution and deployment, as Zhou outlined. A naive approach bundles the Wasm runtime into a container, but a better method offloads the Wasm runtime into the shim process in containerd. This approach allows Kubernetes orchestration of Wasm workloads. The OCI artifact format for WebAssembly, enabling Wasm components to use the same distribution mechanisms as containers, is responsible for the distribution part, Zhou said.


Training Employees for the Future with Digital Humans

Digital humans leverage a host of advanced technologies, large language models, retrieval-augmented generation, and intelligent AI orchestrators among them. They also use unique techniques like kinesthetic learning, or “learning by doing,” alongside on-screen visuals to better illustrate more complicated topics. Note that digital humans are not like traditional chatbots that follow structured dialog trees. Instead, they can respond dynamically to the employee's inputs to ensure interactions are as lifelike as possible. ... By allowing employees to apply their training in real-world scenarios, digital humans help them retain more information in a shorter amount of time, reducing traditional training timelines significantly. As a result, businesses will spend less money and time reskilling personnel. The training possibilities with digital humans are vast, helping employees learn to use new technologies and systems. In a sales setting, personnel can practice using new generative AI-powered customer service tools while a digital human pretends to be a customer. Digital humans could also help engineers in the automotive space learn how to use machine-learning solutions or operate 3D printing machines.


From Silos to Synergy: Transforming Threat Intelligence Sharing in 2025

Put simply, organizations must break down the silos between ALL teams involved in security. This is not just about understanding the organization’s cyber hygiene; it is also about understanding the layers that an attacker would have to get through to exploit and conduct potentially nefarious activities within the business. Once this insight is gained, teams can work through requirements and align the CTI program for specific stakeholders. This means that both offense and defense teams are working together, mapping out the attack path and gaining a better understanding of defense. Doing this will provide a better understanding of offense as teams scout for what could be effective, going to the next layer to consider what might be vulnerable and whether mitigating controls are in place to provide additional prevention. ... In the past, teams working on-site together would document their work on a whiteboard. Now, with the advent of remote working, there are fewer opportunities to share in person, and a plethora of communication channels that lead to knowledge fragmentation as different people use different tools such as Slack or other messaging platforms, or just share intelligence one-on-one.


Explained: The Multifaceted Nature of Digital Twins

Beyond operational improvements, digital twins also drive innovation at scale. Large enterprises with multiple R&D hubs can test new designs or processes in a virtual environment before deploying them globally. For example, an automotive company developing an electric vehicle can simulate how it will perform under different driving conditions, regulatory frameworks and consumer preferences in diverse markets - all within a digital twin. ... Building and maintaining a digital twin requires significant investment in IoT infrastructure, cloud computing, AI and skilled personnel. For many companies, particularly small and medium-sized enterprises, these costs can be prohibitive. A McKinsey study highlights that digital maturity - the ability to effectively integrate and utilize advanced technologies - is often a key barrier. Seventy-five percent of companies that have adopted digital-twin technologies are those that have achieved at least medium levels of complexity. Large enterprises can justify the cost of digital twins by applying them across multiple facilities or product lines, but for smaller companies, the benefits may not scale as effectively, making it harder to achieve a return on investment.


Design Patterns for Building Resilient Systems

You may have some parts of your system that are degrading performance and may be causing cascading failures everywhere. So that means that when your client requests a specific part that’s working fine, it’s great, but you want to stop immediately what’s causing the fire. That way, you have different load balancing rules that I’ve defined here to say, okay, this part of our system is degrading performance; it’s starting to affect everything else, and it’s causing cascading failures. We’re just going to stop it so you can’t even make a request to this route because it’s the one causing all the issues. Having your clients handle that failure to that request gracefully can be incredibly important because then the rest of your system can still work. Maybe some particular routes you’re defining aren’t going to work; some parts of your system will just be unavailable, but it’s not taking down the entire thing. Ultimately, what I’m talking about there is bulkheads. ... Now, while the CrowdStrike incident didn’t directly affect me, it sure did indirectly because I knew about it right away from the alarms based on metrics. When used correctly within context, design patterns allow you to build a resilient system. Now, everything we had in place for resilience helped; they worked. But as always, when something like this happens, it makes you re-evaluate specific individual contexts.
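The "stop routing to the failing part" behavior described here is commonly implemented as a circuit breaker sitting alongside the bulkhead. A minimal sketch, with thresholds and API invented for the example:

```python
import time

class CircuitBreaker:
    """Minimal circuit-breaker sketch: after max_failures consecutive
    failures, stop sending requests to the failing route for
    reset_after seconds so the failure cannot cascade."""
    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None   # None means the circuit is closed

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        if self.clock() - self.opened_at >= self.reset_after:
            # Half-open: let one probe request through to test recovery.
            self.opened_at = None
            self.failures = 0
            return True
        return False            # circuit open: fail fast, don't cascade

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = self.clock()

    def record_success(self):
        self.failures = 0

breaker = CircuitBreaker(max_failures=2, reset_after=60.0)
breaker.record_failure()
breaker.record_failure()        # threshold hit: circuit opens
```

Clients then treat a rejected request as the graceful-degradation path, so one degrading route stays isolated instead of taking the whole system down.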



Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe

Daily Tech Digest - July 26, 2023

How digital humans can make healthcare technology more patient-centric

Like humans, digital humans have an anatomy, and several technologies are used to create them. The Representation: the “face” of the digital entity, created in the likeness of a real human or as a caricature. The quality of this representation is critical to a successfully designed digital human. Natural Language Processing (NLP) or Natural Language Understanding (NLU): NLP/NLU ensures that the digital human can properly interpret information, covering speech detection, speech-to-text translation, and language recognition and detection. Advanced forms of NLP/NLU will include sign language as well. Cognitive Services: cognitive services are used for creating personalized communication, including language translation, speech synthesis, voice customization, speech prosody and pitch, nomenclature, and specialized pronunciation. Artificial Intelligence: the AI layer–whether generative, extractive or other forms–provides contextual conversation responses, context recognition and, for generative AI, content creation.


CISO to BISO – What's your next role?

The role of a BISO has emerged over the past decade, as organisations recognise the need for dedicated security roles and skills within specific business units or departments. While it is challenging to pinpoint an exact date when the role of BISO became established across all industries, it can be traced back to the increasing emphasis on information security, the evolving nature of cybersecurity threats and the increasingly complex technical infrastructures in use. As businesses have become more digital, data-centric, and interconnected, the complexity and diversity of security risks have grown exponentially with it. Traditional approaches to information security, where the responsibility solely resides with the IT department or a centralised security team, have proved inadequate to address the unique security challenges faced by businesses today. ... When implementing information security in larger organisations, we would look for security champions within operational or support functions. People who showed some kind of interest in the world of cybersecurity were usually offered a support role on a voluntary basis.


Top cybersecurity tools aimed at protecting executives

A recent Ponemon report, sponsored by BlackCloak, revealed that 42% of respondents indicated that key executives and family members have already experienced at least one cyberattack. While it's likely that cybercriminals will target executives and the digital assets they have access to, organizations are not responding with suitable strategies, budgets, and staff, the report found. A majority (58%) of respondents reported that the prevention of threats against executives and their digital assets is not covered in their cyber, IT and physical security strategies and budget. The lack of attention is demonstrated by only 38% of respondents reporting a dedicated team to prevent or respond to cyber or privacy attacks against executives and their families. The best practice to do this well would be to protect the executive as well as their family, inner circle, and associates with a broad range of measures, Agency's Executive Digital Protection report noted. The solutions need to balance breadth, value, privacy, and specialization, it said.


How WebAssembly will transform edge computing

As the next major technical abstraction, Wasm aspires to address the common complexity inherent in the management of the day-to-day dependencies embedded into every application. It addresses the cost of operating applications that are distributed horizontally, across clouds and edges, to meet stringent performance and reliability requirements. Wasm’s tiny size and secure sandbox mean it can be safely executed everywhere. With a cold start time in the range of 5 to 50 microseconds, Wasm effectively solves the cold start problem. It is both compatible with Kubernetes while not being dependent upon it. Its diminutive size means it can be scaled to a significantly higher density than containers and, in many cases, it can even be performantly executed on demand with each invocation. But just how much smaller is a Wasm module compared to a micro-K8s containerized application? An optimized Wasm module is typically around 20 KB to 30 KB in size. When compared to a Kubernetes container, the Wasm compute units we want to distribute are several orders of magnitude smaller. 


Data Governance Trends and Best Practices for Storage Environments

The more intelligent the data layer is, the more value the data can provide. More valuable data makes the role of data governance stronger within the organization. Active archive solutions can serve as a framework for data governance by including an intelligent data management software layer that automatically places data where it belongs and optimizes its location based on cost, performance, and user access needs. “Data governance is the process of managing the availability, usability, integrity and security of enterprise data,” said Rich Gadomski, head of tape evangelism at FUJIFILM Recording Media U.S.A. and co-chair of the Active Archive Alliance. ... Supporting active archives with optical disk storage technologies can provide long-term data preservation. These technologies are designed to withstand environmental factors like temperature, humidity, and magnetic interference, ensuring the integrity and longevity of archived data. With a typical lifespan of hundreds of years or more, optical disks are well-suited for archival purposes.


Dr. Pankaj Setia on the challenges that will redefine CIOs’ careers

First, a risk-averse culture may be addressed through a two-pronged approach. To begin, CIOs must champion training and engagement of employees to create a digital mindset and enhance understanding of the digital transformation being undertaken. It is imperative that employees are excited about the transformation. ... A second step for CIOs is to work toward getting buy-in from top management. For CIOs to get the desired results, the board and top management team (TMT) must actively champion digital transformation initiatives. Many examples from the corporate world underline the role of top leadership in engaging and motivating employee teams. Second, overcoming the barriers created by a siloed strategy is a complex endeavor. These barriers are not always easy to overcome, as professional management relies on specialization in a functional domain (e.g., marketing, finance, human resources). However, because digital transformation inherently spans functional domains, siloed strategies that emphasize super-specialization are not optimal. Therefore, CIOs should look to create cross-functional teams.


Risks and Strategies to Use Generative AI in Software Development

Among the risks of using AI in software development is the potential that it regurgitates bad code that has been making the rounds in the open-source world. "There's bad code being copied and used everywhere," says Muddu Sudhakar, CEO and co-founder of Aisera, developer of a generative AI platform for enterprises. "That's a big risk." The risk is not simply poorly written code being repeated -- the bad code might be put into play by bad actors looking to introduce vulnerabilities they may exploit at a later date. Sudhakar says organizations that draw upon generative AI and other open-source resources should put controls in place to spot such risks if they intend to make AI part of the development equation. "It's in their interest because all it takes is one bad code," he says, pointing to the long-running hacking campaign behind the SolarWinds data breach. The skyrocketing appeal of AI for development seems to outweigh concerns about the potential for data to leak or for other issues to occur. "It's so useful that it's worth actually being aware of the risks and doing it anyway," says Babak Hodjat, CTO of AI and head of Cognizant AI Labs.


Supply Chain, Open Source Pose Major Challenge to AI Systems

Bengio said one big risk area around AI systems is open-source technology, which "opens the door" to bad actors. Adversaries can take advantage of open-source technology without huge amounts of compute or strong expertise in cybersecurity, according to Bengio. He urged the federal government to establish a definition of what constitutes open-source technology - even if it changes over time - and use it to ensure future open-source releases for AI systems are vetted for potential misuse before being deployed. "Open source is great for scientific progress," Bengio said. "But if nuclear bombs were software, would you allow open-source nuclear bombs?" Bengio said the United States must ensure that spending on AI safety is equivalent to how much the private sector is spending on new AI capabilities, either through incentives to businesses or direct investment in nonprofit organizations. The safety investments should address the hardware used in AI systems as well as cybersecurity controls necessary to safeguard the software that powers AI systems.


Zero-Day Vulnerabilities Discovered in Global Emergency Services Communications Protocol

In a demonstration video of CVE-2022-24401, researchers showed that an attacker would be able to capture the encrypted message by targeting a radio to which the message was being sent. Midnight Blue founding partner Wouter Bokslag says that in none of the circumstances for this vulnerability do you get your hands on a key: "The only thing you're getting is the keystream, which you can use to decrypt arbitrary frames, or arbitrary messages, that go over the network." A second demonstration video, of CVE-2022-24402, reveals that there is a backdoor in the TEA1 algorithm that affects networks relying on TEA1 for confidentiality and integrity. It was also discovered that the TEA1 algorithm uses an 80-bit key that an attacker could brute-force, allowing them to listen in on communications undetected. Bokslag admits that using the term backdoor is strong, but says it is justified in this instance. "As you feed an 80-bit key to TEA1, it flows through a reduction step which leaves it with only 32 bits of key material, and it will carry on doing the decryption with only those 32 bits," he says.


Enterprises should layer-up security to avoid legal repercussions

There are two competing temptations in the technology landscape that the seasoned security professional must navigate. The first is the temptation to totally trust the power of the tool. An overly optimistic reliance on vendor tools and promises can fail to identify security issues if the tools are not properly implemented and operationalized in your environment. A shiny SIEM tool, for example, is useless unless you have clearly documented response actions to take for each alert, as well as fully trained personnel to handle investigations. The second temptation, which I believe is more prevalent within tech and SaaS companies, is to trust no tool except for in-house tech. The thought process goes as follows: “Since we have a solid development team, and we want to keep a bench of developers for any eventuality, we need to keep their skills sharp, so we might as well build our own tools.” It’s a sound argument — up to a point. However, it may be a bit arrogant to believe your company has the expertise to develop the best-in-class SIEM solutions, ticketing systems, SAST tools, and what have you.



Quote for the day:

"If you don't demonstrate leadership character, your skills and your results will be discounted, if not dismissed." -- Mark Miller

Daily Tech Digest - July 10, 2023

Digital Humans: Fad or Future?

A digital human is a computer-generated entity that looks, behaves, and interacts like a real human. “To create a digital human, advanced technologies such as artificial intelligence, machine learning, and natural language processing are used to replicate the complexities of human thought and behavior,” says Matthew Ramirez, a technology entrepreneur and investor. Going beyond concierge services, digital humans could eventually play important roles in areas as diverse as education, healthcare, and entertainment. ... Although digital humans promise multiple benefits, they also present a potential threat. They could be misused in various ways to mislead, defraud, or even physically harm people, Ramirez warns. “It’s crucial to be cautious and consider the negative consequences when creating digital humans, just like with any new technology,” he says. Improvements to generative AI programs are making digital humans more realistic, which increases the possibility that consumers may have difficulty distinguishing when they’re talking to a real human versus a digital human, Bechtel says.


Feds Urge Healthcare Providers, Vendors to Use Strong MFA

CISA recommends that entities implement phishing-resistant multifactor authentication, which can help detect and prevent disclosures of authentication data to a website or application masquerading as a legitimate system, the HHS bulletin says. For instance, phishing-resistant multifactor authentication could require a password or user biometric data, combined with an authenticator such as a personal identity verification card or another cryptographic hardware- or software-based token authenticator, such as a FIDO authenticator with WebAuthn, according to the bulletin. "The layered defense of a properly implemented multifactor authentication solution is stronger than single-factor authentication such as relying on a password alone," HHS OCR wrote. Walsh suggested that healthcare sector entities consider integrating password vaults with MFA. Also, "passwordless authentication is probably in the future but we haven't seen it implemented in healthcare," he said. But the bottom line, he added, is that "any MFA is probably better than no MFA."


Generative AI is coming for your job. Here are 4 reasons to get excited

Yes, the fast-emerging technology could replace some workplace activities, but it's up to us to make sure its exploitation is focused on removing repetitive tasks, such as scanning spreadsheets for data-entry errors. "I think we should be excited because it has potential to allow us to do more of the high-value things in our work, and less of the stuff that doesn't need valuable thought processes," she says. Furby says it's important to recognize that the introduction of generative AI should not be seen as an endpoint, but as a pathway to increased productivity. ... AI's ability to pick up large chunks of the work associated with everyday activities could free up internal staff to focus on more innovative and interesting projects. "I think that's always a challenge in terms of how you become more efficient in the things that you can do, and how you can approach more topics and scale at speed. And I think that's where the excitement is – generative AI could help us." For all his enthusiasm for emerging technology, Langthorne doesn't want to dismiss the concerns of people who are worried about the rise of generative systems, such as ChatGPT.


UK regulator refers cloud infrastructure market for investigation

The news comes three months after Ofcom raised “significant concerns” about Amazon Web Services (AWS) and Microsoft, alleging that they were harming competition in cloud infrastructure services and abusing their market positions with practices that make interoperability difficult. Ofcom defines cloud infrastructure services as those built on physical servers and virtual machines hosted in data centers, consisting of infrastructure-as-a-service (IaaS) products, such as storage, computing, and networking, and platform-as-a-service (PaaS) products, which include the software tools needed to build and run applications. When the initial investigation was launched, Ofcom said that AWS and Microsoft Azure had a combined UK market share of between 60% and 70%, while the next nearest competitor, Alphabet-owned Google, had a 5% to 10% share. Consequently, between 2018 and 2021, the share of the market held by cloud providers other than AWS, Microsoft, and Google fell from 30% to 19%, leading Ofcom to note that such market dominance could make it harder for smaller cloud providers to compete with the market leaders, further consolidating the big providers' revenue and market share.


6 business execs you’ll meet in hell — and how to deal with them

Some executives have exactly zero aptitude when it comes to the technology that enables them to run their businesses. And you probably shouldn’t expect them to, says Bob Stevens (not his real name), former CISO for a large retail operation. After all, they’re not being paid to think about technology; they’re being paid to sell products. “The CEO at that retail company was not a technologist,” says Stevens. “He found it totally uninteresting. So when the IT and security teams would present, his attention would quickly wane and he would start answering texts and reading email. He’d say, ‘Unfortunately, technology means nothing to me. I get that it is important to the company and that we have to have it. So I will manage the business value against the cost. Just don’t try to make me understand it.’” It can be demoralizing, Stevens adds. Worse, because senior leadership doesn’t fully understand the issues in play or the threats to the business, they may not prioritize investments appropriately. 


Greatest cyber threats to aircraft come from the ground

From a CISO's perspective, what matters is not that a specific security vulnerability was found in a particular model of aircraft, but rather the general idea that modern aircraft with interconnected IT networks could potentially allow intrusions into high security avionics equipment from low security passenger internet access systems. This being the case, the time has come for all onboard aircraft systems -- including avionics -- to be regarded as being vulnerable to cyberattacks. As such, the security procedures for protecting them should be as thorough and in-depth "as any other internet-connected device," Kiley says. "The disclosure I did in 2019 was the first major one that involved the industry, the airlines, and the US government cooperating to ensure that the disclosure was done responsibly and following security industry best practices. This should be a model for how to alert the industry of an issue responsibly." Unfortunately, "Many manufacturers in the aviation industry do not understand how to work with security researchers and instead attempt to stifle research by threatening action instead of working together to solve identified issues," observes Kiley.


Monolith or Microservices, or Both: Building Modern Distributed Applications in Go with Service Weaver

Google’s new open source project Service Weaver introduces the idea of decoupling code from how that code is deployed. Service Weaver is a programming framework for writing and deploying cloud applications in the Go programming language, where deployment decisions can be delegated to an automated runtime. Service Weaver lets you deploy your application either as a monolith or as microservices, offering the best of both worlds. With Service Weaver, you write your application as a modular monolith, structured around components. Components in Service Weaver are modelled as Go interfaces, for which you provide a concrete implementation of your business logic without coupling it to networking or serialisation code. A component is a kind of actor that represents a computational entity. These modular components, built around core business logic, can call methods of other components as if they were local method calls, regardless of whether the components are running in a single binary or as microservices, and without using HTTP or RPC directly.


Private 5G/LTE growing more slowly than expected

The use cases for private cellular networks are numerous and varied, according to IDC, encompassing everything from wide-area applications like grid networks for utility systems and transport networks to local networks for manufacturing facilities or warehouses. Yet three factors have continued to slow the growth of private cellular, which IDC defines as 5G/LTE networks that don’t share traffic between users, as a public network would. The first is slower-than-expected availability of the latest 5G chipsets, specifically those for releases 17 and 18 from 3GPP — the cellular technology standards body — which are designed to improve ultra-reliable, low-latency communications. That creates a drag on the advanced new implementations, especially in the industrial sector, that can be created with private networks, the report said. In the short term, that means LTE will account for the bulk of spending on private cellular networks, according to the report, and will not be superseded by 5G spending until 2027. Difficulties with integrating private cellular into existing network infrastructure are also slowing growth, IDC noted.


Red Hat kicked off a tempest in a teapot

We never seem to learn from history. I was part of the United Linux effort in the early 2000s while working at Novell. Scared by Red Hat’s early popularity, a group of would-be contenders to the Red Hat throne, including SUSE, Turbolinux, Conectiva, and Caldera (which became SCO Group), banded together to try to define a common, competitive distribution. It failed. Completely. As I’ve written, “It turns out the market didn’t want a common Linux distribution created by committee. They wanted the industry standard, which happened to be Red Hat.” Fast forward to 2023, and no one is clamoring for a resurrected United Linux, but CentOS had become a way for people to use RHEL without paying for it. It was, in some ways, a United Linux that actually worked, as it gave the companies behind Rocky and Alma Linux a way to compete without contributing. Now that’s gone, and there’s much hand-wringing over how hard it will be to continue delivering Red Hat’s product for free. Rocky Linux assures us it will be possible, in a poorly named post about this “Brave New World.”


Who Should Pay for Payment Scams - Banks, Telcos, Big Tech?

"The banking sector is the only sector reimbursing at the moment, and our belief is that the burden should be spread. I think tech companies should be putting their hands in their pockets, particularly as they profit from it," said David Postings, chief executive of UK Finance. In a letter last week to Prime Minister Rishi Sunak, a group of major U.K. banks said technology companies must contribute to the cost of the online fraud "pandemic" that is undermining international investor confidence in the U.K. economy, according to a report on Sky News. It makes sense for social media companies and others to be held accountable for scams. Users of Facebook, Instagram, Twitter and other platforms have fallen prey to romance scams, cryptocurrency investment scams and more. But before the government starts looking for ways to ask big tech to contribute, let's not forget about the victims. It might be difficult to prove which platform is liable and for how much. Social media conversations are often fluid and move from one platform to another. Tracing back the conversation and then establishing the responsibility across banks and tech companies could take time. 



Quote for the day:

"Leadership is a two-way street, loyalty up and loyalty down." -- Grace Murray Hopper