Showing posts with label affective computing. Show all posts
Showing posts with label affective computing. Show all posts

Daily Tech Digest - March 04, 2024

Evolving Landscape of ISO Standards for GenAI

The burgeoning field of Generative AI (GenAI) presents immense potential for innovation and societal benefit. However, navigating this landscape responsibly requires addressing potential concerns regarding its development and application. Recognizing this need, the International Organization for Standardization (ISO) has embarked on the crucial task of establishing a comprehensive set of standards. ... A shared understanding of fundamental terminology is vital in any field. ISO/IEC 22989 serves as the cornerstone by establishing a common language within the AI community. This foundational standard precisely defines key terms like “artificial intelligence,” “machine learning,” and “deep learning,” ensuring clear communication and fostering collaboration and knowledge sharing among stakeholders. ... Similar to the need for blueprints in construction, ISO/IEC 23053 provides a robust framework for AI development. This standard outlines a generic structure for AI systems based on machine learning (ML) technology. This framework serves as a guide for developers, enabling them to adopt a systematic approach to designing and implementing GenAI solutions. 


Your Face For Sale: Anyone Can Legally Gather & Market Your Facial Data

We need a range of regulations on the collection and modification of facial information. We also need a stricter status of facial information itself. Thankfully, some developments in this area are looking promising. Experts at the University of Technology Sydney have proposed a comprehensive legal framework for regulating the use of facial recognition technology under Australian law. It contains proposals for regulating the first stage of non-consensual activity: the collection of personal information. That may help in the development of new laws. Regarding photo modification using AI, we’ll have to wait for announcements from the newly established government AI expert group working to develop “safe and responsible AI practices”. There are no specific discussions about a higher level of protection for our facial information in general. However, the government’s recent response to the Attorney-General’s Privacy Act review has some promising provisions. The government has agreed further consideration should be given to enhanced risk assessment requirements in the context of facial recognition technology and other uses of biometric information. 


Affective Computing: Scientists Connect Human Emotions With AI

Affective computing is a multidisciplinary field integrating computer science, engineering, psychology, neuroscience, and other related disciplines. A new and comprehensive review on affective computing was recently published in the journal Intelligent Computing. It outlines recent advancements, challenges, and future trends. Affective computing enables machines to perceive, recognize, understand, and respond to human emotions. It has various applications across different sectors, such as education, healthcare, business services and the integration of science and art. Emotional intelligence plays a significant role in human-machine interactions, and affective computing has the potential to significantly enhance these interactions. ... Affective computing, a field that combines technology with the nuanced understanding of human emotions, is experiencing surges in innovation and related ethical considerations. Innovations identified in the review include emotion-generation techniques that enhance the naturalness of human-computer interactions by increasing the realism of the facial expressions and body movements of avatars and robots. 


The open source problem

Over the years, I’ve trended toward permissive, Apache-style licensing, asserting that it’s better for community development. But is that true? It’s hard to argue against the broad community that develops Linux, for example, which is governed by the GPL. Because freedom is baked into the software, it’s harder (though not impossible) to fracture that community by forking the project. To me, this feels critical, and it’s one reason I’m revisiting the importance of software freedom (GPL, copyleft), and not merely developer/user freedom (Apache). If nothing else, as tedious as the internecine bickering was in the early debates between free software and open source (GPL versus Apache), that tension was good for software, generally. It gave project maintainers a choice in a way they really don’t have today because copyleft options disappeared when cloud came along and never recovered. Even corporations, those “evil overlords” as some believe, tended to use free and open source licenses in the pre-cloud world because they were useful. Today companies invent new licenses because the Free Software Foundation and OSI have been living in the past while software charged into the future. Individual and corporate developers lost choice along the way.


Researchers create AI worms that can spread from one system to another

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers has created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research. ... To create the generative AI worm, the researchers turned to a so-called “adversarial self-replicating prompt.” This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies. This is broadly similar to traditional SQL injection and buffer overflow attacks, the researchers say. To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA. They then found two ways to exploit the system—by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.


Do You Overthink? How to Avoid Analysis Paralysis in Decision Making

Welcome to the world of analysis paralysis. This phenomenon occurs when an influx of information and options leads to overthinking, creating a deadlock in decision-making. Decision makers, driven by the fear of making the wrong choice or seeking the perfect solution, may find themselves caught in a loop of analysis, reevaluation, and hesitation, consequently losing sight of the overall goal. ... Analysis paralysis impacts decision making by stifling risk taking, preventing open dialogue, and constraining innovation—all of which are essential elements for successful technology development. It often leads to mental exhaustion, reduced concentration, and increased stress from endlessly evaluating information, also known as decision fatigue. The implications of analysis paralysis include missed opportunities due to ongoing hesitation and innovative potential being restricted by cautious decision making. ... In the technology sector, the consequences of poor decisions can be far-reaching, potentially unraveling extensive work and achievements. Fear of this happening is heightened due to the sector’s competitive nature. Teams worry that a single misstep could have a cascading negative impact.


30 years of the CISO role – how things have changed since Steve Katz

Katz had no idea what the CISO job was when he accepted it in 1995. Neither did Citicorp. “They said you’ve got a blank cheque, build something great — whatever the heck it is,” Katz recounted during the 2021 podcast. “The CEO said, ‘The board has no idea, just go do something.’” Citicorp gave Katz just two directives after hiring him: “Build the best cybersecurity department in the world” and “go out and spend time with our top international banking customers to limit the damage.” ... today’s CISO must be able to communicate cyber threats in terms that line of business can understand almost instantly. “It’s the ability to articulate risk in a way that is related to the business processes in the organization,” says Fitzgerald. “You need to be able to translate what risk means. Does it mean I can’t run business operations? Does it mean we won’t be able to treat patients in our hospital because we had a ransomware attack?” Deaner says CISOs have an obvious role to play in core infosec initiatives such as implementing a business continuity plan or disaster recovery testing. ... “People in CISO circles absolutely talk a lot about liability. We’re all concerned about it,” Deaner acknowledges. “People are taking the changes to those regulations very seriously because they’re there for a reason.”


Vishing, Smishing Thrive in Gap in Enterprise, CSP Security Views

There is a significant gap between enterprises’ high expectations that their communications service provider will provide the security needed to protect them against voice and messaging scams and the level of security those CSPs offer, according to telecom and cybersecurity software maker Enea. Bad actors and state-sponsored threat groups, armed with the latest generative AI tools, are rushing to exploit that gap, a trend that is apparent in the skyrocketing numbers of smishing (text-based phishing) and vishing (voice-based frauds) that are hitting enterprises and the jump in all phishing categories since the November 2022 release of the ChatGPT chatbot by OpenAI, according to a report this week by Enea. ... “Maintaining and enhancing mobile network security is a never-ending challenge for CSPs,” the report’s authors wrote. “Mobile networks are constantly evolving – and continually being threatened by a range of threat actors who may have different objectives, but all of whom can exploit vulnerabilities and execute breaches that impact millions of subscribers and enterprises and can be highly costly to remediate.”


Causal AI: AI Confesses Why It Did What It Did

Traditional AI models are fixed in time and understand nothing. Causal AI is a different animal entirely. “Causal AI is dynamic, whereas comparable tools are static. Causal AI represents how an event impacts the world later. Such a model can be queried to find out how things might work,” says Brent Field at Infosys Consulting. “On the other hand, traditional machine learning models build a static representation of what correlates with what. They tend not to work well when the world changes, something statisticians call nonergodicity,” he says. It’s important to grok why this one point of nonergodicity is such a crucial difference to almost everything we do. “Nonergodicity is everywhere. It’s this one reason why money managers generally underperform the S&P 500 index funds. It’s why election polls are often off by many percentage points. ... Without knowing the cause of an event or potential outcome, the knowledge we extract from AI is largely backward facing even when it is forward predicting. Outputs based on historical data and events alone are by nature handicapped and sometimes useless. Causal AI seeks to remedy that.


Leveraging power quality intelligence to drive data center sustainability

The challenge is that some data centers lack the power monitoring capabilities necessary for achieving heightened efficiency and sustainability. Moreover, there needs to be more continuous power quality monitoring. Many rely on rudimentary measurements, such as voltage, current, and power parameters, gathered by intelligent rack power distribution units (PDUs), which are then transmitted to DCIM, BMS, and other infrastructure management and monitoring systems. Some consider power quality only during initial setup or occasionally revisit it when reconfiguring IT setups. This underscores the critical role of intelligent PDUs in delivering robust power quality monitoring and the imperative for data center and facility managers to steer efforts toward increased efficiency and sustainability. Certain power quality issues can have detrimental effects on the electrical reliability of a data center, leading to costly unplanned downtime and posing challenges in enhancing sustainability. ... These power quality issues can profoundly affect a data center's functionality and dependability. They may result in unforeseen downtime, harm to equipment, data loss or corruption, and reduced network efficiency. 



Quote for the day:

"If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work." -- Thomas J. Watson

Daily Tech Digest - August 06, 2023

California Opens Privacy Probe Into Car Data Collection

Modern vehicles are equipped with a wide range of sensors, cameras, and other technologies that generate vast amounts of data. This data includes information about the vehicle’s location, speed, acceleration, braking, and even driver behavior. Additionally, connected car systems can collect data on music preferences, navigation history, and other personal preferences. Car data is collected by various parties, including automakers, technology companies, and third-party service providers. This data is used for a variety of purposes, such as improving vehicle performance, developing new features, and providing personalized services to consumers. However, concerns have been raised about the potential misuse or unauthorized access to this sensitive information. The investigation by the California Privacy Agency highlights the importance of protecting consumer privacy in the context of car data collection. As vehicles become more connected and autonomous, the amount of data being generated increases exponentially. 


An eventful week in the world of Arm and RISC-V

What’s most intriguing though with all of these coincidental events though is the NXP Semiconductors’ announcement. Almost all the initial investor companies announced in the new, unnamed organization, are also Arm licensees. The press release states: “Semiconductor industry players Robert Bosch GmbH, Infineon Technologies AG, Nordic Semiconductor, NXP Semiconductors, and Qualcomm Technologies, Inc., have come together to jointly invest in a company aimed at advancing the adoption of RISC-V globally by enabling next-generation hardware development.” So, was this strategically timed to coincide with Arm’s annual meet? What’s also intriguing is that the announcement says a new company has been formed but the company isn’t named. Maybe the disclaimer is the added statement that “the company formation will be subject to regulatory approvals in various jurisdictions.” The new unnamed company formed in Germany also “calls on industry associations, leaders, and governments, to join forces in support of this initiative which will help increase the resilience of the broader semiconductor ecosystem.”


How Agile Management Disrupts the Status Quo

As a relatively newer project management methodology, you might wonder how agile differs from the typical or traditional project or team management approach an organization might use—and how it disrupts those traditional approaches. Agile principles are designed to allow for more seamless collaboration, feedback, and flexibility to ensure faster and more thorough success in bringing high-quality products to market. Agile methodology and coaching should focus on bringing together stakeholders, developers, programmers, and end-users to support the underlying principles. This management methodology encourages and facilitates ongoing conversations and regular communication as a primary means of measuring progress with incremental development. However, “incremental” movement doesn’t necessarily translate to slowing down the process. In fact, team member input—and, importantly, user input—ultimately allows for a more effective, functional, and satisfying final product.


A Journey Through Software Development Paradigms

In the quest for seamless collaboration and integration between development and operations, we encounter DevOps, a paradigm that bridges the gap between siloed teams and fosters a culture of continuous integration, delivery, and learning. We explore the triumphs and challenges faced by organizations adopting DevOps, witnessing its potential to accelerate software delivery, improve quality, and enhance customer experiences. Beyond the familiar shores of Agile and DevOps, our journey ventures into the uncharted territories of emerging paradigms, each holding the promise of further transformation. Lean Software Development, Continuous Delivery, and Site Reliability Engineering (SRE) await our exploration, revealing new insights and practices that continue to shape the future of software development. As we reach the culmination of our voyage, we stand in awe of the pioneers and visionaries who have paved the way for progress, embracing adaptation and innovation in the pursuit of excellence. 


The Rise of Emotionally Aware Technology: A Deep Dive into Global Affective Computing

One of the key drivers behind the rise of affective computing is the increasing demand for personalized user experiences. Today’s consumers expect their devices to understand their needs and preferences and to respond accordingly. Emotionally aware technology can meet these expectations by adapting its responses based on the user’s emotional state. For example, a virtual assistant that can detect frustration in a user’s voice could offer to simplify its instructions or provide additional support. Another factor contributing to the growth of affective computing is the advancement in machine learning and AI technologies. These technologies enable computers to learn from data and improve their performance over time, making it possible for them to recognize and interpret complex human emotions. For instance, facial recognition software can now analyze subtle facial expressions to determine a person’s mood, while natural language processing can interpret the emotional tone in written text.


Digital twins: The key to smart product development

In advanced industries, survey data indicate that almost 75 percent of companies have already adopted digital-twin technologies that have achieved at least medium levels of complexity. There is significant variance between sectors, however. Players in the automotive—and aerospace and defense—industries appear to be more advanced in their use of digital twins today, while logistics, infrastructure, and energy players are more likely to be developing their first digital-twin concepts. One major aerospace company is developing a machine-learning-based geometry optimization system that can simulate thousands of different configurations at high speed to identify weight savings, aerodynamic improvements, and other performance benefits. A European software company is building a multiphysics model of the human heart to support drug and medical-device development. In the United States, an automotive company is building a system that can model all the software and hardware configurations it offers. The system will be used to simulate the effect of design improvements before they are delivered to customers as over-the-air updates. 


Four technology disruptions organizations must watch

Digital humans are becoming more and more like real people. They are readily available and have the ability to interact over a screen to handle a service-based issue or provide customer service instantly. As digital human software is integrated with natural language processing and robotic process automation tools, digital humans will become more of a presence in workflows of more and more processes. Consulting leaders should focus, both singly and in tandem, with leaders of other parts of an organization, on crafting approaches their clients can use to leverage a digital human workforce. Service delivery leaders — particularly within business process outsourcing providers — should be developing a strategy to deploy digital humans within their service delivery functions. ... A decentralized autonomous organization (DAO) is a digital entity, running on a blockchain (which provides a secure digital ledger for communication tracking), that can engage in business interactions with other DAOs, digital and human agents, as well as corporations, without conventional human management.


Bitcoin Beyond the Currency – the Disruption of Industries

The Bitcoin economy has the potential to become the biggest economy in the world; bigger than the United States or China. Bitcoin is a solution for everyone in the world who lives in fear of inflation risk, currency risk, or regime risk. A global, decentralized, trustless settlement layer and means of exchange with no state backing or intervention. For that to happen, BTC has to be more than a store value, it has to be a currency. We have to stop thinking about it in terms of market capitalization and start thinking about it in terms of a gross decentralized product, the “GDP” of the Bitcoin economy. One doesn’t talk about the market capitalization of the dollar, we shouldn’t think of Bitcoin in those terms either. Bitcoin is continuing to become increasingly vital as legacy institutions fall behind the strides being made in the technology sector. These breakthroughs are significantly disrupting incumbent industries ranging from those commonly considered such as banking and finance, to more unique industries such as insurance and energy.


Mitigating AI Risks: Tips for Tech Firms in a Rapidly Changing Landscape

Keep in mind: despite their capabilities, large language models can’t tell between what’s real and what’s not. And when asked to verify if something is true, they “frequently invent dates, facts, and figures.” While this stresses the importance of fact-checking on the end-user’s part, you could still face a lawsuit for defamation if any misleading information is published or shared with the public. In fact, ChatGPT-creator OpenAI is already being sued for libel after the system made false accusations against a radio host in the United States, claiming that he had embezzled funds from a non-profit organization. This is the first case of this nature against OpenAI, which could test the legal viability of any future AI-related defamation lawsuits. However, some legal experts believe the case may be challenging to maintain since there were no actual damages and OpenAI wasn’t notified about the claims or given the opportunity to remove them. Beyond defamation, tech firms that deploy large language models in user support systems can also face general liability risks relating to physical harm.


Data Democratization’s Impact on Users and Governance

A key result of increased user involvement in the nuts and bolts of data is the increased importance of data literacy throughout the organization, Stodder added. “It’s essential for organizations to understand what their current capabilities are and to make a plan to address any stumbling block they’re having.” Training tailored to the full range of user personas, from advanced users to more basic data consumers, will be critical to any data democratization effort. ... Another critical aspect of a democratization effort is an effective governance program. “Organizations can easily expand their data programs faster than they expand their governance programs,” Stodder explained, “which, given the existing strain placed on governance by regulations and the complexity of the data landscape, can only compound the problems.” Some of these governance issues can also be exacerbated by the distributed nature of a democratized landscape. “Many organizations are trying to consolidate to a kind of hub-and-spoke model,” Stodder said, “which has been effective for many of them. 



Quote for the day:

“When something is important enough, you do it even if the odds are not in your favor.” --
Elon Musk