Daily Tech Digest - September 05, 2024

What Does the Car of the Future Look Like?

Enabled by IoT, vehicles stay in sync with their environments. The ConnectedDrive feature, for example, enables predictive maintenance by using IoT sensors to monitor vehicle health and performance in real time and notify drivers about upcoming maintenance needs. IoT also paves the way for vehicle-to-everything, or V2X, communication, which enables BMW cars to interact with traffic lights, road signs and other vehicles. But a smart car is more than just internet-connected. ... The next leap in sensor technology is quantum sensing. Image generation systems based on infrared, ultrasound and radar are already in use. But with multisensory systems, BMW vehicles will not only be able to detect potential hazards more accurately but also predict and prevent damage - a capability crucial for automated and autonomous driving systems. These sensors will allow vehicles to "feel" their surroundings, enabling more refined surface control and the ability to perform complex tasks, such as the automated assembly of intricate components. Predictive maintenance, powered by multisensory input, will serve as an early warning system in production, reducing downtime.


NIST Cybersecurity Framework (CSF) and CTEM – Better Together

CSF's core functions align well with the CTEM approach, which involves identifying and prioritizing threats, assessing the organization's vulnerability to those threats, and continuously monitoring for signs of compromise. Adopting CTEM empowers cybersecurity leaders to significantly mature their organization's NIST CSF compliance. Prior to CTEM, periodic vulnerability assessments and penetration testing to find and fix vulnerabilities were considered the gold standard for threat exposure management. The problem was, of course, that these methods only offered a snapshot of security posture – one that was often outdated before it was even analyzed. CTEM has come to change all this. The program delineates how to achieve continuous insights into the organizational attack surface, proactively identifying and mitigating vulnerabilities and exposures before attackers exploit them. To make this happen, CTEM programs integrate advanced tech like exposure assessment, automated security validation, attack surface management, and risk prioritization.


Leveling Up to Responsible AI Through Simulations

This simulation highlighted the challenges and opportunities involved in embedding responsible AI practices within Agile development environments. The lessons learned from this exercise are clear: expertise, while essential, must be balanced with cross-disciplinary collaboration; incentives need to be aligned with ethical outcomes; and effective communication and documentation are crucial for ensuring accountability. Moving forward, organizations must prioritize the development of frameworks and cultures that support responsible AI. This includes creating opportunities for ongoing education and reflection, fostering environments where diverse perspectives are valued, and ensuring that all stakeholders—from engineers to policymakers—are equipped and incentivized to navigate the complexities of responsible Agile AI development. Simulations like the one we conducted are a valuable tool in this effort. By providing a realistic, immersive experience, they help professionals from diverse backgrounds understand the challenges of responsible AI development and prepare them to meet these challenges in their own work. As AI continues to evolve and become increasingly integrated into our lives, the need for responsible development practices will only grow.


What software supply chain security really means

Upon reflection, the “supply chain” aspect of software supply chain security suggests the crucial ingredient of an improved definition. Software producers, like manufacturers, have a supply chain. And software producers, like manufacturers, require inputs and then perform a manufacturing process to build a finished product. In other words, a software producer uses components, developed by third parties and themselves, and technologies to write, build, and distribute software. A vulnerability or compromise of this chain, whether done via malicious code or via the exploitation of an unintentional vulnerability, is what defines software supply chain security. I should mention that a similar, rival data set maintained by the Atlantic Council uses this broader definition. I admit to still having one general reservation about this definition: It can feel like software supply chain security subsumes all of software security, especially the sub-discipline often called application security. When a developer writes a buffer overflow in the open source software library your application depends upon, is that application security? Yep! Is that also software supply chain security?


Data privacy and security in AI-driven testing

As the technology has become more accepted and widespread, the focus has shifted from disbelief in its capabilities to a deep concern for how it handles sensitive data. At Typemock, we’ve adapted to this shift by ensuring that our AI-driven tools not only deliver powerful testing capabilities but also prioritize data security at every level. ... While concerns about IP leakage and data permanence are significant today, there is a growing shift in how people perceive data sharing. Just as people now share everything online, often too loosely in my opinion, there is a gradual acceptance of data sharing in AI-driven contexts, provided it is done securely and transparently.
Greater Awareness and Education: In the future, as people become more educated about the risks and benefits of AI, the fear surrounding data privacy may diminish. However, this will also require continued advancements in AI security measures to maintain trust.
Innovative Security Solutions: The evolution of AI technology will likely bring new security solutions that can better address concerns about data permanence and IP leakage. These solutions will help balance the benefits of AI-driven testing with the need for robust data protection.


QA's Dead: Where Do We Go From Here?

Developers are now the first line of quality control. This is possible through two initiatives. First, iterative development. Agile methodologies mean teams now work in short sprints, delivering functional software more frequently. This allows for continuous testing and feedback, catching issues earlier in the process. It also means that quality is no longer a final checkpoint but an ongoing consideration throughout the development cycle. Second, tooling. Automated testing frameworks, CI/CD pipelines, and code quality tools have allowed developers to take on more quality control responsibilities without risking burnout. These tools allow for instant feedback on code quality, automated testing on every commit, and integration of quality checks into the development workflow. ... The first opportunity is down the stack, moving into more technical roles. QA professionals can leverage their quality-focused mindset to become automation specialists or DevOps engineers. Their expertise in thorough testing can be crucial in developing robust, reliable automated test suites. The concept that "flaky tests are worse than no tests" becomes even more critical when the tests are all that stop an organization from shipping low-quality code.
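The point that "flaky tests are worse than no tests" can be made concrete. A minimal sketch, in Python, of how a pipeline might flag a nondeterministic test by rerunning it and comparing outcomes; the function name is illustrative, and a real suite would use a plugin such as pytest-rerunfailures rather than hand-rolled logic:

```python
# Hypothetical flakiness check: run a test function several times and
# flag it as flaky when the outcomes disagree across runs.

def is_flaky(test_fn, runs=5):
    outcomes = set()
    for _ in range(runs):
        try:
            test_fn()               # a test "passes" if it raises nothing
            outcomes.add("pass")
        except AssertionError:
            outcomes.add("fail")
    return len(outcomes) > 1        # mixed pass/fail results -> flaky
```

A consistently failing test is at least honest; it is the test that passes on retry that quietly lets low-quality code ship.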


Serverless Is Trending Again in Modern Application Development

A better definition has emerged as serverless becomes a path to developer productivity. The term "serverless" was always a misnomer and, even among end users and vendors, tended to mean different things depending on product and use case. Just as the cloud is someone else's computer, serverless is still someone else's server. Today, things are much clearer. A serverless application is a software component that runs inside of an environment that manages the underlying complexity of deployment, runtimes, protocols, and process isolation so that developers can focus on their code. Enterprise success stories delivered proven, repeatable use case solutions. The initial hype around serverless centered around fast development cycles and back-end use cases where serverless functions acted as the glue between disparate cloud services. ... Since then, we've seen many more enterprise customers taking advantage of serverless. An expanded ecosystem of ancillary services drives emerging use cases. The core use case of serverless remains building lightweight, short-running ephemeral functions.
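The "focus on their code" definition is easiest to see in the shape of a serverless function itself. A minimal, provider-neutral sketch in Python — the event shape and handler signature here are generic illustrations, not any specific platform's API:

```python
import json

# Hypothetical serverless-style handler: a stateless function the platform
# invokes once per event. Deployment, runtime, and scaling live outside
# this file entirely -- the code is only the business logic.

def handler(event, context=None):
    """Act as glue between services: take an event, transform it, respond."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }
```

The ephemeral, short-running nature of the function is the point: there is no server lifecycle in the code for the developer to manage.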


New AI standards group wants to make data scraping opt-in

The Dataset Providers Alliance, a trade group formed this summer, wants to make the AI industry more standardized and fair. To that end, it has just released a position paper outlining its stances on major AI-related issues. The alliance is made up of seven AI licensing companies, including music copyright-management firm Rightsify, Japanese stock-photo marketplace Pixta, and generative-AI copyright-licensing startup Calliope Networks. ... The DPA advocates for an opt-in system, meaning that data can be used only after consent is explicitly given by creators and rights holders. This represents a significant departure from the way most major AI companies operate. Some have developed their own opt-out systems, which put the burden on data owners to pull their work on a case-by-case basis. Others offer no opt-outs whatsoever. The DPA, which expects members to adhere to its opt-in rule, sees that route as the far more ethical one. “Artists and creators should be on board,” says Alex Bestall, CEO of Rightsify and the music-data-licensing company Global Copyright Exchange, who spearheaded the effort. Bestall sees opt-in as a pragmatic approach as well as a moral one: “Selling publicly available datasets is one way to get sued and have no credibility.”


AI potential outweighs deepfake risks only with effective governance: UN

“AI must serve humanity equitably and safely,” Guterres says. “Left unchecked, the dangers posed by artificial intelligence could have serious implications for democracy, peace and stability. Yet, AI has the potential to promote and enhance full and active public participation, equality, security and human development. To seize these opportunities, it is critical to ensure effective governance of AI at all levels, including internationally.” ... The flurry of laws also concerns worker protections – which in Hollywood means protecting actors and voice actors from being replaced with deepfake AI clones. Per AP, the measure mirrors language in the deal SAG-AFTRA made with movie studios last December. The state is also to consider imposing penalties on those who clone the dead without obtaining consent from the deceased’s estate – a bizarre but very real concern, as late celebrities begin popping up in studio films. ... If you find yourself suffering from deepfake despair, Siddharth Gandhi is here to remind you that there are remedies. Writing in ET Edge, the COO of 1Kosmos for Asia Pacific says strong security is possible by pairing liveness detection with device-based algorithmic systems that can detect injection attacks in real-time.


Red Hat delivers AI-optimized Linux platform

RHEL AI helps enterprises get away from the “one model to rule them all” approach to generative AI, which is not only expensive but can lock enterprises into a single vendor. There are now open-source large language models available that rival those available from the commercial vendors in performance. “And there are smaller models,” Katarki adds, “which are truly aligned to your specific use cases and your data. They offer much better ROI and much better overall costs compared to large language models in general.” And not only the models themselves but the tools needed to train them are also available from the open-source community. “The open-source ecosystem is really fueling generative AI, just like Linux and open source powered the cloud revolution,” Katarki says. In addition to allowing enterprises to run generative AI on their own hardware, RHEL AI also supports a “bring your own subscription” for public cloud users. At launch, RHEL AI supports AWS and the IBM cloud. “We’ll be following that with Azure and GCP in the fourth quarter,” Katarki says. RHEL AI also has guardrails and agentic AI on its roadmap. “Guardrails and safety are among the value-adds of InstructLab and RHEL AI,” he says.



Quote for the day:

"Without continual growth and progress, such words as improvement, achievement, and success have no meaning." -- Benjamin Franklin

Daily Tech Digest - September 04, 2024

What is HTTP/3? The next-generation web protocol

HTTPS will still be used as a mechanism for establishing secure connections, but traffic will be encrypted at the HTTP/3 level. Another way to say it is that TLS will be integrated into the network protocol instead of working alongside it. So, encryption will be moved into the transport layer and out of the app layer. This means more security by default—even the headers in HTTP/3 are encrypted—but there is a corresponding cost in CPU load. Overall, the idea is that communication will be faster due to improvements in how encryption is negotiated, and it will be simpler because it will be built-in at a lower level, avoiding the problems that arise from a diversity of implementations. ... In TCP, that continuity isn’t possible because the protocol only understands the IP address and port number. If either of those changes—as when you walk from one network to another while holding a mobile device—an entirely new connection must be established. This reconnection leads to a predictable performance degradation. The QUIC protocol introduces connection IDs or CIDs. For security, these are actually CID sets negotiated by the server and client. 
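The connection-migration idea is simple to sketch: route packets by connection ID instead of by (IP, port), so a client that changes networks keeps its session. The following Python sketch is purely illustrative — class and method names are hypothetical, and real QUIC negotiates sets of CIDs with cryptographic protections (see RFC 9000) rather than using a bare dictionary:

```python
# Hypothetical QUIC-like server: session state keyed by connection ID,
# so a change of client address does not tear down the connection.

class QuicLikeServer:
    def __init__(self):
        self.sessions = {}  # connection ID -> session state

    def open_connection(self, cid, client_addr):
        self.sessions[cid] = {"addr": client_addr, "packets": 0}

    def handle_packet(self, cid, client_addr):
        session = self.sessions[cid]        # lookup by CID, not by address
        if session["addr"] != client_addr:  # client walked onto a new network
            session["addr"] = client_addr   # connection survives the move
        session["packets"] += 1
        return session
```

Under TCP, the address change in the middle would have forced a brand-new connection; here the CID keeps the session continuous.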


6 things hackers know that they don’t want security pros to know that they know

It’s not a coincidence that many attacks happen at the most challenging of times. Hackers really do increase their attacks on weekends and holidays when security teams are lean. And they’re more likely to strike right before lunchtime and end-of-day, when workers are rushing and consequently less attentive to red flags indicating a phishing attack or fraudulent activity. “Hackers typically deploy their attacks during those times because they’re less likely to be noticed,” says Melissa DeOrio, global threat intelligence lead at S-RM, a global intelligence and cybersecurity consultancy. ... Threat actors actively engage in open-source intelligence (OSINT) gathering, looking for information they can use to devise attacks, Carruthers says. It’s not surprising that hackers look for news about transformative events such as big layoffs, mergers and the like, she says. But CISOs, their teams and other executives may be surprised to learn that hackers also look for news about seemingly innocuous events such as technology implementations, new partnerships, hiring sprees, and executive schedules that could reveal when they’re out of the office.


Take the ‘Shift Left’ Approach a Step Further by ‘Starting Left’

This makes it vital to guarantee code quality and security from the start so that nothing slips through the cracks. Shift left accounts for this. It minimizes risks of bugs and vulnerabilities by introducing code testing and analysis earlier in the SDLC, catching problems before they mount and become trickier to solve or even find. Advancing testing activities earlier puts DevOps teams in a position to deliver superior-quality software to customers with greater frequency. As a practice, “shift left” requires a lot more vigilance in today’s security landscape. But most development teams don’t have the mental (or physical) bandwidth to do it properly — even though it should be an intrinsic part of code development strategy. In fact, a recent Linux Foundation study revealed that almost one-third of developers aren’t familiar with secure software development practices. “Shifting left” — performing analysis and code reviews earlier in the development process — is a popular mindset for creating better software. What the mindset should be, though, is to “start left,” not just impose the burden later on in the SDLC for developers. ... This mindset of “start left” focuses not only on an approach that values testing early and often, but also on using the best tools to do so.


ONCD Unveils BGP Security Road Map Amid Rising Threats

The guidance comes amid an intensified threat landscape for BGP, which serves as the backbone of global internet traffic routing. BGP is a foundational yet vulnerable protocol, developed at a time when many of today's cybersecurity risks did not exist. Coker said the ONCD is committed to covering at least 60% of the federal government's IP space by registration service agreements "by the end of this calendar year." His office recently led an effort to develop a federal RSA template that federal agencies can use to facilitate their adoption of Resource Public Key Infrastructure, which can be used to mitigate BGP vulnerabilities. ... The ONCD report underscores how BGP "does not provide adequate security and resilience features" and lacks critical security capabilities, including the ability to validate the authority of remote networks to originate route announcements and to ensure the authenticity and integrity of routing information. The guidance tasks network operators with developing and periodically updating cybersecurity risk management plans that explicitly address internet routing security and resilience. It also instructs operators to identify all information systems and services internal to the organization that require internet access and assess the criticality of maintaining those routes for each address.
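The RPKI mechanism the road map promotes boils down to route origin validation: checking an announced (prefix, origin AS) pair against signed ROAs. A simplified sketch in Python — the ROA entries below are hypothetical examples, and real validation (per RFC 6811) runs inside routers against cryptographically validated data, not a plain list:

```python
import ipaddress

# Hypothetical ROA records: (authorized prefix, authorized origin AS, maxLength).
# An announcement is "valid" if a covering ROA matches origin and length,
# "invalid" if a covering ROA exists but the announcement mismatches it,
# and "unknown" (not-found) if no ROA covers the prefix at all.

def validate(prefix, origin_as, roas):
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, roa_as, max_len in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.subnet_of(roa_net):
            covered = True
            if origin_as == roa_as and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "unknown"
```

This is the capability BGP itself lacks: without something like ROAs, a router has no way to verify that a remote network is authorized to originate the routes it announces.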


Efficient DevSecOps Workflows With a Little Help From AI

When it comes to software development, AI offers lots of possibilities to enhance workflows at every stage—from splitting teams into specialized roles such as development, operations, and security to facilitating typical steps like planning, managing, coding, testing, documentation, and review. AI-powered code suggestions and generation capabilities can automate tasks like autocompletion and identification of missing dependencies, making coding more efficient. Additionally, AI can provide code explanations, summarizing algorithms, suggesting performance improvements, and refactoring long code into object-oriented patterns or different languages. ... Instead of manually sifting through job logs, AI can analyze them and provide actionable insights, even suggesting fixes. By refining prompts and engaging in conversations with the AI, developers can quickly diagnose and resolve issues, even receiving tips for optimization. Security is crucial, so sensitive data like passwords and credentials must be filtered before analysis. A well-crafted prompt can instruct the AI to explain the root cause in a way any software engineer can understand, accelerating troubleshooting. This approach can significantly improve developer efficiency.
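The "filter sensitive data before analysis" step can be sketched concretely. A minimal Python example of scrubbing obvious secrets from a CI job log before handing it to an AI assistant; the regex patterns here are illustrative assumptions, and a production pipeline should rely on a dedicated secret scanner rather than a short pattern list:

```python
import re

# Hypothetical redaction pass over job-log text. Patterns catch common
# key=value credential shapes and bearer tokens; anything matched is
# masked before the log leaves the pipeline.

PATTERNS = [
    (re.compile(r"(password|passwd|token|api[_-]?key)\s*[=:]\s*\S+", re.I),
     r"\1=[REDACTED]"),
    (re.compile(r"Bearer\s+[A-Za-z0-9._-]+"), "Bearer [REDACTED]"),
]

def scrub(log_text):
    for pattern, replacement in PATTERNS:
        log_text = pattern.sub(replacement, log_text)
    return log_text
```

Only after a pass like this should the log be embedded in a prompt asking the AI to explain the root cause of a failed job.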


PricewaterhouseCoopers’ new CAIO – workers need to know their role with AI

“AI is becoming a natural part of everything we make and do. We’re moving past the AI exploration cycle, where managing AI is no longer just about tech, it is about helping companies solve big, important and meaningful problems that also drive a lot of economic value. “But the only way we can get there is by bringing AI into an organization’s business strategy, capability systems, products and services, ways of working and through your people. AI is more than just a tool — it can be viewed as a member of the team, embedding into the end-to-end value chain. The more AI becomes naturally embedded and intrinsic to an organization, the more it will help both the workforce and business be more productive and deliver better value. “In addition, we will see new products and services that are fully AI-powered come into the market — and those are going to be key drivers of revenue and growth.” ... You need to consider the bigger picture, understanding how AI is becoming integrated in all aspects of your organization. That means having your RAI leader working closely with your company’s CAIO (or equivalent) to understand changes in your operating model, business processes, products and services.


What Is Active Metadata and Why Does It Matter?

Active metadata’s ability to update automatically whenever the data it describes changes now extends beyond the data profile itself to enhance the management of data access, classification, and quality. Passive metadata’s static nature limits its use to data discovery, but the dynamic nature of active metadata delivers real-time insights into the data’s lineage to help automate data governance:
Get a 360-degree view of data - Active metadata’s ability to auto-update ensures that metadata delivers complete and up-to-date descriptions of the data’s lineage, context, and quality. Companies can tell at a glance whether the data is being used effectively, appropriately, and in compliance with applicable regulations.
Monitor data quality in real time - Automatic metadata updates improve data quality management by providing up-to-the-minute metrics on data completeness, accuracy, and consistency. This allows organizations to identify and respond to potential data problems before they affect the business.
Patch potential governance holes - Active metadata allows data governance rules to be enforced automatically to safeguard access to the data, ensure it’s appropriately classified, and confirm it meets all data retention requirements.
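The real-time quality monitoring idea can be sketched in a few lines: recompute simple metadata metrics whenever the underlying dataset changes, and gate on a threshold. The field names and the 95% threshold below are hypothetical illustrations, not a standard:

```python
from datetime import datetime, timezone

# Hypothetical active-metadata refresh: called on every dataset change,
# it recomputes a completeness metric and timestamps the metadata record.

def refresh_metadata(rows, required_fields, completeness_threshold=0.95):
    total = len(rows) * len(required_fields)
    filled = sum(1 for row in rows
                 for f in required_fields
                 if row.get(f) not in (None, ""))
    completeness = filled / total if total else 1.0
    return {
        "updated_at": datetime.now(timezone.utc).isoformat(),
        "completeness": completeness,
        "passes_quality_gate": completeness >= completeness_threshold,
    }
```

A governance layer can then act on `passes_quality_gate` automatically — blocking a pipeline or alerting an owner — before bad data reaches the business.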


How to Get IT and Security Teams to Work Together Effectively

Successful collaboration requires a sense of shared mission, Preuss says. Transparency is crucial. "Leverage technology and automation to effectively share information and challenges across both teams," she advises. Building and practicing trust and communication in an environment that's outside the norm is also essential. One way to do so is by conducting joint business resilience drills. "Whether a cyber war game or an environmental crisis [exercise], resilience drills are one way to test the collaboration between teams before an event occurs." ... When it comes to cross-team collaboration, Scott says it's important for members to understand their communication style as well as the communication styles of the people they work with. "At Immuta, we do this through a DiSC assessment, which each employee is invited to complete upon joining the company." To build an overall sense of cooperation and teamwork, Jeff Orr, director of research, digital technology at technology research and advisory firm ISG, suggests launching an exercise simulation in which both teams are required to collaborate in order to succeed. 


Protecting national interests: Balancing cybersecurity and operational realities

A significant challenge we face today is safeguarding the information space against misinformation, disinformation, manipulation and deceptive content. Whether this is at the behest of nation-states, or their supporters, it can be immensely destabilising and disruptive. We must find a way to tackle this challenge, but this should not just focus on the responsibilities held by social media platforms, but also on how we can detect targeted misinformation, counter those narratives and block the sources. Technology companies have a key role in taking down content that is obviously malicious, but we need the processes to respond in hours, rather than days and weeks. More generally, infrastructure used to launch attacks can be spun up more quickly than ever and attacks manifest at speed. This requires the government to work more closely with major technology and telecommunication providers so we can block and counter these threats – and that demands information sharing mechanisms and legal frameworks which enable this. Investigating and countering modern transnational cybercrime demands very different approaches, and of course AI will undoubtedly play a big part in this, but sadly both in attack and defence.


How leading CIOs cultivate business-centric IT

With digital strategy and technology as the brains behind most business functions and operating models, IT organizations are determined to inject more business-centricity into their employee DNA. IT leaders have been burnishing their business acumen and embracing a non-technical remit for some time. Now, there’s a growing desire to infuse that mentality throughout the greater IT organization, stretching beyond basic business-IT alignment to creating a collaborative force hyper-fixated on channeling innovation to advance enterprise business goals. “IT is no longer the group in the rear with the gear,” says Sabina Ewing, senior vice president of business and technology services and CIO at Abbott Laboratories. ... While those with robust experience and expertise in highly technical areas such as cloud architecture or cybersecurity are still highly coveted, IT organizations like Duke Health, ServiceNow, and others are also seeking a very different type of persona. Zoetis, a leading animal health care company, casts a wider net when seeking tech and digital talent, focusing on those who are collaborative, passionate about making a difference, and adaptable to change. Candidates should also have a strong understanding of technology application, says CIO Keith Sarbaugh.



Quote for the day:

"When someone tells me no, it doesn't mean I can't do it, it simply means I can't do it with them." -- Karen E. Quinones Miller

Daily Tech Digest - September 03, 2024

Cloud application portability remains unrealistic

Enterprises can deploy an application across multiple cloud providers to distribute risk and reduce dependency on a single vendor. This strategy also offers leverage when negotiating terms or migrating services. It may prevent vendor lock-in and provide flexibility to optimize costs by leveraging the most cost-effective services available from different providers. That said, you’d be wrong if you think multicloud is the answer to a lack of portability. You’ll have to attach your application to native features to optimize them for the specific cloud provider. As I’ve said, portability has been derailed, and you don’t have good options. A “multiple providers” approach minimizes the negative impact but does not solve the portability problem. Build applications with portability in mind. This approach involves containerization technologies, such as Docker, and orchestration platforms, such as Kubernetes. Abstracting applications from the underlying infrastructure ensures they are compatible with multiple environments. Additionally, avoiding proprietary services and opting for open source tools can enhance portability and reduce costs associated with reconfigurations or migrations. 
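Beyond containers, the "avoid proprietary services" advice usually means hiding provider-specific APIs behind a small interface of your own. A minimal Python sketch of that pattern — all class and method names are hypothetical, and the in-memory store stands in for a real provider-specific backend (S3, GCS, and so on):

```python
from abc import ABC, abstractmethod

# Hypothetical portability seam: application code depends only on this
# interface, never on a cloud SDK, so swapping providers means writing
# one new adapter rather than rewriting the application.

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key, data): ...

    @abstractmethod
    def get(self, key): ...

class InMemoryStore(ObjectStore):
    """Stand-in adapter; a real one would wrap a provider's storage API."""
    def __init__(self):
        self._data = {}

    def put(self, key, data):
        self._data[key] = data

    def get(self, key):
        return self._data[key]

def save_report(store: ObjectStore, name, content):
    store.put(name, content)  # app logic is provider-agnostic
```

The trade-off noted above still applies: the abstraction forgoes native features that would otherwise optimize the application for one provider.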


Will Data Centers in Orbit Launch a New Phase of Sustainability?

Space offers an appealing solution for many of the problems that plague terrestrial data centers. Space-based data centers could use solar arrays to draw power from the sun, alleviating the burden on electrical grids here on Earth. They would not require water for cooling. They would not take up land, disturb people or wildlife. Additionally, natural disasters that can damage or wipe out data centers on Earth -- earthquakes, wildfires, floods, tsunamis -- are a non-issue in space. ... While the upsides of data centers in space are easy to imagine, what will it take to make them a reality? The Advanced Space Cloud for European Net zero emission and Data sovereignty (ASCEND) study set out to answer questions about space data centers' technical feasibility and their environmental benefits. The study is funded by the European Commission as part of Horizon Europe, a scientific research program. Thales Alenia Space led the study with a consortium of 11 partners, including research organizations and industrial companies from five European countries. Thales Alenia Space announced the results of the 16-month study at the end of June.


Workload Protection in the Cloud: Why It Matters More Than Ever

CWP is a necessity that must not be ignored. As the adoption of cloud technology grows, the scale and complexity of threats also escalate. Here are the reasons why CWP is critical:
Increased threat environment: Cyber threats are becoming more complex and frequent. CWP tools are crafted to detect and counter these changing threats in real time, delivering enhanced protection for cloud workloads exposed across various networks and environments.
Protection against data breaches and compliance: Data breaches can lead to severe financial and reputational harm. CWP tools assist organizations in complying with strict regulations like GDPR, HIPAA, and PCI-DSS by implementing strong security protocols and compliance checks.
Maintenance of operational integrity: It is essential for businesses to maintain the uninterrupted operation of their cloud workloads without being affected by security incidents. CWP tools offer extensive threat detection and automated responses, minimizing disruptions and upholding operational integrity.
Cost implications: Security breaches can incur substantial costs. Investing in CWP tools helps avert these risks by identifying vulnerabilities and threats early, ultimately protecting organizations from potential financial losses due to breaches and service interruptions.


How Human-Informed AI Leads to More Accurate Digital Twins

The value of a DT is directly proportional to its accuracy, which in turn depends on the data available. But data availability remains a challenge — ironically, often in the business use cases that could benefit the most from DTs — and it’s a big reason why DTs are still in their infancy. DTs could help guide the expansion of current products to new market domains, accelerating R&D and innovation by enabling virtual experimentation. But research activities often involve exploring new territory where data is scarce or protected by patents owned by other organizations. For example, while DTs could inform an organization’s understanding of how a new topology may affect heavy construction equipment or how a smart building may behave under unusual weather conditions, there is limited data available about these new domains. ... DTs can add immense value by reducing costs and the time it takes to develop new processes, but data to develop these models is limited given that the work explores new territory. Further, data-sharing across the supply chain is sharply limited due to extreme sensitivity about intellectual property.


Leveraging AI for enhanced crime scene investigation

Importantly, as crimes are committed or solved, the algorithms and the software built on them become more sophisticated. Interestingly, these algorithms use information obtained from various sources without any human intervention, reducing the chances of bias or error. With the increasing use of mobile phones and the internet, information is flooding in the form of photos, videos, audio recordings, emails, letters, newspaper reports, speeches, social media posts, locations, and more. Various AI & ML-based algorithms are used to quickly analyze this data, perform mathematical transformations, draw inferences, and reach conclusions. This makes it possible to predict the likelihood of crimes in a very short time, which is almost impossible otherwise. A smart city-related company in Israel called ‘Cortica’ has developed software that analyzes the information obtained through CCTV. This software utilizes certain AI algorithms to recognize the faces in a crowd, identify crowd behavior and movement, and predict the likelihood and nature of a crime. Interestingly, these intelligent algorithms make it possible to analyze several terabytes of video footage in minimal time and make quite precise inferences.


There are many reasons why companies struggle to exploit generative AI

Some qualitative remarks by executives interviewed revealed more detail on where that lack of preparedness lies. For example, a former vice president of data and intelligence for a media company told Rowan and team that the "biggest scaling challenge" for the company "was really the amount of data that we had access to and the lack of proper data management maturity." The executive continued: "There was no formal data catalog. There was no formal metadata and labeling of data points across the enterprise. We could go only as fast as we could label the data." ... Uncertainty about novel regulations is also causing companies to pause and think, Rowan and team stated in the report: "Organizations were exceedingly uncertain about the regulatory environment that may exist in the future (depending on the countries they operate in)." In response to both concerns, companies are pursuing a variety of strategies, Rowan and team found. These strategies include: "shut off access to specific Generative AI tools for staff"; "put in place guidelines to prevent staff from entering organizational data into public LLMs"; and "build walled gardens in private clouds with safeguards to prevent data leakage into the public cloud."


The role of behavioral biometrics in a world of growing cyberthreats

Behavioral biometrics might be an evolving form of biometric technology, but its foundations are already well established. In retail and ecommerce, for example, the lines blur slightly between the terms ‘behavioral biometrics’ and ‘risk-based authentication’. Behavior in this sense isn’t just how people interact with their device, but the location they’re ordering from and to, or the time zone and time of day they’re looking to make a purchase. The extent of risk rises and falls relative to what is deemed ‘typical behavior’, both in the broader sense and for that individual transaction. ‘Risk’ refers to the degree of confidence in authentication accuracy and will be key to the rise of behavioral biometrics in other industries too, including healthcare and banking, where it is already being deployed to varying extents. It comes down to the use case and whether the risk posed is suitable for passive authentication. In healthcare, for example, passive authentication wouldn’t be sufficient to access patient databases, but once a user is logged in, it could help confirm that the same user is still active or online. ... Beyond security, behavioral biometrics can also enable improved personalization and marketing strategies. 
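The risk-scoring idea described above can be sketched as a function comparing a transaction's context to a user's typical behavior. This is a minimal illustration, not a production model; the `Profile` fields, the features checked, and the weights are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """Typical behavior learned for one user (illustrative fields only)."""
    usual_country: str
    usual_hours: range  # local hours in which the user normally transacts

def risk_score(profile: Profile, country: str, hour: int) -> int:
    """Score a transaction 0-100; higher means further from typical behavior."""
    score = 0
    if country != profile.usual_country:
        score += 60  # unfamiliar location weighs heavily
    if hour not in profile.usual_hours:
        score += 40  # unusual time of day adds moderate risk
    return score

profile = Profile(usual_country="US", usual_hours=range(8, 23))
print(risk_score(profile, "US", 14))  # typical purchase -> 0
print(risk_score(profile, "RU", 3))   # atypical location and hour -> 100
```

A real deployment would learn the profile from transaction history and feed the score into a step-up authentication decision rather than a hard-coded threshold.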


Data center sustainability is no longer optional

A recent empirical investigation conducted by the Borderstep Institute, in collaboration with the EU, revealed that digital technologies already account for approximately five to nine percent of global electricity consumption and carbon emissions, a number expected to increase as the demand for compute power, driven by the rise of generative artificial intelligence (gen AI) and foundation models, continues to grow. ... Databases are a significant contributor to data center workloads. They are critical for storing, managing, and retrieving large volumes of data; they are computationally intensive; and, spread across thousands of database instances, they contribute significantly to the overall energy consumption of data centers. Artificial intelligence database tuning will therefore be central to any sustainability strategy aimed at increasing efficiency. ... Artificial intelligence database tuning offers a revolutionary approach to database management, enabling businesses to achieve high database performance while minimizing their environmental impact. By observing real-time data, AI can identify more effective PostgreSQL configurations that minimize energy usage. 
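As a hedged sketch of what configuration tuning involves, the loop below searches candidate PostgreSQL-style settings against a cost function. In a real tuner the cost would come from benchmarking energy use or latency under a live workload, and the search would use Bayesian optimization or reinforcement learning rather than brute force; here a made-up cost function stands in so the example runs deterministically, and the parameter names and value grid are illustrative:

```python
import itertools

# Candidate PostgreSQL-style settings to explore (values are illustrative).
grid = {
    "shared_buffers_mb": [512, 1024, 2048],
    "work_mem_mb": [4, 16, 64],
}

def measured_cost(cfg):
    """Stand-in for a real measurement of energy or latency under a workload.
    A made-up convex cost keeps the example deterministic and runnable."""
    return ((cfg["shared_buffers_mb"] - 1024) ** 2) / 1e4 + ((cfg["work_mem_mb"] - 16) ** 2) / 10

def best_config(grid):
    """Exhaustive search over the grid; real tuners sample far more cheaply."""
    candidates = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]
    return min(candidates, key=measured_cost)

print(best_config(grid))  # -> {'shared_buffers_mb': 1024, 'work_mem_mb': 16}
```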


Building an Accessible Future in the Private Sector

Just like the public sector must make its services accessible to all groups, so must the private sector. Luckily, several regulations make accessibility a legal requirement for the private sector. The most notable is the Americans with Disabilities Act (ADA), a federal law passed in 1990 to prohibit discrimination against people with disabilities in many areas of public life. Title III of the ADA considers websites "public accommodations" and mandates that people with disabilities have equal access. However, true digital accessibility in the modern age needs to go further to ensure all digital products — websites, kiosks, mobile, and web applications — are equally accessible to people with disabilities. ... Companies leading the charge on accessibility are viewed as socially responsible and inclusive, attributes that matter to this generation of consumers. Organizations that value cultivating relationships with diverse customer groups often experience stronger customer loyalty. Brands like Apple and Microsoft are shining examples and have long been praised for providing inclusive technology and experiences. 


How to ensure cybersecurity strategies align with the company’s risk tolerance

One way for CISOs to align cybersecurity strategies with organizational risk tolerance is strategic involvement across the organization. “By forming risk committees and engaging in business discussions, CISOs can better understand and address the risks associated with new technologies and initiatives, and support the organization’s overall strategy,” Carmichael says. An information security committee is vital to this mission, according to Carl Grifka, MD of SingerLewak LLP, an advisory firm that specializes in risk and cybersecurity. “There needs to be a regular assessment of not just the cybersecurity environment, but also the risk tolerance and risk appetite, which is going to drive the controls that we’re going to put in place,” Grifka tells CSO. The committee operates as a cross-functional team that brings together different members of the business, including the executive, IT, security and maybe even a board representative on a more regular basis. Organizations low on the maturity level probably need to meet every couple of weeks, especially if they’re in a remediation phase and working to reduce gaps in the security posture. 



Quote for the day:

"Those who have succeeded at anything and don’t mention luck are kidding themselves." -- Larry King

Daily Tech Digest - September 02, 2024

AI Demands More Than Just Technical Skills From Developers

Unlike in the past, when developers took instructions from a team lead and executed tasks as individual contributors, now they’re outsourcing problem-solving and code generation to AI tools and models. By partnering with GenAI to solve complex problems, developers who were once individual contributors are now becoming team leads in their own right. This new workflow requires developers to elevate their critical-thinking skills and empathy for end users. No longer can they afford to operate with a superficial understanding of the task at hand. Now, it’s paramount that developers understand the why driving their initiative so that they can lead their AI counterparts to the most desirable outcomes. ... Developers are now co-creating IP. Who owns the IP? Does the prompt engineer? Does the GenAI tool? If developers write code with a certain tool, do they own that code? In an industry where tool sets are moving so quickly, the answer varies based on which tool you’re using and which version; different tools, even within the same vendor, can have different rules. Intellectual property rights are evolving.


Embracing Neurodiversity in IT Workplace to Bridge Talent Gaps

To accommodate neurodiversity effectively, organizations must adopt a multifaceted approach. This includes providing tailored support and resources to neurodiverse employees, such as flexible work arrangements, assistive technologies, and specialized training programs. Additionally, fostering open communication and creating a supportive network of colleagues and mentors can help neurodiverse individuals feel valued and empowered to contribute their unique insights and perspectives. ... The first step, according to Leantime CEO and co-founder Gloria Folaron, is to create a cultural expectation of self-awareness — from leadership to human resources. "The self-awareness can extend across any biases you might have, relationships, or negative experiences or reactions that exist inside. It's a self-checking mechanism," she said. The second benefit is that many neurodivergent individuals have not been well supported in the past; they've been forced to create their own systems to fit into more traditional work environments. By promoting even employee-level self-awareness, they become empowered to start thinking about their own needs.


Ransomware recovery: 8 steps to successfully restore from backup

Use either physical write-once-read-many (WORM) technology or virtual equivalents that allow data to be written but not changed. This does increase the cost of backups since it requires substantially more storage. Some backup technologies only save changed and updated files or use other deduplication technology to keep from having multiple copies of the same thing in the archive. ... In addition to keeping the backup files themselves safe from attackers, companies should also ensure that their data catalogs are safe. “Most of the sophisticated ransomware attacks target the backup catalog and not the actual backup media, the backup tapes or disks, as most people think,” says Amr Ahmed, EY America’s infrastructure and service resiliency leader. This catalog contains all the metadata for the backups, the index, the bar codes of the tapes, the full paths to data content on disks, and so on. “Your backup media will be unusable without the catalog,” Ahmed says. Restoring without one would be extremely hard or impractical. Enterprises need to ensure that they have in place a backup solution that includes protections for the backup catalog, such as an air gap.
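One simple safeguard consistent with the advice above is to keep an independent fingerprint of the catalog's metadata, for instance on air-gapped media, so tampering can be detected before a restore is attempted. A minimal sketch; the catalog fields and paths below are hypothetical:

```python
import hashlib
import json

def catalog_digest(catalog: dict) -> str:
    """Fingerprint a backup catalog's metadata. Canonical JSON (sorted keys)
    keeps the hash stable, so a copy of the digest stored offline or on
    air-gapped media can later reveal tampering with the catalog."""
    canonical = json.dumps(catalog, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

catalog = {
    "tape-0001": {"path": "/backups/2024-09/full.tar", "bytes": 1073741824},
    "tape-0002": {"path": "/backups/2024-09/incr.tar", "bytes": 52428800},
}
baseline = catalog_digest(catalog)  # store this away from the backup system

# Later: recompute and compare against the safely stored baseline.
tampered = {**catalog, "tape-0001": {"path": "/backups/2024-09/full.tar", "bytes": 0}}
print(catalog_digest(catalog) == baseline)    # True  - catalog intact
print(catalog_digest(tampered) == baseline)   # False - metadata was altered
```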


Complying with PCI DSS requirements by 2025

Perhaps one of the most significant changes in terms of preventing e-commerce fraud is the requirement to deploy change-and-tamper-detection mechanisms to alert for unauthorized modifications to the HTTP headers and the contents of payment pages as received by the consumer browser (11.6.1). Most e-commerce-related cardholder data (CHD) theft comes from the abuse of JavaScript used within online stores (otherwise known as web-based skimming). Recent research has shown that most website payment pages have 100 different scripts, some of which come from the merchant itself and some from third parties, and any one of these scripts can potentially be altered to harvest cardholder data. Equally, this could be the payment page of a payment service provider (PSP) which a merchant redirects to, or uses a PSP generated inline frame (iframe), making this an issue that is also relevant to PSPs. The ideal scenario is to reduce this risk by knowing what is in use, what is authorized and has not been altered, which is the principle aim of requirement 6.4.3. This mandates the inventory of scripts, their authorization, evidence that they are necessary and have been validated.
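One concrete way to inventory and validate scripts, in the spirit of requirement 6.4.3, is to record a cryptographic hash per authorized script — the same base64-encoded SHA-384 digest format used by the web's Subresource Integrity (SRI) mechanism, which browsers enforce via the HTML `integrity` attribute. A minimal sketch; the script names and contents are hypothetical:

```python
import base64
import hashlib

def sri_hash(script_body: bytes) -> str:
    """Compute a Subresource Integrity (SRI) value: the base64-encoded
    SHA-384 digest browsers check via the HTML `integrity` attribute."""
    digest = hashlib.sha384(script_body).digest()
    return "sha384-" + base64.b64encode(digest).decode()

# Inventory of authorized payment-page scripts mapped to their known hashes.
authorized = {"checkout.js": sri_hash(b"console.log('checkout v1');")}

def is_authorized(name: str, body: bytes) -> bool:
    """A script that has been altered no longer matches its recorded hash."""
    return authorized.get(name) == sri_hash(body)

print(is_authorized("checkout.js", b"console.log('checkout v1');"))  # True
print(is_authorized("checkout.js", b"console.log('skimmer');"))      # False
```

Such an inventory also doubles as the evidence trail of which scripts are authorized and validated that the requirement asks for.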


Inside CISA's Unprecedented Election Security Mission

Despite ongoing efforts by foreign adversaries to influence U.S. elections, attempts to subvert the vote have been largely unsuccessful in past elections. The federal government's continued expansion of advanced threat detection and response strategies during the 2016 and 2020 election cycles played a significant role in thwarting attempts by Russia and others to compromise the integrity of the electoral process. The agency has recently issued warnings about "increasingly aggressive Iranian activity during this election cycle," including reported activities to compromise former President Donald Trump's campaign. The Department of Homeland Security designated election infrastructure as a subset of the government facilities sector in 2017, further recognizing the vast networks of voter registration databases, information technology systems, polling places and voting systems as critical infrastructure. ... The agency over the last six years has rolled out a wide range of no-cost voluntary services and resources aimed at reducing risks to election infrastructure, including vulnerability scanning, physical security assessments and supporting the nationwide adoption of .gov domains, which experts say enhance trust by ensuring that election information is verified and comes from official, credible sources.


The Gen Z Guide to Getting Ahead at Work

As a young person entering the workplace with new ideas and fresh eyes and perspectives, you have unique value, experts said. Don't be shy to share your thoughts. You might know something others don't. That could look like sharing tools or shortcuts you know within apps, ideas or stories about how you've solved problems in the past, Paaras said. You might have valuable experience related to a particular topic or insight into how other people your age see things. Or you might be able to spot the inefficiency or error of how things are regularly done. "You're seeing things for the first time, and you can highlight that," Abrahams said. "Focus on the value you bring." ... Set time aside for chatting, by video or in person, with your colleagues and supervisor. Building good relationships can help foster people's trust and willingness to collaborate with you. It also could be a differentiator in your career advancement. "Your presence needs to be felt by others," Wilk said. Seek out one-on-one meetings and casual conversations. Be ready with thoughts, questions and goals for the conversation, Wilk said. When in doubt, remember people love to talk about themselves, she added. Ask them about their career or experience on the job.


Unified Data: The Missing Piece to the AI Puzzle

“A unified data strategy can significantly reduce the time data scientists spend on accessing, re-formatting, or creating data, thereby improving their effectiveness in developing AI models,” Francis says. Yaad Oren, managing director of SAP Labs US and global head of SAP BTP innovation, explains that incorporating AI across an organization is not possible without trusted and governed data. “A unified data strategy simplifies the data landscape, maintains data context and ensures accurate training of AI models,” he says. This leads to more effective AI deployments and allows customers to harness data to drive deeper insights, faster growth, and more efficiency. “A unified data architecture is crucial for creating a holistic view of business operations and avoiding the ramifications of flawed AI,” he adds. By bringing together disparate data from across the business, a data architecture ensures data context is kept intact, providing a picture of how the data was generated, where it resides, when it was created, and who it relates to. “A strategy that incorporates a data architecture empowers users to access and use data in real time, creating a single source of truth for decision making, and automating data management processes,” Oren explains.


The Next Business Differentiator: 3 Trends Defining The GenAI Market

Different industries have distinct needs and, as with cloud, standardized or general GenAI models and services can’t support the specialized requirements of specific industries. This is especially true for regulated industries that have stringent governance, risk and compliance standards — industry or domain-specific GenAI models will help organizations comply with regulations and compliance standards, ensuring data security and ethical considerations are adhered to. ... The main reason for prioritizing responsible AI is to mitigate bias. Mitigating bias is fundamental in delivering GenAI solutions that have true market applicability and relevance. Ultimately, bias comes from three areas: algorithms, data and humans. Bias from AI algorithms has fallen dramatically in the last decade. Today, algorithms are mostly trustworthy, and the biggest sources of bias in AI are data and humans. When it comes to data, bias exists because of a lack of quality and variety, as well as the often incomplete datasets used to train the algorithm. With humans, there is an inherent lack of trust in AI, whether because of reported threats to people’s livelihoods or because AI hallucinates certain information.


Miniaturized brain-machine interface processes neural signals in real time

The MiBMI's small size and low power are key features, making the system suitable for implantable applications. Its minimal invasiveness ensures safety and practicality for use in clinical and real-life settings. It is also a fully integrated system, meaning that the recording and processing are done on two extremely small chips with a total area of 8 mm². This is the latest in a new class of low-power BMI devices developed at Mahsa Shoaran's Integrated Neurotechnologies Laboratory (INL) at EPFL's IEM and Neuro X institutes. "MiBMI allows us to convert intricate neural activity into readable text with high accuracy and low power consumption. This advancement brings us closer to practical, implantable solutions that can significantly enhance communication abilities for individuals with severe motor impairments," says Shoaran. Brain-to-text conversion involves decoding neural signals generated when a person imagines writing letters or words. In this process, electrodes implanted in the brain record neural activity associated with the motor actions of handwriting. The MiBMI chipset then processes these signals in real time, translating the brain's intended hand movements into corresponding digital text. 


From Transparency to the Perils of Oversharing

While openness fosters collaboration and trust, oversharing can inadvertently lead to micromanagement, misinterpretation, and a loss of trust, undermining the foundations of a healthy team dynamic. ... Transparency without trust can create a blame culture where team members feel exposed to criticism for every minor mistake. This effect can result in individuals trying to cover their tracks or avoid taking risks, undermining the very principles of Agile. ... Decision paralysis: When too much transparency leads to stakeholders or managers second-guessing every team decision, it can create decision paralysis. The team may feel that every move is under a microscope, leading them to slow down or become overly cautious, eroding the trust that they can make decisions independently. ... It’s not just the team that needs to manage transparency effectively; stakeholders also need guidance on interpreting the information they receive. Educating stakeholders on Agile practices and the purpose of various metrics can prevent misinterpretation and unnecessary interference. In other words, run workshops for stakeholders on interpreting data and information from your team.



Quote for the day:

"Success is the progressive realization of predetermined, worthwhile, personal goals." -- Paul J. Meyer

Daily Tech Digest - September 01, 2024

Since cyber risk can’t be eliminated, the question that must be answered is: Can cyber risk at least be managed in a cost-effective manner? The answer is an emphatic yes! ... Identify the sources of cyber risk. These sources can be broken down into various categories. More specifically, there are internal and external threats, as well as potential vulnerabilities that are the basis for cyber risk. Identifying these threats and vulnerabilities is not only a logical place to start the process of managing an organization’s cyber risk, it also will help to frame an approach for addressing an organization’s cyber risk. ... Estimate the likelihood (i.e., probability) that your organization will experience a cyber breach. Of course, any single point estimate of the probability of a cyber breach is just that—an estimate of one possibility from a probability distribution. Thus, rather than estimating a single probability, a range of probabilities could be considered. ... Estimate the maximum cost to an organization if a cyber breach occurs. Here again, a point estimate of the maximum cost resulting from a cyber-attack is just that—an estimate of one possible cost. Thus, rather than estimating a single cost, a range of costs could be considered.
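The likelihood and cost ranges described above combine naturally into bounds on expected loss (probability times cost). A back-of-the-envelope sketch with hypothetical figures:

```python
def expected_loss_range(prob_range, cost_range):
    """Combine a breach-probability range and a maximum-cost range into
    lower and upper bounds on expected loss (probability x cost)."""
    p_low, p_high = prob_range
    c_low, c_high = cost_range
    return p_low * c_low, p_high * c_high

# Hypothetical estimates: 10-30% chance of a breach, $0.5M-$2M maximum cost.
low, high = expected_loss_range((0.10, 0.30), (500_000, 2_000_000))
print(f"${low:,.0f} - ${high:,.0f}")  # $50,000 - $600,000
```

The resulting range gives a rough ceiling on what it is cost-effective to spend annually on controls that reduce either the probability or the impact of a breach.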


How AI is Revolutionizing Prosthetics to Refine Movement

AI prosthetics technology is advancing on several fronts. Researchers at the UK's University of Southampton and Switzerland's EPFL University have, for instance, developed a sensor that allows prosthetic limbs to sense wetness and temperature changes. "This capability helps users adjust their grip on slippery objects, such as wet glasses, enhancing manual dexterity and making the prosthetic feel more like a natural part of their body," Torrang says. Multi-texture surface recognition is another area of important research. Advanced AI algorithms, such as neural networks, can be used to process data from liquid metal sensors embedded in prosthetic hands. "These sensors can distinguish between different textures, enabling users to feel various surfaces," Torrang says. "For example, researchers have developed a system that can accurately detect and differentiate between ten different textures, helping users perform tasks that require precise touch." Natural sensory feedback research is also attracting attention. AI can be used to provide natural sensory feedback through biomimetic stimulation, which mimics the natural signals of the nervous system.


From Chaos to Clarity: CTO’s Guide to Successful Software {Code} Refactoring

As software grows, code can become overly complicated and difficult to understand, making modifications and extensions challenging. Refactoring simplifies and clarifies the code, enhancing its readability and maintainability. Signs of poor performance: if software performance degrades or fails to meet efficiency benchmarks, refactoring can optimize it and improve its speed and responsiveness. Migration to newer technologies and libraries: when migrating legacy systems to newer technologies and libraries, code refactoring ensures smooth integration and compatibility, preventing potential issues down the line. Frequent bugs: frequent bugs and system crashes often indicate a messy codebase that requires cleanup. If your team spends more time tracking down bugs than developing new features, code refactoring can improve stability and reliability. Onboarding a team of new developers: onboarding new developers is another instance where refactoring is beneficial. Standardizing the code base ensures new team members can understand and work with it more effectively. Code issues
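As a small illustration of the kind of cleanup refactoring delivers, the hypothetical example below extracts a duplicated discount rule into data, shortening the code and making it easier to extend without changing its behavior:

```python
# Before: duplicated conditional logic of the kind refactoring targets.
def total_old(items):
    total = 0.0
    for item in items:
        if item["type"] == "book":
            total += item["price"] * 0.75  # books get 25% off
        elif item["type"] == "food":
            total += item["price"] * 0.5   # food gets 50% off
        else:
            total += item["price"]
    return total

# After: the discount rule is extracted into data, so adding a new
# category is one line and the behavior is unchanged.
DISCOUNTS = {"book": 0.75, "food": 0.5}

def total(items):
    return sum(item["price"] * DISCOUNTS.get(item["type"], 1.0) for item in items)

items = [{"type": "book", "price": 100}, {"type": "toy", "price": 50}]
print(total_old(items), total(items))  # 125.0 125.0
```

Keeping both versions side by side under a shared test, as here, is the standard safety net while refactoring: the new code must reproduce the old results exactly.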


How to Win the War Against Bad Master Data

The most immediate chance to make a difference lies within your existing dataset. Take the initiative to compare your supplier and customer master data with reliable external sources, such as government databases, regulatory lists, and other trusted entities, to pinpoint discrepancies and omissions. Consider this approach a form of “data governance as a service”: a shortcut to data quality in which you rely on comparison with authoritative data sources to make sure fields are the right length, in the right format, and, even more important, accurate. This task may require significant effort (unless automated master data validation and enrichment is employed), but it can provide an immediate ROI. Each corrected error and updated entry contributes to greater compliance, lower risk and enhanced operational efficiency within the organization. However, many companies lack a consistent process for cleaning data, and even among those with a process in place, the scope and frequency of data cleansing is often insufficient. The best data quality comes from continuous automated cleansing and enrichment.
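Automated format checks of the kind described above can be sketched in a few lines; the field names and patterns below are illustrative assumptions, not an authoritative rule set:

```python
import re

# Minimal field checks of the kind used when validating supplier or customer
# master data before comparing it with authoritative sources.
RULES = {
    "duns": re.compile(r"^\d{9}$"),        # D-U-N-S number: exactly nine digits
    "country": re.compile(r"^[A-Z]{2}$"),  # ISO 3166-1 alpha-2 code, uppercase
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}

def invalid_fields(record: dict) -> list:
    """Return the names of fields that fail their format rule."""
    return [field for field, rule in RULES.items()
            if field in record and not rule.match(str(record[field]))]

supplier = {"duns": "12345678", "country": "us", "email": "ap@acme.com"}
print(invalid_fields(supplier))  # ['duns', 'country'] - too short, lowercase
```

Format checks only establish that a value is plausible; confirming it is accurate still requires the comparison against the authoritative source itself.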


3 Ways to Boost Cybersecurity Defenses With Limited Resources

Assume-breach accepts that breaches are inevitable, shifting the focus from preventing all breaches to minimizing the impact of a breach through security measures, protocols and tools that are designed with the assumption that an attacker may have already compromised parts of the network. Paired with the assume-breach mindset, these security measures, protocols and tools focus on protecting data, detecting unusual behavior and responding quickly to potential threats. Just as cars are equipped with seatbelts and airbags to reduce the fallout of a crash, assume-breach encourages organizations to put proactive measures in place to reduce the impact and damage when the worst occurs. ... In the event a cyber attack does occur, having a well-tested and resilient plan in place is key to minimize impacts. As the entire organization participates in these practices and trainings, leaders can focus on implementing assume-breach security measures, protocols and tools. These measures should include enhancing real-time visibility, identifying vulnerabilities, blocking known ransomware points and strategic asset segmentation. 


The Future of LLMs Is in Your Pocket

The first reason is that, due to the cost of GPUs, generative AI has broken the near-zero marginal cost model that SaaS has enjoyed. Today, anything bundling generative AI commands a high seat price simply to make the product economically viable. This detachment from underlying value is consequential for many products that can’t price optimally to maximize revenue. In practice, some products are constrained by a pricing floor (e.g., it is impossible to discount 50% to 10x the volume), and some features can’t be launched because the upsell doesn’t pay for the inference cost ... The second reason is that the user experience with remote models could be better: generative AI enables useful new features, but they often come at the expense of a worse experience. Applications that didn’t depend on an internet connection (e.g., photo editors) now require it. Remote inference introduces additional friction, such as latency. Local models remove the dependency on an internet connection. The third reason has to do with how models handle user data. This plays out in two dimensions. First, serious concerns have been raised about sharing growing amounts of private information with AI systems.


GenOps: learning from the world of microservices and traditional DevOps

How do the operational requirements of a generative AI application differ from other applications? With traditional applications, the unit of operationalisation is the microservice. A discrete, functional unit of code, packaged up into a container and deployed into a container-native runtime such as Kubernetes. For generative AI applications, the comparative unit is the generative AI agent: also a discrete, functional unit of code defined to handle a specific task, but with some additional constituent components that make it more than ‘just’ a microservice ... The Reasoning Loop is essentially the full scope of a microservice, and the model and Tool definitions are its additional powers that make it into something more. Importantly, although the Reasoning Loop logic is just code and therefore deterministic in nature, it is driven by the responses from non-deterministic AI models, and this non-deterministic nature is what provides the need for the Tool, as the agent ‘chooses for itself’ which external service should be used to fulfill a task. A fully deterministic microservice has no need for this ‘cookbook’ of Tools for it to select from: Its calls to external services are pre-determined and hard coded into the Reasoning Loop.
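A minimal sketch of the Reasoning Loop idea: the loop itself is deterministic code, but which Tool gets called is decided by the model's response rather than being hard-coded. The tool registry and the mock model below are assumptions for illustration only; a real agent would call an actual LLM and iterate until the model signals completion:

```python
# Tool definitions: external capabilities the agent may call.
TOOLS = {
    "calculator": lambda arg: str(eval(arg, {"__builtins__": {}})),  # toy sandbox
    "echo": lambda arg: arg,
}

def mock_model(task: str) -> tuple:
    """Stand-in for a non-deterministic LLM that picks a tool for a task."""
    if any(ch.isdigit() for ch in task):
        return "calculator", task
    return "echo", task

def agent(task: str) -> str:
    """The Reasoning Loop: deterministic code, but unlike a microservice,
    the call to an external service is chosen by the model, not hard-coded."""
    tool_name, tool_arg = mock_model(task)
    return TOOLS[tool_name](tool_arg)

print(agent("2 + 3"))  # '5'
print(agent("hello"))  # 'hello'
```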


Saudi Arabia strengthening cyber resilience through skills development

Currently, four of the top 10 fastest-growing job roles in Saudi Arabia fall within the fields of cybersecurity, data analysis, and software development. As the demand for such expertise far outstrips supply, the government, industry, and academia must collaborate to develop and expand pathways to nurture talent in this field. Enhanced curricula and specialized programs will help upskill students in data protection, while partnerships with global tech companies can facilitate knowledge transfer and provide access to cutting-edge technologies and methodologies in public and private sector organizations. The Saudi government’s investments in initiatives to enhance digital skills, including a $1.2 billion plan to train 100,000 youths by 2030 in critical fields like digital security, are a crucial step in this direction. Saudi Arabia today outpaces the global average in cybersecurity trends, with 3.1 percent compared to the global average of 2.5 percent. An overwhelming 79 percent of Saudi employees anticipate substantial shifts in their work dynamics due to AI advancements. This is reflected in the rise of new learners in the Kingdom who are building skill proficiencies and acquiring new digital skills to boost their economic mobility.


How Financial Firms Can Build Better Data Strategies

For financial organizations, data strategies are often driven by CISOs and tend to focus on data protection and security. This enables regulatory and operational compliance by ensuring the right people can access the right data at the right time while still aligning to the corporate security and risk stance. But this now comes within the context of the emergence of artificial intelligence and the growing sense that firms need to leverage it to gain a competitive edge. For example, loan data can be made more valuable with AI-driven analytics. Or banks can use AI tools to identify patterns related to fraud or compliance challenges to help them avoid potential regulatory pitfalls. ... New tools often seem like the answer to every data challenge, but the right tools build on what’s already in place. To ensure that new solutions deliver a strategic advantage, leaders must ask simple questions: What’s the business need? Why am I pursuing this approach or tool? This requires an honest assessment of current data conditions, business needs and coworker skill sets. While many businesses are fully compliant and meet regulations, their data may not be highly active or in great shape. Identifying issues lets leaders define business needs.


Achieving digital transformation in fintech one step at a time

The fintech industry continues to experience unprecedented growth year after year. According to a Statista survey, there are more than 3 billion fintech customers worldwide, a number expected to grow to 4.8 billion by 2028. Advanced financial technologies are gradually replacing traditional services. For instance, a recent study by the American Bankers Association found that 71% of users prefer managing their financial affairs online (48% of them via mobile apps), while only 9% of clients would rather go to a physical bank branch. In addition, more and more people around the world are favoring cryptocurrencies over traditional currencies as payment and investment tools, as shown by Forbes statistics indicating that the capitalization of the crypto market has exceeded 2.5 trillion dollars. As is well known, demand creates supply. Such keen interest in modern fintech instruments and services from users all over the world generates constant growth in supply in this sphere. Every year, plenty of new companies emerge, while existing ones introduce numerous innovations to keep their regular customers and attract new ones.



Quote for the day:

"True greatness, true leadership, is achieved not by reducing men to one's service but in giving oneself in selfless service to them." -- J. Oswald Sanders