Daily Tech Digest - March 04, 2024

Evolving Landscape of ISO Standards for GenAI

The burgeoning field of Generative AI (GenAI) presents immense potential for innovation and societal benefit. However, navigating this landscape responsibly requires addressing potential concerns regarding its development and application. Recognizing this need, the International Organization for Standardization (ISO) has embarked on the crucial task of establishing a comprehensive set of standards. ... A shared understanding of fundamental terminology is vital in any field. ISO/IEC 22989 serves as the cornerstone by establishing a common language within the AI community. This foundational standard precisely defines key terms like “artificial intelligence,” “machine learning,” and “deep learning,” ensuring clear communication and fostering collaboration and knowledge sharing among stakeholders. ... Similar to the need for blueprints in construction, ISO/IEC 23053 provides a robust framework for AI development. This standard outlines a generic structure for AI systems based on machine learning (ML) technology. This framework serves as a guide for developers, enabling them to adopt a systematic approach to designing and implementing GenAI solutions. 


Your Face For Sale: Anyone Can Legally Gather & Market Your Facial Data

We need a range of regulations on the collection and modification of facial information. We also need a stronger legal status for facial information itself. Thankfully, some developments in this area are looking promising. Experts at the University of Technology Sydney have proposed a comprehensive legal framework for regulating the use of facial recognition technology under Australian law. It contains proposals for regulating the first stage of non-consensual activity: the collection of personal information. That may help in the development of new laws. Regarding photo modification using AI, we’ll have to wait for announcements from the newly established government AI expert group working to develop “safe and responsible AI practices”. There are no specific discussions about a higher level of protection for our facial information in general. However, the government’s recent response to the Attorney-General’s Privacy Act review has some promising provisions. The government has agreed that further consideration should be given to enhanced risk assessment requirements in the context of facial recognition technology and other uses of biometric information.


Affective Computing: Scientists Connect Human Emotions With AI

Affective computing is a multidisciplinary field integrating computer science, engineering, psychology, neuroscience, and other related disciplines. A new and comprehensive review on affective computing was recently published in the journal Intelligent Computing. It outlines recent advancements, challenges, and future trends. Affective computing enables machines to perceive, recognize, understand, and respond to human emotions. It has various applications across different sectors, such as education, healthcare, business services and the integration of science and art. Emotional intelligence plays a significant role in human-machine interactions, and affective computing has the potential to significantly enhance these interactions. ... Affective computing, a field that combines technology with the nuanced understanding of human emotions, is experiencing surges in innovation and related ethical considerations. Innovations identified in the review include emotion-generation techniques that enhance the naturalness of human-computer interactions by increasing the realism of the facial expressions and body movements of avatars and robots. 


The open source problem

Over the years, I’ve trended toward permissive, Apache-style licensing, asserting that it’s better for community development. But is that true? It’s hard to argue against the broad community that develops Linux, for example, which is governed by the GPL. Because freedom is baked into the software, it’s harder (though not impossible) to fracture that community by forking the project. To me, this feels critical, and it’s one reason I’m revisiting the importance of software freedom (GPL, copyleft), and not merely developer/user freedom (Apache). If nothing else, as tedious as the internecine bickering was in the early debates between free software and open source (GPL versus Apache), that tension was good for software, generally. It gave project maintainers a choice in a way they really don’t have today because copyleft options disappeared when cloud came along and never recovered. Even corporations, those “evil overlords” as some believe, tended to use free and open source licenses in the pre-cloud world because they were useful. Today companies invent new licenses because the Free Software Foundation and OSI have been living in the past while software charged into the future. Individual and corporate developers lost choice along the way.


Researchers create AI worms that can spread from one system to another

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers has created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research. ... To create the generative AI worm, the researchers turned to a so-called “adversarial self-replicating prompt.” This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies. This is broadly similar to traditional SQL injection and buffer overflow attacks, the researchers say. To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and an open-source LLM, LLaVA. They then found two ways to exploit the system—by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.
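
To make the propagation path concrete, here is a minimal sketch of the insecure pattern described above: an email assistant that feeds untrusted message text into a generative model and auto-forwards the model's output. The call_llm stub and inbox structure are illustrative assumptions, not the researchers' actual system; the point is simply that anything the model echoes into its reply travels on to the next inbox.

```python
# Illustrative sketch only: the model call is a stub and no real prompt or
# payload is included. It shows why auto-forwarding model output built from
# untrusted input lets an injected instruction hop between systems.

def call_llm(prompt: str) -> str:
    # Placeholder for a generative-AI call; a real assistant would return the
    # model's reply, including anything the prompt coaxed it into repeating.
    return "Drafted reply based on: " + prompt[-80:]

def handle_incoming_email(body: str) -> str:
    # Untrusted email text is concatenated directly into the prompt.
    return call_llm(f"Summarize and draft a polite reply to:\n{body}")

def assistant_step(inboxes: dict, sender: str, recipient: str) -> None:
    for body in inboxes[sender]:
        reply = handle_incoming_email(body)
        # Auto-forwarding the reply is the step that propagates whatever the
        # model reproduced from the incoming message.
        inboxes[recipient].append(reply)

inboxes = {"alice": ["Quarterly report attached."], "bob": []}
assistant_step(inboxes, "alice", "bob")
print(inboxes["bob"])
```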


Do You Overthink? How to Avoid Analysis Paralysis in Decision Making

Welcome to the world of analysis paralysis. This phenomenon occurs when an influx of information and options leads to overthinking, creating a deadlock in decision-making. Decision makers, driven by the fear of making the wrong choice or seeking the perfect solution, may find themselves caught in a loop of analysis, reevaluation, and hesitation, consequently losing sight of the overall goal. ... Analysis paralysis impacts decision making by stifling risk taking, preventing open dialogue, and constraining innovation—all of which are essential elements for successful technology development. It often leads to mental exhaustion, reduced concentration, and increased stress from endlessly evaluating information, also known as decision fatigue. The implications of analysis paralysis include missed opportunities due to ongoing hesitation and innovative potential being restricted by cautious decision making. ... In the technology sector, the consequences of poor decisions can be far-reaching, potentially unraveling extensive work and achievements. Fear of this happening is heightened due to the sector’s competitive nature. Teams worry that a single misstep could have a cascading negative impact.


30 years of the CISO role – how things have changed since Steve Katz

Katz had no idea what the CISO job was when he accepted it in 1995. Neither did Citicorp. “They said you’ve got a blank cheque, build something great — whatever the heck it is,” Katz recounted during the 2021 podcast. “The CEO said, ‘The board has no idea, just go do something.’” Citicorp gave Katz just two directives after hiring him: “Build the best cybersecurity department in the world” and “go out and spend time with our top international banking customers to limit the damage.” ... today’s CISO must be able to communicate cyber threats in terms that line-of-business leaders can understand almost instantly. “It’s the ability to articulate risk in a way that is related to the business processes in the organization,” says Fitzgerald. “You need to be able to translate what risk means. Does it mean I can’t run business operations? Does it mean we won’t be able to treat patients in our hospital because we had a ransomware attack?” Deaner says CISOs have an obvious role to play in core infosec initiatives such as implementing a business continuity plan or disaster recovery testing. ... “People in CISO circles absolutely talk a lot about liability. We’re all concerned about it,” Deaner acknowledges. “People are taking the changes to those regulations very seriously because they’re there for a reason.”


Vishing, Smishing Thrive in Gap in Enterprise, CSP Security Views

There is a significant gap between enterprises’ high expectations that their communications service provider will provide the security needed to protect them against voice and messaging scams and the level of security those CSPs offer, according to telecom and cybersecurity software maker Enea. Bad actors and state-sponsored threat groups, armed with the latest generative AI tools, are rushing to exploit that gap, a trend that is apparent in the skyrocketing numbers of smishing (text-based phishing) and vishing (voice-based fraud) attacks hitting enterprises, and the jump in all phishing categories since the November 2022 release of the ChatGPT chatbot by OpenAI, according to a report this week by Enea. ... “Maintaining and enhancing mobile network security is a never-ending challenge for CSPs,” the report’s authors wrote. “Mobile networks are constantly evolving – and continually being threatened by a range of threat actors who may have different objectives, but all of whom can exploit vulnerabilities and execute breaches that impact millions of subscribers and enterprises and can be highly costly to remediate.”


Causal AI: AI Confesses Why It Did What It Did

Traditional AI models are fixed in time and understand nothing. Causal AI is a different animal entirely. “Causal AI is dynamic, whereas comparable tools are static. Causal AI represents how an event impacts the world later. Such a model can be queried to find out how things might work,” says Brent Field at Infosys Consulting. “On the other hand, traditional machine learning models build a static representation of what correlates with what. They tend not to work well when the world changes, something statisticians call nonergodicity,” he says. It’s important to grok why this one point of nonergodicity makes such a crucial difference to almost everything we do. “Nonergodicity is everywhere. It’s this one reason why money managers generally underperform the S&P 500 index funds. It’s why election polls are often off by many percentage points. ... Without knowing the cause of an event or potential outcome, the knowledge we extract from AI is largely backward facing even when it is forward predicting. Outputs based on historical data and events alone are by nature handicapped and sometimes useless. Causal AI seeks to remedy that.
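
As a rough illustration of that distinction (a sketch of my own, not from the article), the toy model below has a confounder Z that drives both X and Y. A purely correlational model sees a strong X-Y relationship, while querying the causal structure with an intervention, do(X = 2), shows that forcing X has no effect on Y at all.

```python
# Toy structural causal model: Z -> X and Z -> Y, with no X -> Y arrow.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def simulate(intervene_x=None):
    z = rng.normal(size=n)
    # do(X = x0) replaces X's mechanism; otherwise X is driven by Z.
    x = np.full(n, intervene_x) if intervene_x is not None else z + 0.1 * rng.normal(size=n)
    y = z + 0.1 * rng.normal(size=n)   # Y depends only on Z, never on X
    return x, y

x, y = simulate()
print("observational corr(X, Y):", round(float(np.corrcoef(x, y)[0, 1]), 3))   # ~0.99

x_do, y_do = simulate(intervene_x=2.0)
print("E[Y] =", round(float(y.mean()), 3), "  E[Y | do(X=2)] =", round(float(y_do.mean()), 3))
```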


Leveraging power quality intelligence to drive data center sustainability

The challenge is that some data centers lack the power monitoring capabilities necessary for achieving heightened efficiency and sustainability. Moreover, continuous power quality monitoring is often lacking. Many rely on rudimentary measurements, such as voltage, current, and power parameters, gathered by intelligent rack power distribution units (PDUs), which are then transmitted to DCIM, BMS, and other infrastructure management and monitoring systems. Some consider power quality only during initial setup or occasionally revisit it when reconfiguring IT setups. This underscores the critical role of intelligent PDUs in delivering robust power quality monitoring and the imperative for data center and facility managers to steer efforts toward increased efficiency and sustainability. Certain power quality issues can have detrimental effects on the electrical reliability of a data center, leading to costly unplanned downtime and posing challenges in enhancing sustainability. ... These power quality issues can profoundly affect a data center's functionality and dependability. They may result in unforeseen downtime, harm to equipment, data loss or corruption, and reduced network efficiency.
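
For a sense of what power quality adds beyond those rudimentary readings, the sketch below derives RMS voltage and current, real versus apparent power, and power factor from raw waveform samples. The waveform values and sampling setup are illustrative assumptions, not tied to any particular PDU, DCIM, or BMS product.

```python
# Derive basic power-quality metrics from sampled voltage/current waveforms.
import numpy as np

fs, f_line = 10_000, 50                                  # sample rate (Hz), line frequency (Hz)
t = np.arange(0, 0.2, 1 / fs)                            # ten full 50 Hz cycles
voltage = 325 * np.sin(2 * np.pi * f_line * t)           # ~230 V RMS supply
current = 14 * np.sin(2 * np.pi * f_line * t - 0.35)     # lagging load current
current += 2 * np.sin(2 * np.pi * 3 * f_line * t)        # third-harmonic distortion

v_rms = np.sqrt(np.mean(voltage ** 2))
i_rms = np.sqrt(np.mean(current ** 2))
real_power = np.mean(voltage * current)                  # W, mean instantaneous power
apparent_power = v_rms * i_rms                           # VA
power_factor = real_power / apparent_power               # < 1 signals displacement/distortion

print(f"Vrms={v_rms:.1f} V  Irms={i_rms:.1f} A  P={real_power:.0f} W  "
      f"S={apparent_power:.0f} VA  PF={power_factor:.2f}")
```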



Quote for the day:

"If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work." -- Thomas J. Watson

Daily Tech Digest - March 03, 2024

The most popular neural network styles and how they work

Feedforward networks are perhaps the most archetypal neural net. They offer a much higher degree of flexibility than perceptrons but still are fairly simple. The biggest difference in a feedforward network is that it uses more sophisticated activation functions and usually incorporates more than one layer. The activation function in a feedforward network is not just 0/1, or on/off: the nodes output a dynamic variable. ... Recurrent neural networks, or RNNs, are a style of neural network that involve data moving backward among layers. This style of neural network is also known as a cyclical graph. The backward movement opens up a variety of more sophisticated learning techniques, and also makes RNNs more complex than some other neural nets. We can say that RNNs incorporate some form of feedback. ... Convolutional neural networks, or CNNs, are designed for processing grids of data. In particular, that means images. They are used as a component in the learning and loss phase of generative AI models like stable diffusion, and for many image classification tasks. CNNs use matrix filters that act like a window moving across the two-dimensional source data, extracting the information in their view and relating it together.
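
A minimal forward pass makes the feedforward description concrete: a few fully connected layers whose nodes emit a continuous value through a nonlinear activation, rather than the perceptron's hard 0/1 output. The layer sizes and random weights below are toy values chosen purely for illustration.

```python
# Tiny feedforward network: activation(W @ x + b) applied layer by layer.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)          # continuous, nonlinear activation (not a 0/1 step)

def feedforward(x, weights, biases):
    a = x
    for w, b in zip(weights, biases):
        a = relu(w @ a + b)            # each layer transforms the previous layer's output
    return a

rng = np.random.default_rng(42)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]   # 3 inputs -> 4 hidden -> 2 outputs
biases = [np.zeros(4), np.zeros(2)]
print(feedforward(np.array([0.5, -1.0, 2.0]), weights, biases))
```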


The startup CIO’s guide to formalizing IT for liquidity events

“You have to stop fixing problems in the data layer, relying on data scientists to cobble together the numbers you need. And if continuing that approach is advocated by the executives you work with, if it’s considered ‘good enough,’ quit,” he says. “Getting the numbers right at the source requires that you straighten out not only the systems that hold the data, all those pipelines of information, but also the processes whereby that data is captured and managed. No tool will ever entirely erase the friction of getting people to enter their data in a CRM.” The second piece to getting the numbers right comes at the end: closing the books. While this process is a near ubiquitous struggle for all growing companies, Hoyt offers two points of optimism. “First,” he explains, “many teams struggle to close the books simply because the company hasn’t invested in the proper tools. They’ve kicked the can down the street. And second, you have a clear metric of improvement: the number of days taken to close.” Hoyt suggests investing in the proper tools and then trying to shave the days-to-close each quarter. Get your numbers right, secure your company, bring it into compliance, and iron out your ops and infrastructure. 


Majority of commercial codebases contain high-risk open-source code

Advocates of open-source software have long argued that many eyes on code lead to fewer bugs and vulnerabilities, and the report doesn’t disprove that assertion, McGuire said. “If anything, the report supports that belief,” he said. “The fact that there are so many disclosed vulnerabilities and CVEs serves as a testament to how active, vigilant, and reactive the open-source community is, especially when it comes to addressing security issues. It’s this very community that is doing the discovery, disclosure, and patching work.” However, users of open-source software aren’t doing a good job of managing it or implementing the fixes and workarounds provided by the open-source community, he said. The primary purpose of the report is to raise awareness about these issues and to help users of open-source software better mitigate the risks, he said. “We would never recommend any software producer avoid using, or tamp down their usage, of open source,” he added. “In fact, we would argue the opposite, as the benefits of open source far outweigh the risks.” Open-source software has accelerated digital transformation and allowed companies to develop innovative applications that consumers want, he said. 


From gatekeeper to guardian: Why CISOs must embrace their inner business superhero

You, the CISO, are no longer just the security guard at the front gate. You're the city planner, the risk management consultant, the chief resilience officer, and the chief of police all rolled into one. You need to understand the flow of traffic, the critical infrastructure, and the potential vulnerabilities lurking in every alleyway. But how do we, the guardians of the digital realm, transform into these business superheroes? Fear not, fellow CISOs, for the path to upskilling and growth is paved with strategic learning, effective communication, and more than a dash of inspirational or motivational leadership. ... As the lone wolf days have ended, so too have the days when technical expertise alone could guarantee a CISO’s success. Today's CISO needs to be a voracious learner, constantly expanding their knowledge and skills. ... Failure to effectively communicate is a career killer for any CXO. To be influential, especially with the C-suite, CISOs must learn to speak in ways understood by their C-suite peers. Imagine how your eyes may glaze over when a CFO starts talking capex, opex, or EBITDA. Realize the same will happen for these cybersecurity “outsiders.”


Looking good, feeling safe – data center security by design

For data centers in shared spaces, sometimes turning data halls into display features is a way to make them secure. Keeping compute in a secure but openly visible space means it’s harder to do anything unnoticed. It may also help some engineers be more mindful about keeping the halls tidy and cabling neat. “Some people keep data centers behind closed walls and keep them hidden and private. Others use them as features,” says Nick Ewing, managing director at UK modular data center provider EfficiencyIT. “The best ones are the ones where the customers like to make a feature of the environment and use it as a bit of a display.” An example he cites is the Wellcome Sanger Institute in Cambridge, where they have four data center quadrants. Each quadrant is about 100 racks; they have man traps at either end of the data center corridor. But one end of the main quadrant is full of glass. “They have an LED display, which is talking about how many cores of compute, how much storage they’ve got, how many genomic sequences they've sequenced that day,” he says. “They've used it as a feature and used it to their advantage.”


Neuromorphic computing: The future of IoT

The adoption of neuromorphic computing in IoT promises many benefits, ranging from enhanced processing power and energy efficiency to increased reliability and adaptability. Here are some key advantages:
- More Powerful AI: Neuromorphic chips enable IoT devices to handle complex tasks with unprecedented speed and efficiency. By collocating memory and processing and leveraging parallel processing capabilities, these chips overcome the limitations of traditional architectures, resulting in near-real-time decision-making and enhanced cognitive abilities.
- Lower Power Consumption: One of the most significant advantages of neuromorphic computing is its energy efficiency. By adopting an event-driven approach and utilizing components like memristors, neuromorphic systems minimize energy consumption while maximizing performance, making them ideal for power-constrained IoT environments.
- Extensive Edge Networks: With the proliferation of edge computing, there is a growing need for IoT devices that can process data locally in real-time. Neuromorphic computing addresses this need by providing the processing power and adaptability required to run advanced applications at the edge, reducing reliance on centralized servers and improving overall system responsiveness.
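
To illustrate the event-driven principle behind these advantages, here is a toy leaky integrate-and-fire neuron: it integrates input over time and produces output (a spike) only when a threshold is crossed, instead of computing on every clock tick for every unit. The parameters are illustrative and are not taken from any real neuromorphic chip.

```python
# Leaky integrate-and-fire neuron: activity happens only when a spike fires.
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    v, spike_times = 0.0, []
    for step, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)        # leak toward rest, integrate input
        if v >= v_thresh:                  # event: emit a spike, then reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

rng = np.random.default_rng(1)
drive = rng.uniform(0, 150, size=500)      # 0.5 s of noisy input drive
print(f"{len(lif_neuron(drive))} spikes in 0.5 s of simulated input")
```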


Decentralizing the AR Cloud: Blockchain's Role in Safeguarding User Privacy

For devices to interpret the world, their cameras need access to some kind of digital counterpart that they can cross-reference. And that digital counterpart of the world is much too complex to fit inside one device. Therefore, the AR cloud has been developed. The AR cloud is a network of computers that work to help devices understand the physical world. ... The AR cloud is akin to an API to the world. The implications for applications that require knowledge about location, context, and more are considerable. In AR, the data is intimate data about where we are, who we are with, what we’re saying, looking at, and even what our living quarters look like. AR devices can read our facial expressions, and more, similar to how the Apple Watch can measure the heart rates of its wearers. Digital service providers will have access to a bevy of information and also insight into our thinking, wants, needs, and desires. Storing that data in a centralized server that is opaque is cause for concern. Blockchain allows people to take that same intimate private data, and put it on their own server from which they could access the wondrous world of AR minus such egregious privacy concerns.


Five ways AI is helping to reduce supply chain attacks on DevOps teams

Attackers are using AI to penetrate an endpoint to steal as many forms of privileged access credentials as they can find, then use those credentials to attack other endpoints and move throughout a network. Closing the gaps between identities and endpoints is a great use case for AI. A parallel development is also gaining momentum across the leading extended detection and response (XDR) providers. CrowdStrike co-founder and CEO George Kurtz told the keynote audience at the company’s annual Fal.Con event last year, “One of the areas that we’ve really pioneered is that we can take weak signals from across different endpoints. And we can link these together to find novel detections. We’re now extending that to our third-party partners so that we can look at other weak signals across not only endpoints but across domains and come up with a novel detection.” Leading XDR platform providers include Broadcom, Cisco, CrowdStrike, Fortinet, Microsoft, Palo Alto Networks, SentinelOne, Sophos, TEHTRIS, Trend Micro and VMWare. Enhancing LLMs with telemetry and human-annotated data defines the future of endpoint security.


Blockchain transparency is a bug

Transparency isn’t a feature of decentralization that is truly needed to perform on-chain transactions securely — it’s a bug that forces Web3 users to expose their most sensitive financial data to anyone who wants to see it. Several blockchain marketing tools have emerged over the past few years, allowing marketers and salespeople to use the freely flowing on-chain data for user insights and targeted advertising. But this time, it’s not just behavioral data that is analyzed. Now, your most sensitive financial information is also added to the mix. Web3 will never become mainstream unless we manage to solve this transparency problem. Blockchain and Web3 were an escape from centralized power, making information transparent so that centralized entities cannot own one’s data. Then 2020 came, Web3 and NFTs boomed, and many started talking about how free flowing, available-to-all data is a clear improvement from your data being “stolen” by big data companies as a customer. Some may think if everyone can see the data, transparency will empower users to take ownership of and profit from their own data. Yet, transparency does not mean data can’t be appropriated nor that users are really in control.


Key Considerations to Effectively Secure Your CI/CD Pipeline

Effective security in a CI/CD pipeline begins with the definition of clear and project-specific security policies. These policies should be tailored to the unique requirements and risks associated with each project. Whether it's compliance standards, data protection regulations, or industry-specific security measures (e.g., PCI DSS, HDS, FedRamp), organizations need to define and enforce policies that align with their security objectives. Once security policies are defined, automation plays a crucial role in their enforcement. Automated tools can scan code, infrastructure configurations, and deployment artifacts to ensure compliance with established security policies. This automation not only accelerates the security validation process but also reduces the likelihood of human error, ensuring consistent and reliable enforcement. In the DevSecOps paradigm, the integration of security gates within the CI/CD pipeline is pivotal to ensuring that security measures are an inherent part of the software development lifecycle. If you set up security scans or controls that users can bypass, those methods become totally useless — you want them to become mandatory.
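
One common way to make such a gate truly mandatory is a small policy-enforcement step that runs after the scanners and fails the pipeline (non-zero exit) when findings exceed the agreed thresholds. The sketch below assumes a hypothetical JSON findings file and severity policy; it illustrates the pattern rather than any specific scanner's real output format.

```python
# Hypothetical security gate: exit non-zero when findings exceed policy limits.
import json
import sys

POLICY = {"critical": 0, "high": 0, "medium": 5}       # max allowed findings per severity

def enforce(findings_path: str) -> int:
    with open(findings_path) as f:
        findings = json.load(f)                        # e.g. [{"id": "...", "severity": "high"}, ...]
    counts = {}
    for item in findings:
        sev = str(item.get("severity", "unknown")).lower()
        counts[sev] = counts.get(sev, 0) + 1
    violations = {s: c for s, c in counts.items() if c > POLICY.get(s, 0)}
    if violations:
        print(f"Security gate FAILED: {violations} exceed policy {POLICY}")
        return 1                                        # non-zero exit blocks the pipeline stage
    print("Security gate passed")
    return 0

if __name__ == "__main__":
    sys.exit(enforce(sys.argv[1]))
```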



Quote for the day:

"It is better to fail in originality than to succeed in imitation." -- Herman Melville

Daily Tech Digest - March 02, 2024

Rust on the Rise: New Advocacy Expected to Advance Adoption

Recent advocacy and research efforts from agencies like the National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA), National Institute of Standards and Technology (NIST), and ONCD “can serve as valuable evidence of the considerable risk memory-safety vulnerabilities pose to our digital ecosystem,” the Rust Foundation’s Executive Director & CEO, Rebecca Rumbul, told The New Stack. Moreover, Rumbul said the Rust Foundation believes that the Rust programming language is the most powerful tool available to address critical infrastructure security gaps. “As an organization, we are steadfast in our commitment to further strengthening the security of Rust through programs like our Security Initiative,” she said. Meanwhile, looking specifically at software development for space systems, the ONCD report says both memory-safe and memory-unsafe programming languages meet the organization’s requirements for developing space systems. “At this time, the most widely used languages that meet all three properties are C and C++, which are not memory-safe programming languages,” the report said.


The Power of Hyperautomation in Banking

Hyperautomation significantly improves operational efficiency within banks as it helps automate routine processes, including document processing, transaction reconciliation, and data entry, decreasing the need for manual intervention. This not only streamlines processes but also reduces errors, leading to a more reliable and cost-effective operation. Banks can use hyperautomation to offer personalized, 24/7 services to their customers. Chatbots & virtual assistants powered by Artificial Intelligence can respond to inquiries as well as perform transactions around the clock. Faster response times, coupled with the ability to tailor services to individual customer requirements, lead to enhanced customer satisfaction and loyalty. “Hyperautomation facilitates organizations to improve customer experience by reducing the friction in user self-service applications and streamlining broken onboarding processes. It enables faster support and sales query resolution through relevant integrations, AI/ML, and assistive technologies,” says Arvind Jha, Former General Manager – Product Management and Marketing, Newgen Software.


What Is Data Completeness and Why Is It Important?

Data completeness is an important aspect of Data Quality. Data Quality refers to how accurate and reliable the data is overall. Data completeness specifically focuses on missing data or how complete the data is, rather than concerns about inaccurate or duplicated data. A lack of data completeness is normally the result of information that was never collected. For example, if a customer’s name and email address are supposed to be collected, but the email address is missing, it is difficult to communicate with the customer. ... Missing chunks of information restrict or bias the decision-making process. Attempting to perform analytics with incomplete data can produce blind spots and biases, and result in missed opportunities. Currently, business leaders use data analytics to make decisions that range from marketing to investment strategies to medical diagnostics. In some situations, data missing key pieces of information is still used, which can lead to dangerous mistakes and false conclusions. Assessing and improving data completeness should be done before performing analytics.
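
A quick way to quantify completeness before analytics is to measure the fill rate of required fields, as in the short pandas sketch below (the column names and records are illustrative).

```python
# Measure how complete required customer fields are before running analytics.
import pandas as pd

customers = pd.DataFrame({
    "name":  ["Ada", "Grace", "Linus", None],
    "email": ["ada@example.com", None, "linus@example.com", None],
})

required = ["name", "email"]
per_field = customers[required].notna().mean()                    # fill rate per column
fully_complete = customers[required].notna().all(axis=1).mean()   # share of complete records

print(per_field.to_string())                                      # name 0.75, email 0.50
print(f"records with all required fields: {fully_complete:.0%}")
print("unreachable customers:", customers.loc[customers["email"].isna(), "name"].tolist())
```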


A socio-technical approach to data management is crucial in our decentralised world

To improve the odds of successfully building an effective data management strategy, working with a trusted and experienced data partner to help shift the organisation’s data culture is a crucial - and often missing - step. The Data and Analytics Leadership Annual Executive Survey 2023 found that cultural factors are the biggest obstacle to delivering value from data investments. Data fabrics, meshes and modern data stacks will continue to consolidate an increasingly decentralised world by making the management of data easier. However, to ensure control over security and governance, and to extract value from data that is trustworthy requires a tactical shift to what we call a socio-technical approach. In other words, any strategy must be made up of an investment in people, process and technology to be successful. This is because data management involves more than the technical aspects of data storage, processing and analysis. It also includes the social aspects of data governance, change management, data quality management, user upskilling and collaboration between different teams. Organisations that know how to use technology the best will have an edge over their competitors.


Blockchain is one step away from mainstream adoption

Blockchain’s growth is already reshaping traditional business processes and models. In the financial sector, blockchain facilitates faster and more secure transactions. Supply chain management benefits from increased transparency and traceability, ensuring the authenticity and integrity of products. Smart contracts automate and streamline complex agreements, minimizing the risk of fraud and error. And in addition to sparking rising trading volumes, the SEC’s approval of spot bitcoin ETFs sent a global signal of validation to governments reviewing the viability of blockchain applications in both the private and public sectors. Importantly, the evolution of blockchain has given credence to — and bestowed practicality upon — the concept of decentralized finance (DeFi). We’re already in a reality where traditional financial services are replicated, and even improved, using blockchain technology. This is transformative because it will eliminate the need for intermediaries, opening the door to financial participation for virtually anyone with internet access. This democratization of finance has the potential to provide financial services to underserved populations and redefine the global financial landscape.


Biometrics Regulation Heats Up, Portending Compliance Headaches

What this all means is that compliance will be complicated for companies doing business nationally: they will have to audit their data protection procedures, understand how they obtain consumer consent or allow consumers to restrict the use of such data, and make sure they match the different subtleties in the regulations. Contributing to the compliance headaches: The executive order sets high goals for various federal agencies in how to regulate biometric information, but there could be confusion in terms of how these regulations are interpreted by businesses. For example, does a hospital's use of biometrics fall under rules from the Food and Drug Administration, Health and Human Services, the Cybersecurity and Infrastructure Security Agency, or the Justice Department? Probably all four. ... Meanwhile, AI-induced deepfake video impersonations by criminals that abuse biometric data like face scans are on the rise. Earlier this year, a deepfake attack in Hong Kong was used to steal more than $25 million, and there are certainly others who will follow as AI technology gets better and easier to use for producing biometric fakes. The conflicting regulations and criminal abuses could explain why consumer confidence in biometrics has taken a nosedive.


The Role of Data in Crafting Personalized Customer Journeys

Through comprehensive customer profiles, data is sourced from multiple touchpoints in silos such as online visitors, purchases made, forms, customer support units, social media engagement, mobile app usage, and other channels as recognized in the CRM system. This further facilitates real-time data processing and identifies customer behaviors and preferences. As briefly discussed previously, predictive analytics consumes historical customer data and powers forecasting of expected behaviors and preferences. This segments data based on different parameters such as demographics, behaviors, preferences, etc. Ultimately, it acts as the seed for planting responsive marketing campaigns. While we are at it, an important strategy is cross-channel integration. Given the scale of the marketing landscape, it is important to consider all channels and systems. So, the data collected from multiple sources is then integrated and analyzed through data management platforms to create a cross-channel, unified 360 view. Such interoperability delivers an omnichannel experience, thereby increasing customers’ lifetime value. To ensure better customer loyalty, implement practices in alignment with the regulations.


Checkout Lessons: What Banks Need to Borrow from eCommerce

eCommerce has much to teach the financial and healthcare industries, which also experience high seasonality and peak traffic periods. Events like 401(k) sign-ups, healthcare enrollments, and tax days are notorious for bringing down systems. In my experience, performance is synonymous with user experience. ... Many digital-first banks don’t operate physical branches. Their success is due to a singular focus on user experience, performance, speed, flexibility, and a mobile-first approach. This is what has won over the current generation of young people who do not need to visit a teller. It’s crucial for banks to recognize the importance of these advancements and to take action. Otherwise, they risk losing their competitive edge. In the U.S., some banks perform exceptionally well with only an online presence, with USAA as a prime example. Some companies, like Capital One, are innovating by transforming their banks into cafés. They provide WiFi, allowing customers to work and do more than just banking. This shift dramatically enhances the user experience.


Fintech at its Finest: Adding Value with Innovation

The best fintech platforms are constantly listening to their customers. Whether that’s through harnessing the power of AI to create an optimal user experience or continuously innovating based on customer feedback, a good fintech is creating exactly what its customers want and need. ... The best fintech platforms have innovative technologies at their core and are increasingly harnessing AI and machine learning to enhance their services. But crucially, they are also designed to be intuitive for users. After all, businesses have just 10 minutes to set up digital accounts or risk losing consumer trust. Millennials and Gen Z make up a significant part of fintech’s core market, so it’s providers who can cater to tech-savvy generations and prioritise smooth customer experiences that will differentiate themselves in an increasingly crowded market. ... In the bustling world of fintech, the top platforms set themselves apart by cleverly blending practices to ensure they keep growing and succeed – even when faced with challenges. These platforms develop excellent solutions, using technologies like blockchain, AI and fancy data analytics to tackle old financial problems and improve user experiences. 


Enabling Developers To Become (More) Creative

What influence does collaboration have on creativity? Now we are starting to firmly tread into management territory! Since software engineering happens in teams, the question becomes how to build a great team that's greater than the sum of its parts. There are more than just a few factors that influence the making of so-called "dream teams". We could use the term "collective creativity" since, without a collective, the creativity of each genius would not reach as far. The creative power of the individual is more negligible than we dare to admit. We should not aim to recruit the lone creative genius, but instead try to build collectives of heterogeneous groups with different opinions that manage to push creativity to its limits. ... Managers can start taking simple actions towards that grand goal. For instance, by helping facilitate decision-making, as once communication goes awry in teams, the creative flow is severely impeded. Researcher Damian Tamburri calls this problem "social debt." Just like technical debt, when there's a lot of social debt, don't expect anything creative to happen. Managers should act as community shepherds to help reduce that debt.



Quote for the day:

"A real entrepreneur is somebody who has no safety net underneath them." -- Henry Kravis

Daily Tech Digest - March 01, 2024

Why Large Language Models Won’t Replace Human Coders

Are any of these GenAI tools likely to become substitutes for real programmers? Unless the accuracy of coding answers supplied by models increases to within an acceptable margin of error (i.e., 98-100%), then probably not. Let’s assume for argument’s sake, though, that GenAI does reach this margin of error. Does that mean the role of software engineering will shift so that you simply review and verify AI-generated code instead of writing it? Such a hypothesis could prove faulty if the four-eyes principle is anything to go by. It’s one of the most important mechanisms of internal risk control, mandating that any activity of material risk (like shipping software) be reviewed and double-checked by a second, independent, and competent individual. Unless AI is reclassified as an independent and competent lifeform, then it shouldn’t qualify as one pair of eyes in that equation anytime soon. If there’s a future where GenAI becomes capable of end-to-end development and building Human-Machine Interfaces, it’s not in the near future. LLMs can do an adequate job of interacting with text and elements of an image. There are even tools that can convert web designs into frontend code.


The future of farming

SmaXtec’s solution requires cows to swallow what the company calls a “bolus” - a small device that consists of sensors to measure a cow’s pH and temperature, an accelerometer, and a small processor. “It sits inside the cow and constantly measures very important body health parameters, including temperature, the amount of water intake, the drinking volume, the activity of the animal, and the contraction of the rumen in the dairy cow,” Scherer said. Rumination is a process of regurgitation and re-digestion. “You could almost envision this as a Fitbit for cows,” he said, adding that by constantly measuring those parameters at a high density - short timeframes with high robustness and high accuracy - SmaXtec can make assessments about potential diseases that are about to break out. ... Small Robot Company is known for its Tom robot. Tom - the robot - distantly recalls memories of Doctor Who’s dog K9. The device wheels itself up and down fields, capturing images and mapping out the land. The data is then taken from Tom’s SSD and uploaded to the cloud, where an AI identifies the different plants and weeds, and provides a customized fertilizer and herbicide plan for the crops.


The CISO: 2024’s Most Important C-Suite Officer

Short- and long-term solutions to navigating increased regulatory and plaintiff bar scrutiny start with the CISO. Cybersecurity defense strategies, implementation and monitoring fall under the purview of the CISO, who must closely coordinate with other members of the C-suite as well as boards of directors. Recent lawsuits highlight individual fiduciary liability for cybersecurity controls and accurate disclosures. Individual liability demands increased knowledge of, participation in and shared ownership of cybersecurity defense decisions. Gone are the days when liability risks could be eliminated by placing the blame on a single security officer. Boards and other C-suite executives now have personal risks over company cybersecurity defenses and preparedness. CISOs carry primary ownership for formulating and maintaining robust cybersecurity defenses and preparedness. This starts with implementing secure by design and other leading security frameworks. It extends to effective real-time threat monitoring and continual technology assessment of company capabilities to defend against advanced cyber threats or the “Defining Threat of Our Time.”


Generative AI and the big buzz about small language models

LLMs can create a wide array of content from text and images to audio and video, with multimodal systems emerging to handle more than one of the above tasks. They process massive amounts of information to execute natural language processing (NLP) tasks that approximate human speech in response to prompts. As such, they are ideal for pulling from vast amounts of data to generate a wide range of content, as well as conversational AI tasks. This requires a significant number of servers, storage and the all-too-scarce GPUs that power the models — at a cost some organizations are unwilling or unable to bear. It’s also tough to satisfy ESG requirements when LLMs hog compute resources for training, augmenting, fine-tuning and other tasks organizations require to hone their models. In contrast, SLMs consume fewer computing resources than their larger brethren and provide surprisingly good performance — in some cases on par with LLMs depending on certain benchmarks. They’re also more customizable, allowing organizations to execute specific tasks. For instance, SLMs may be trained on curated data sets and run through retrieval-augmented generation (RAG) that helps refine search. For many organizations, SLMs may be ideal for running models on premises.
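
As a rough sketch of the RAG pattern mentioned here, the example below retrieves the most relevant snippets from a small curated corpus and hands them to a stubbed model call. The word-overlap scoring and the small_model stub are illustrative assumptions, not any particular vendor's API.

```python
# Minimal retrieval-augmented generation: retrieve context, then prompt a model.
from collections import Counter

CORPUS = [
    "Our PTO policy grants 25 days of paid leave per year.",
    "Expense reports must be filed within 30 days of purchase.",
    "The on-call rotation changes every Monday at 09:00 UTC.",
]

def score(query: str, doc: str) -> int:
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())                     # crude word-overlap relevance

def retrieve(query: str, k: int = 2) -> list:
    return sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]

def small_model(prompt: str) -> str:
    # Stub standing in for a locally hosted small language model.
    return "[SLM would answer here using only the provided context]\n" + prompt

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    return small_model(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

print(answer("How many days of paid leave do we get?"))
```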


Captive centers are back. Is DIY offshoring right for you?

Captive centers are no longer just means of value creation, providing cost savings and driving process standardization. They are driving organization-wide innovation, facilitating digital transformations, and contributing to revenue growth. Unlike earlier generations of what are increasingly being called “global capabilities centers,” which tended to be large operations set up by multinationals, more than half of last year’s new centers were launched by first-time adopters — and on the smaller side, with fewer than 250 full-time employees; in some cases, fewer than 50. The desire to build internal IT capabilities amid a tight talent market is at the heart of the trend. As companies have grown comfortable with offshore and nearshore delivery, the captive model offers the opportunity to tap larger populations of lower-cost talent without handing the reins to a third party. “Eroding customer satisfaction with outsourcing relationships — per some reports, at an all-time low — has caused some companies to opt to ‘do it themselves,’” says Dave Borowski, senior partner, operations excellence, at West Monroe. What’s more, setting up a captive center no longer needs to be entirely DIY.


Questioning cloud’s environmental impact

Contrary to popular belief, cloud computing is not inherently green. Cloud data centers require a lot of energy to power and maintain their infrastructure. That should be news to nobody. Cloud is becoming the largest user of data center space, perhaps only to be challenged by the growth of AI data centers, which are becoming a developer’s dream. But wait, don’t cloud providers use solar and wind? Although some use renewable energy, not all adopt energy-efficient practices. Many cloud services rely on coal-fired power. Ask cloud providers which data centers use renewable energy. Most will provide a non-answer, saying their power types are complex and ever-changing. I’m not going too far out on a limb in stating that most use nonrenewable power and will do so for the foreseeable future. The carbon emissions from cloud computing largely stem from the power consumed by the providers’ platforms and the inefficiencies embedded within applications running on these platforms. The cloud provider itself may do an excellent job in building a multitenant system that can provide good optimization for the servers they run, but they don’t have control over how well their customers leverage these resources.


Revolutionizing Real-Time Data Processing: The Dawn of Edge AI

For effective edge computing, efficient and computationally cost-effective technology is needed. One promising option is reservoir computing, a computational method designed for processing signals that are recorded over time. It can transform these signals into complex patterns using reservoirs that respond nonlinearly to them. In particular, physical reservoirs, which use the dynamics of physical systems, are both computationally cost-effective and efficient. However, their ability to process signals in real time is limited by the natural relaxation time of the physical system. This limits real-time processing and requires adjustments for best learning performance. ... Recently, Professor Kentaro Kinoshita and Mr. Yutaro Yamazaki developed an optical device with features that support physical reservoir computing and allow real-time signal processing across a broad range of timescales within a single device. Speaking of their motivation for the study, Prof. Kinoshita explains: “The devices developed in this research will enable a single device to process time-series signals with various timescales generated in our living environment in real-time. In particular, we hope to realize an AI device to utilize in the edge domain.”
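
To make the reservoir-computing idea concrete, here is a compact echo state network sketch: a fixed, randomly connected recurrent reservoir transforms the input signal nonlinearly over time, and only a simple linear readout is trained. The sizes, leak rate, and toy task (predicting the next sample of a sine wave) are illustrative choices, not a model of the optical device described in the study.

```python
# Echo state network: fixed random reservoir, trained linear readout only.
import numpy as np

rng = np.random.default_rng(0)
n_res, leak = 200, 0.3

w_in = rng.uniform(-0.5, 0.5, size=n_res)
w_res = rng.normal(size=(n_res, n_res))
w_res *= 0.9 / max(abs(np.linalg.eigvals(w_res)))      # keep spectral radius below 1

def run_reservoir(u):
    states, x = [], np.zeros(n_res)
    for u_t in u:
        pre = np.tanh(w_in * u_t + w_res @ x)          # nonlinear response to the signal
        x = (1 - leak) * x + leak * pre                # leaky update acts like relaxation
        states.append(x.copy())
    return np.array(states)

t = np.arange(0, 40, 0.1)
signal = np.sin(t)
states = run_reservoir(signal[:-1])
target = signal[1:]                                    # task: predict the next sample
w_out = np.linalg.lstsq(states, target, rcond=None)[0] # train only the linear readout
rmse = np.sqrt(np.mean((states @ w_out - target) ** 2))
print("readout RMSE:", round(float(rmse), 4))
```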


Agile software promises efficiency. It requires a cultural shift to get right

The end result of these fake agile practices is lip service and ceremonies at the expense of the original manifesto’s principles, Bacon said. ... To get agile right, Wickham recommended building on situations in your organization where agile is practiced relatively effectively. Most often, that involves teams building internal tools, such as administrative panels for customer support or CI/CD pipelines. Those use cases have more tolerance for “let’s put something up, ask for feedback, iterate, repeat,” he said. After all, internal customers expect to accept seeing something that’s initially imperfect. “This indicates to me that people comprehend agile and have at least a baseline understanding of how to use it, but a lack of willingness to use it as defined when it comes to external customers,” said Wickham. ... “Agile is an easy term to toss around as a ‘solution,’” Richmond said. “But effective agile does not have a cookie-cutter solution to improving execution.” Getting it right requires a focus on what has to happen to understand the company’s challenges, how those challenges manifest out of the business environment, in what way those challenges impact business outcomes, and then, finally, identifying how to apply agile concepts to the business.


Building a Strong Data Culture: A Strategic Imperative

Effective executive backing is crucial for prioritizing and financing data initiatives that help cultivate an organization’s data-centric culture. Initiatives such as data literacy programs equip employees with vital data skills that are fundamental to fostering such a culture. Nonetheless, these programs often fail to thrive without the robust support of leadership. Results from the same Alation research show that only 15 percent of companies with moderate or weak data leadership integrate data literacy across most departments or throughout the entire organization. This is in stark contrast to the 61 percent adoption rate in companies with strong data leadership. Moreover, strong data leadership involves more than just endorsement; it requires executives to actively engage and set an example in data culture initiatives. For instance, when an executive carves out time from her hectic schedule to partake in data literacy training, it conveys a much more powerful message to her team than if she were to simply instruct others to prioritize such training. This hands-on approach by leaders underscores the importance of data literacy and demonstrates their commitment to embedding a data-driven culture in the organization.


Cybercriminals harness AI for new era of malware development

Threat actors have already shown how AI can help them develop malware with only a limited knowledge of programming languages, brainstorm new TTPs, compose convincing text to be used in social engineering attacks, and also increase their operational productivity. Large language models such as ChatGPT remain in widespread use, and Group-IB analysts have observed continued interest on underground forums in ChatGPT jailbreaking and specialized generative pre-trained transformer (GPT) development, looking for ways to bypass ChatGPT’s security controls. Group-IB experts have also noticed how, since mid-2023, four ChatGPT-style tools have been developed for the purpose of assisting cybercriminal activity: WolfGPT, DarkBARD, FraudGPT, and WormGPT – all with different functionalities. FraudGPT and WormGPT are highly discussed tools on underground forums and Telegram channels, tailored for social engineering and phishing. Conversely, tools like WolfGPT, focusing on code or exploits, are less popular due to training complexities and usability issues. Yet, their advancement poses risks for sophisticated attacks.



Quote for the day:

"It takes courage and maturity to know the difference between a hoping and a wishing." -- Rashida Jourdain

Daily Tech Digest - February 29, 2024

Why governance, risk, and compliance must be integrated with cybersecurity

Incorporating cybersecurity practices into a GRC framework means connected teams and integrated technical controls for the University of Phoenix, where GRC and cybersecurity sit within the same team, according to Larry Schwarberg, the VP of information security. At the university, the cybersecurity risk management framework is primarily created out of a consolidated view of NIST 800-171 and ISO 27001 standards, with this being used to guide other elements of its overall posture. “The results of the risk management framework feed other areas of compliance from external and internal auditors,” Schwarberg says. The cybersecurity team works closely with legal and ethics, compliance and data privacy, internal audit and enterprise risk functions to assess overall compliance with in-scope regulatory requirements. “Since our cybersecurity and GRC roles are combined, they complement each other and the roles focus on evaluating and implementing security controls based on risk appetite for the organization,” Schwarberg says. The role of leadership is to provide awareness, communication, and oversight to teams to ensure controls have been implemented and are effective. 


India's talent crunch: Why choose build approach over buying?

The primary challenge is the shortage of workers equipped with digital skill sets. Despite the high demand for these skills, much of the current workforce has yet to gain the requisite abilities, especially given the constant evolution of technology. The lack of niche skill sets essential for working with advanced technologies like AI, blockchain, cloud, and data science further contributes to this gap. The turning point, however, is now within reach as businesses and professionals recognise the crucial need for upskilling and reskilling. At DXC India, we have embraced a strategy that prioritises internal talent development, favouring the 'build' approach over the 'buy' strategy. By upskilling our existing workforce with relevant, in-demand skills, we address our talent needs and foster individual career growth. This method is particularly effective as experienced employees can swiftly acquire new skills and undergo cross-training. This agility is an asset in navigating the rapidly evolving business landscape, benefiting employees and customers. Identifying the specific talent required and subsequently building that talent pool forms the crux of this strategy.


Why does AI have to be nice? Researchers propose ‘Antagonistic AI’

“There was always something that felt off about the tone, behavior and ‘human values’ embedded into AI — something that felt deeply ingenuine and out of touch with our real-life experiences,” Alice Cai, co-founder of Harvard’s Augmentation Lab and researcher at the MIT Center for Collective Intelligence, told VentureBeat. She added: “We came into this project with a sense that antagonistic interactions with technology could really help people — through challenging [them], training resilience, providing catharsis.” But it also comes from an innate human characteristic that avoids discomfort, animosity, disagreement and hostility. Yet antagonism is critical; it is even what Cai calls a “force of nature.” So, the question is not “why antagonism?,” but rather “why do we as a culture fear antagonism and instead desire cosmetic social harmony?,” she posited. Essayist and statistician Nassim Nicholas Taleb, for one, presents the notion of the “antifragile,” which argues that we need challenge and context to survive and thrive as humans. “We aren’t simply resistant; we actually grow from adversity,” Arawjo told VentureBeat.


How companies can build consumer trust in an age of privacy concerns

Aside from reworking the way they interact with customers and their data, businesses should also tackle the question of personal data and privacy with a different mindset – that of holistic identity management. Instead of companies holding all the data, holistic identity management offers the opportunity to “flip the script” and put the power back in the hands of consumers. Customers can pick and choose what to share with businesses, which helps build greater trust. ... Greater privacy and greater personalization may seem to be at odds, but they can go hand in hand. Rethinking their approach to data collection and leveraging new methods of authentication and identity management can help businesses create this flywheel of trust with customers. This will be all the more important with the rise of AI. “It’s never been cheaper or easier to store data, and AI is incredibly good at going through vast amounts of data and identifying patterns of aspects that actual humans wouldn’t even be able to see,” Gore explains. “If you take that combination of data that never dies and the AI that can see everything, that’s when you can see that it’s quite easy to misuse AI for bad purposes. ...”


Testing Event-Driven Architectures with Signadot

With synchronous architectures, context propagation is a given, supported by multiple libraries across multiple languages and even standardized by the OpenTelemetry project. There are also several service mesh solutions, including Istio and Linkerd, that handle this type of routing perfectly. But with asynchronous architectures, context propagation is not as well defined, and service mesh solutions simply do not apply — at least, not now: They operate at the request or connection level, but not at a message level. ... One of the key primitives within the Signadot Operator is the routing key, an opaque value assigned by the Signadot Service to each sandbox and route group that’s used to route requests within the system. Asynchronous applications also need to propagate routing keys within the message headers and use them to determine the workload version responsible for processing a message. ... This is where Signadot’s request isolation capability really shows its utility: This isn’t easily simulated with a unit test or stub, and duplicating an entire Kafka queue and Redis cache for each testing environment can create unacceptable overhead. 
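
The routing-key pattern itself can be sketched independently of any particular product: the producer copies the key from the request context into the message headers, and each consumer version picks up only the messages whose key matches the version it serves, leaving the rest to the baseline workload. The example below is a generic in-memory illustration of that pattern, not Signadot's actual API.

```python
# Generic routing-key propagation for async messages (illustrative only).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Message:
    payload: dict
    headers: dict = field(default_factory=dict)

def produce(queue: list, payload: dict, routing_key: Optional[str]) -> None:
    headers = {"routing-key": routing_key} if routing_key else {}
    queue.append(Message(payload, headers))            # context propagated into the message

def consume(queue: list, my_routing_key: Optional[str]) -> list:
    # Baseline consumers (my_routing_key=None) skip keyed messages; a sandboxed
    # consumer version takes only messages carrying its own routing key.
    mine, rest = [], []
    for msg in queue:
        (mine if msg.headers.get("routing-key") == my_routing_key else rest).append(msg)
    queue[:] = rest
    return mine

queue = []
produce(queue, {"order": 1}, routing_key=None)          # normal traffic
produce(queue, {"order": 2}, routing_key="sandbox-abc") # request under test
print("sandbox consumer got:", [m.payload for m in consume(queue, "sandbox-abc")])
print("baseline consumer got:", [m.payload for m in consume(queue, None)])
```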


The 7 Rs of Cloud Migration Strategy: A Comprehensive Overview

With the seven Rs as your compass, it’s time to chart your course through the inevitable challenges that arise on any AWS migration journey. By anticipating these roadblocks and proactively addressing them, you can ensure a smoother and more successful transition to the cloud. ... Navigating the vast and ever-evolving AWS ecosystem can be daunting, especially for organizations with limited cloud experience. This complexity, coupled with a potential skill gap in your team, can lead to inefficient resource utilization, suboptimal architecture choices, and delayed timelines. ... Migrating sensitive data and applications to the cloud requires meticulous attention to security protocols and compliance regulations. Failure to secure your assets can lead to data breaches, reputational damage, and hefty fines. ... While leveraging the full range of AWS services can offer significant benefits, over-reliance on proprietary solutions can create an unhealthy dependence on a single vendor. This can limit your future flexibility and potentially increase costs. ... While AWS offers flexible pricing models and optimization tools, managing cloud costs effectively requires ongoing monitoring and proactive adjustments.


What is a chief data officer? A leader who creates business value from data

The chief data officer (CDO) is a senior executive responsible for the utilization and governance of data across the organization. While the chief data officer title is often shortened to CDO, the role shouldn’t be confused with chief digital officer, which is also frequently referred to as CDO. ... Although some CIOs and CTOs find CDOs encroach on their turf, Carruthers says the boundaries are distinct. CDOs are responsible for areas such as data quality, data governance, master data management, information strategy, data science, and business analytics, while CIOs and CTOs manage and implement information and computer technologies, and manage technical operations, respectively. ... The chief data officer is responsible for the fluid that goes in the bucket and comes out; that it goes to the right place, and that it’s the right quality and right fluid to start with. Neither the bucket nor the water work without each other. ... Gomis says he’s seen chief data officers come from marketing backgrounds, and that some are MBAs who’ve never worked in data analytics before. “Most of them have failed, but the companies that hired them felt that the influencer skillset was more important than the data analytics skillset,” he says.


The UK must become intentional about data centers to meet its digital ambitions

For the UK to maintain its leadership position in DC’s, it’s not enough to just leave it to chance. A number of trends are now deciding investment flows both within the UK and on the global stage. First, land and power availability. Access to land and power is becoming increasingly constrained in London and surrounding areas. For example, properties in Slough have gone up by 44 percent since 2019, and the Greater London Authority has told some developers there won’t be electrical capacity to build in certain areas of the city until 2035. Data centers use large quantities of electricity, the equivalent of towns or small cities, in some cases, to power servers and ensure resilience in service. In West London, Distribution Network Operators have started to raise concerns about the availability of powerful grid supply points to meet the rapid influx of requests from data center operators wanting to co-locate adjacent to fiber optic cables that pass along the M4 corridor, and then cross the Atlantic. In response to these power and space concerns, the hyperscalers have already started to favor countries in Scandinavia. 


Rubrik CIO on GenAI’s Looming Technical Debt

This is a case of, “Hey, there’s a leak in the boat, and what are you going to do about it? Are you going to let things get drowned? Or are you going to make sure that there is an equal amount of water that leaves the boat?” So, you have to apply that thinking to your annual plan. Typically, I’ll say that there’s going to be a percentage of resources, budget, and effort I’m going to put into reducing tech debt … And that’s where you start competing with other business initiatives. You will have a bunch of business stakeholders that might look at that as something that should just be kicked down the road because they want to use that funding for something else. That’s where, I believe, educating a lot of my business leaders on what that does to the organization. When I don’t address that tech debt, on a regular basis, production SLAs start to deteriorate. ... There’s going to be some consolidation and some standardization across the board. So, the first couple of years are going to be rocky very everybody. But that doesn’t scare us, because we’re going to put a more robust governance on top of this new area. We need to have a lot more debates about this internally and say, “Let’s be cautious, guys. Because this is coming from all sides.”


How organizations can navigate identity security risks in 2024

IT, identity, cloud security and SecOps teams need to collaborate around a set of security and lifecycle management processes to support business objectives around security, timely access delivery and operational efficiency. These processes are best optimized by automating manual tasks, while ensuring that the ownership and accountability for manual tasks is well understood. In addition, quantifying and tracking business outcomes in terms of metrics highlights IAM’s effectiveness and identifies areas that need improvement or more automation. Utilizing IAM for cloud and Software as a Service (SaaS) applications introduces a spectrum of challenges, rooted in silos of identity. Each system or application has its own identity model and its own concept of various identity settings and permissions: accounts, credentials, groups, roles, entitlements and other access policies. Misconfigured permissions and settings heighten the likelihood of data breaches. To address these complexities, organizations need business users and security teams to collaborate on an identity management and governance framework and overarching processes for policy-based authentication, SSO, lifecycle management, security and compliance. Automation can streamline these processes and help ensure effective access controls.



Quote for the day:

“People may hear your voice, but they feel your attitude.” -- John Maxwell