Daily Tech Digest - November 13, 2023

Navigating the Crossroads of Data Confidentiality and AI

Striking a balance between ensuring data privacy and maximizing the effectiveness of AI models can be quite complex. The more data we utilize for training AI systems, the more accurate and powerful they become. However, this practice often clashes with the need to safeguard privacy rights. Techniques like federated learning offer a solution by allowing AI models to be trained on distributed data sources without sharing raw information. For the uninitiated, federated learning leverages the power of edge computing to train local models. These models use data that never leaves the private environment (like your phone, IoT devices, corporate terminals, etc.). Once the local models are trained, they are aggregated into a centralized model that can be used for related use cases. ... Due to the recent acceleration in the adoption of AI, government regulations play a pivotal role in shaping the future of AI and data confidentiality. Legislators are increasingly recognizing the significance of data privacy and are implementing laws such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA).
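
To make the idea concrete, here is a minimal sketch of federated averaging (FedAvg) in Python. The linear model, the toy client datasets, and the round count are illustrative assumptions rather than details from the article; the point is that only model weights, never raw data, leave each client.

```python
# Minimal federated averaging sketch: clients train locally on private data,
# and a central server averages the resulting weights.
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=20):
    w = weights.copy()
    for _ in range(epochs):  # plain gradient descent on the local data
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w  # only the trained weights are shared, never X or y

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three private data silos (phones, devices, terminals)
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

global_w = np.zeros(2)
for _ in range(5):  # a few federated rounds
    updates = [local_train(global_w, X, y) for X, y in clients]
    global_w = np.mean(updates, axis=0)  # the server averages client weights

print("learned:", global_w, "target:", true_w)
```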


CISOs vs. developers: A battle over security priorities

“Developers and CISOs juggle numerous security priorities, often conflicting across organizations,” noted Luke Shoberg, Global CISO at Sequoia Capital. “The report emphasizes the need for internal assessments, fostering deeper collaboration, and building trust among teams managing this critical domain. Recognizing technical and cultural obstacles, organizations have made significant strides in understanding the importance of securing the software supply chain for sustained business success.” “The world of software consumption and security has radically changed. From containers to the explosion of open source components, every motion has been toward empowering developers to build faster and better,” said Avon Puri, Global Chief Digital Officer at Sequoia Capital. “But with that progress, the security paradigm has been challenged to refocus on better controls and guarantees for the provenance of where software artifacts come from and that their integrity is being maintained. The survey shows developers and security teams are wrestling with this new reality in the wake of major exploits like Log4j and SolarWinds.”


Deception technology use to grow in 2024 and proliferate in 2025

It's worth mentioning that all scanning, data collection, processing, and analysis will be continuous to keep up with changes to the hybrid IT environment, security defenses, and the threat landscape. When organizations implement a new SaaS service, deploy a production application, or make changes to their infrastructure, the deception engine notes these changes and adjusts its deception techniques accordingly. Unlike traditional honeypots, burgeoning deception technologies won't require cutting-edge knowledge or complex setup. While some advanced organizations may customize their deception networks, many firms will opt for default settings. In most cases, basic configurations will sufficiently confound adversaries. Remember, too, that deception elements like decoys and lures remain invisible to legitimate users. Therefore, when someone goes poking at a breadcrumb or canary token, you are guaranteed that they are up to no good. In this way, deception technology can also help organizations improve security operations around threat detection and response.
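
As a concrete illustration of how simple a deception element can be, below is a minimal sketch of a web-based canary token. The lure path and alerting behaviour are assumptions for illustration; a commercial deception engine would be far more elaborate, but the detection logic is the same: legitimate users never see the token, so any hit is a high-confidence signal.

```python
# A minimal web canary: any request to the lure URL triggers an alert.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

TOKEN_PATH = "/internal/backup-credentials.txt"  # hypothetical lure URL

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

class CanaryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == TOKEN_PATH:
            # Fire the alert: in production this would page the SOC.
            logging.warning("CANARY TRIPPED by %s requesting %s",
                            self.client_address[0], self.path)
        # Always return 404 so the decoy looks unremarkable to a scanner.
        self.send_response(404)
        self.end_headers()

    def log_message(self, *args):  # silence default access logging
        pass

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CanaryHandler).serve_forever()
```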


What Role Will Open-Source Hardware Play in Future Designs?

The extent of open-source hardware’s impact on electronics design is still uncertain. While it could deliver all these benefits, it also faces several challenges to mainstream adoption. The most significant of these is the volatility and high costs of the necessary raw materials. Roughly 70% of all silicon materials come from China. This centralization makes prices prone to fluctuations from local disruptions in China or throughout the supply chain. Similarly, long shipping distances raise related prices for U.S. developers. Even if integrated circuit design becomes more accessible, these costs keep production out of reach, slowing open-source devices’ growth. Similarly, industry giants may be unwilling to accept the open-source movement. While open-source designs open new revenue streams, these market leaders profit greatly from their proprietary resources. The semiconductor fabs supporting these large companies are even more centralized. It may be difficult for open-source hardware to compete if these organizations don’t embrace the movement.


How Should Developers Respond to AI?

“Unionizing against AI” wasn’t a specific goal, Quick clarified in an email interview with The New Stack. He’d meant it as an example of just how much influence can come from a united community. “My main thought is around the power that comes with a group of people that are working together.” Quick noted what happened when the United Auto Workers went on strike. “We are seeing big changes happening because the people decided collectively they needed more money, benefits, etc. I can only begin to guess at what an AI-related scenario would be, but maybe in the future, it takes people coming together to push for change on regulation, laws, limitations, etc.” Even this remains a concept more than any tangible movement, Quick stressed in his email. “Honestly, I don’t have much more specific actions or goals right now. We’re just so early on that all we can do is guess.” But there is another scenario where Quick thinks community action would be necessary to push for change: the hot-button issue of “who owns the code.”


Security, privacy, and generative AI

For many of the proposed applications in which LLMs should excel, delivering false responses can have serious consequences. Luckily, many of the mainstream LLMs have been trained on numerous sources of data. This allows these models to speak on a diverse set of topics with some fidelity. However, there is typically insufficient knowledge around specialized domains in which data is relatively sparse, such as deep technical topics in medicine, academia, or cybersecurity. As such, these large base models are typically further refined via a process called fine-tuning. Fine-tuning allows these models to achieve better alignment with the desired domain. Fine-tuning has become such a pivotal advantage that even OpenAI recently released support for this capability to compete with open-source models. With these considerations in mind, consumers of LLM products who want the best possible outputs, with minimal errors, must understand the data on which the LLM is trained (or fine-tuned) to ensure optimal usage and applicability.
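
For readers who want to see what fine-tuning looks like in practice, here is a hedged sketch using the open-source Hugging Face stack. The base model name and the training file are placeholders, not details from the article; the same pattern scales up to larger models and hosted fine-tuning APIs.

```python
# Minimal causal-LM fine-tuning sketch on a domain corpus.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in for any small base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# One JSON record per line, e.g. curated cybersecurity write-ups
# (the file name is a placeholder).
dataset = load_dataset("json", data_files="domain_corpus.jsonl")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the resulting weights are better aligned with the domain
```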


How to keep remote workers connected to company culture

As important as workplace collaboration and communication tools are, technology alone can’t keep remote workers engaged with business objectives. Before the pandemic, auto finance firm Credit Acceptance centered its operations around in-person interactions in its offices, for which it got accolades; after COVID-19 arrived, the company’s 2,200 employees had to work remotely. “You didn't work from home at all – [only in] rare circumstances,” said Wendy Rummler, chief people officer at Credit Acceptance. “We considered our culture too important, [we believed that] we couldn't maintain it if we had a fully remote workforce, or even partially for that matter.” Fast forward a couple of years and the picture is markedly different, with almost all staffers now fully remote. Internal pulse surveys have found that employee engagement has remained as high as before the pandemic, said Rummler. This is no accident, she said; Credit Acceptance deliberately set out to maintain its work culture without regular person-to-person interactions.


Should AI Require Societal Informed Consent?

The concept of societal informed consent has been discussed in engineering ethics literature for more than a decade, and yet the idea has not found its way into society, where the average person goes about their day assuming that technology is generally helpful and not too risky. In most cases, technology is generally helpful and not too risky, but not in all cases. As artificial intelligence grows more powerful and is applied to more new fields (many of which may be inappropriate), these cases will multiply. How will technology producers know when their technologies are not wanted if they never ask the public? ... One of the characteristics of a representative democracy is that -- at least in theory -- our elected officials are looking out for the well-being of the public. ... It is time for the government and the public to have a new conversation, one about technology -- specifically artificial intelligence. In the past we’ve always given technology the benefit of the doubt; tech was “innocent until proven guilty” and a long-time familiar phrase in and around Silicon Valley has been “it’s better to ask forgiveness, not permission.” We no longer live in that world.


Harnessing the potential of generative AI in marketing

Augmenting human creativity with the power of generative AI holds so much promise that the use cases we know now are only the tip of the proverbial iceberg. Companies that are looking to get a head start should, therefore, ensure that they have laid down the foundations for doing so. An important consideration in deploying generative AI is the availability of data. Contextualisation is a key benefit of generative AI and large language models (LLMs). But for enterprises with legacy, on-premise systems, their data is usually isolated within silos. Organisations looking to deploy generative AI solutions for their marketing efforts should leverage cloud data platforms to unify all their internal data. Aside from breaking down silos, businesses should also ensure seamless access to all their data. A lot of the data generated by marketing teams is either unstructured or semi-structured, such as social media posts, emails, and text documents, to name a few. Marketing teams should ensure that their cloud data platforms can load, integrate, and analyse all types of data.
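
As a small illustration of the semi-structured-data point, the sketch below flattens nested social-media JSON into a tabular form that a cloud data platform could load and analyse alongside structured records. The record shape is invented for the example.

```python
# Flattening nested, semi-structured marketing data into a table.
import pandas as pd

posts = [  # hypothetical social-media records
    {"id": 1, "text": "Launch day!", "metrics": {"likes": 120, "shares": 4},
     "author": {"handle": "@brand", "followers": 5600}},
    {"id": 2, "text": "Thanks for 10k", "metrics": {"likes": 86, "shares": 9},
     "author": {"handle": "@brand", "followers": 5700}},
]

flat = pd.json_normalize(posts)  # nested keys become columns like metrics.likes
print(flat[["id", "text", "metrics.likes", "author.followers"]])
```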


Managing Missing Data in Analytics

Missing at Random (MAR) is a very common missing data situation encountered by data scientists and machine learning engineers. This is mainly because MCAR and MNAR-related problems are handled by the IT department, and data issues are addressed by the data team. MAR data imputation is a method of substituting missing data with a suitable value. Some commonly used data imputation methods for MAR are described below. In hot-deck imputation, a missing value is imputed from a randomly selected record coming from a pool of similar data records; the probabilities of selecting the data are assumed equal due to the random function used to impute the data. In cold-deck imputation, the random function is not used to impute the value. Instead, other functions, such as arithmetic mean, median, and mode, are used. With regression data imputation, for example multiple linear regression (MLR), the values of the independent variables are used to predict the missing values in the dependent variable by using a regression model. Here, first the regression model is derived, then the model is validated, and finally the new values, i.e., the missing values, are predicted and imputed.
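
A brief sketch of these approaches on toy data may help. The mean fill stands in for the cold-deck style described above, the random donor draw for hot-deck, and scikit-learn's IterativeImputer for regression-based imputation; the values are made up for illustration.

```python
# Three MAR imputation styles on a toy dataframe.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

df = pd.DataFrame({"age": [25, 32, np.nan, 41, 38],
                   "income": [40_000, 52_000, 48_000, np.nan, 61_000]})

# Cold-deck style, as described above: fill with a summary statistic.
mean_filled = df.fillna(df.mean(numeric_only=True))

# Hot-deck style: draw a donor value at random from observed values.
rng = np.random.default_rng(0)
hot = df.copy()
for col in hot.columns:
    missing = hot[col].isna()
    hot.loc[missing, col] = rng.choice(hot[col].dropna(), size=missing.sum())

# Regression-based: model each incomplete column from the others,
# then predict and impute the missing entries.
reg_filled = pd.DataFrame(IterativeImputer(random_state=0).fit_transform(df),
                          columns=df.columns)
print(mean_filled, hot, reg_filled, sep="\n\n")
```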



Quote for the day:

"Failure isn't fatal, but failure to change might be" -- John Wooden

Daily Tech Digest - November 12, 2023

The metaverse has virtually disappeared. Here's why it's generative AI's fault

"It's basically going through the Gartner Hype Cycle for Emerging Technologies," she says. "We've had the hype and now we're seeing the reality. The metaverse was capturing people's imagination. But we're still looking for proven use cases that are going to generate value." Searle's assertion that the metaverse is suffering a familiar fate to other over-hyped technologies is certainly one explanatory factor for the drop in interest in the metaverse. But another huge contributory factor is the rapid rise of artificial intelligence (AI). ... Of course, the rapid take up of generative AI isn't the only narrative in this story; there's a whole series of potential concerns, such as hallucinations, plagiarism, and ethics, that need to be dealt with sooner rather than later. But if you want to impress your family and friends with a tool that seems to work like magic, then generative AI is the one. On the other hand, the metaverse -- just like the blockchain before it -- feels a bit like a rabbit that's stuck in a magician's hat. Entering the metaverse often isn't as easy as its proponents have promised. 


Why the service industry needs blockchain, explained

The difficulty of integrating blockchain with existing infrastructure and processes is a significant obstacle. Because service providers frequently use a variety of platforms and technologies, achieving seamless integration can be difficult. Protecting data security and privacy while still adhering to regulations is another challenge. Blockchain’s transparency conflicts with the requirement to protect sensitive customer information, necessitating careful design and implementation of privacy measures. Another major challenge is establishing communication and data exchange across various blockchain networks and traditional systems. To facilitate seamless interoperability, service providers need to spend time developing standardized protocols, which can be expensive and time-consuming. Moreover, there are scalability concerns. Blockchain networks, especially public ones, may face limitations in handling a high volume of transactions efficiently. Delays and higher expenses may result, especially in service industries where several quick transactions are necessary.


Why developer productivity isn’t all about tooling and AI

Creative work requires some degree of isolation. Each time they sit down to code, developers build up context for what they’re doing in their head; they play a game with their imagination where they’re slotting their next line of code into the larger picture of their project so everything fits together. Imagine you’re holding all this context in your head — and then someone pings you on Slack with a small request. All the context you’ve built up collapses in that instant. It takes time to reorient yourself. It’s like trying to sleep and getting woken up every hour. ... Another factor that gets in the way of developer productivity is a lack of clarity on what engineers are supposed to be doing. If developers have to spend time trying to figure out the requirements of what they’re building while they’re building it, they’re ultimately doing two types of work: prioritization and coding. These disparate types of work don’t mesh. Figuring out what to build requires conversations with users, extensive research, talks with stakeholders across the organization and other tasks well outside the scope of software development.


Here’s What a Software Architect Does in an Agile Team

An architect is probably not a valid role on an agile team. I admit I have at times been overzealous with non-coding members of a dev team. The less militant version of this is to be aware of ‘pigs’ and ‘chickens’ in the agile sense. When making breakfast, chickens lay eggs but pigs literally have skin in the game. So only pigs should attend daily agile stand-ups. There are three problems with the role of architect in classic agile. Think of these as Lutheran protestant theses nailed to the door — or more likely to the planning wall. There are no upfront design phases in agile: “The best architectures, requirements, and designs emerge from self-organizing teams”. An architect cannot be an approver and a cause of delay. This leads to the idea that architectural know-how should be spread out amongst the other team members. This is often the case — however it elides the fact that architectural responsibility then doesn’t fall to anyone, even if people feel they may be accountable. Remember your RACI matrix. Should all agile developers be architects in a project? This makes little sense, since architecture describes a singular plan.


AI’s Ability to Reason: Statistics vs. Logic

As a simplistic existence proof that today’s AI does not reason with logic, consider the following problem in basic algebra which was given to Bing/OpenAI GPT to solve. The gist of the problem shown in the figure below is that there are two rectangles, each having the same height (though this detail is not clearly stated in the sourcing 6th grade math text) but different widths. Areas for each are given. The rectangles are positioned in the corresponding math text to suggest that they may be aggregated into a larger rectangle having a width that is the sum of the widths of the smaller rectangles — maybe as a hint toward length. The request to find the length (height) and widths is a test to see whether OpenAI’s GPT via Bing would determine if there are sufficient equations matching unknowns. There aren’t. GPT didn’t discover that the number of equations is one too few. Instead, it attempted to find the length and widths, and it responded suggesting it had successfully solved the math problem. Everything started to go amok when GPT missed that the number of equations fell one short of the number of unknowns; the third equation given above is simply a function of the other two.
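
The underdetermination is easy to verify symbolically. In the sketch below the area values are hypothetical (the excerpt does not give the numbers), but the structure matches: sympy returns a family of solutions parameterized by the height, confirming there is no unique answer.

```python
# Symbolic check that the system is underdetermined:
# two independent equations, three unknowns.
import sympy as sp

h, w1, w2 = sp.symbols("h w1 w2", positive=True)
A1, A2 = 24, 36  # stand-in areas; the excerpt does not give the numbers

eqs = [
    sp.Eq(h * w1, A1),              # area of the first rectangle
    sp.Eq(h * w2, A2),              # area of the second rectangle
    sp.Eq(h * (w1 + w2), A1 + A2),  # combined rectangle: dependent on the others
]

# sympy returns the widths in terms of h, i.e. a one-parameter family of
# solutions: there is no unique height and width to find.
print(sp.solve(eqs, [h, w1, w2], dict=True))
```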


Security Is EVERYBODY’s Business, But CISOs Need to Lead

Cybersecurity is not the same as audit or internal audit; there is a fine line of difference there. And as much as the CISO is seen as somewhat more of an enforcer, they need to be seen as an enabler to the business. CISOs need to have very direct, effective, and transparent communication with the board members when it comes to quantification of everything that they’re doing. And when I say quantification, what I mean is quantification of risks to the organization. Some of the board members will be closer to cybersecurity risks. Some of them may be closer to a reputational risk or a financial risk. But if a CISO can stitch that story together and quantify it for the audience of the board, I think that goes a long way. That’s what’s needed because, in the situation in the market that we are in right now, with the threat landscape changing, with new capabilities coming into play, I think it’s critical. CISOs need to ensure the message is articulated well in the boardroom.


How Agile Managers Use Uncertainty to Create Better Decisions Faster

Here's the problem I see with big, long-term, and final management decisions: the decision is too large to have any certainty at all. Remember I said I don't take long consulting engagements? Early in my consulting career, I learned that even a “guaranteed” consulting project was not a guarantee at all. Sure, the client might pay a kill fee (a portion of the unused project budget), but most of the time, the client said (on a Friday afternoon), “Thanks. The world has changed. Don't come back on Monday.” While I always continued my marketing so my business would survive, I felt as if the clients cheated themselves. Because we thought we had more time, we didn't create smaller goals and achieve them. Our work was incomplete—according to their goals. And that's what people remember. Not that they changed the circumstances, but that we didn't finish. That's exactly what happens when managers try to decide for a long time without revisiting their decisions. The world changes. If the world changes enough, the managers feel the need to lay people off, not just stop efforts. Those layoffs are a result of too-long and too-large management decisions.


Technical and digital debt can devastate your digital ambition

Of course, no organisation can afford zero technical debt (is this even possible?). The judgement here is targeting existing technical debt in a priority order. Deciding what not to do is just as important as what to do. You will be better able to manage the high expectations of stakeholders, shape the transformation and prioritise investment when you have this insight. Ask yourself these questions: What technical debt will act as an anchor when trying to increase the pace of change, irrespective of how fast your new IT engineering and product-based approaches to change are? Or to put it another way, which single piece of technical debt will limit the flow of value, irrespective of how slick everything else is? To be able to adapt at pace, at short notice, responding to market opportunities, where is your underlying technology strong but resistant to change? Customers just expect your digital channels to work; where must you improve the reliability of your service? Where can you increase cost effectiveness or risk mitigation through targeted automation as one of the treatment strategies available to you?


How AI and Crypto is Transforming the Future of Decentralized Finance

As time has passed, the crypto industry has evolved into a breeding ground for fraudulent activities and deception. Safeguarding investors from fraud has become increasingly vital, especially with the influx of initial coin offerings and new platforms entering the market. The encouraging news is that AI and crypto can effectively prevent fraud attempts and ensure that investors adhere to financial compliance. AI bots, for example, can detect and flag fraudulent transactions, preventing them from proceeding unless confirmed by a human. Confirming crypto transactions often takes up to 24 hours due to reliance on consensus methods, and such delays pose an ongoing challenge for the crypto sector. With some recent advancements in AI technology, there have been some enhanced trade management options. Some companies are adopting innovative consensus methods that significantly reduce transaction times to just a few seconds. This improvement holds potential benefits for the industry as a whole.


Secure Together: ATO Defense for Businesses and Consumers

First off, businesses need to take the lead in forming a stronger partnership with their customers. This means educating both customers and employees on proper security measures. Websites operating with user accounts, engaging individuals and corporations, often find themselves in the crosshairs of swindlers intent on ATO. We mentioned above that phishing is a common tactic. It’s imperative to consistently enlighten customers and employees about the looming menace of online security breaches like phishing, including how phishing attempts trick people and how to avoid being caught out. Adopt a vigilant stance on security by ingraining robust preventive protocols, including routine password updates and providing guidelines for safeguarding user credentials. ... Training does not end there. The MGM Resorts cyberattack we cited above also involved a fraudster tricking a customer support help desk. Businesses must train their staff on how to stop these attempted breaches — for example, by knowing how to ask questions that only a legitimate account holder could know the answer to.



Quote for the day:

"You may be good. You may even be better than everyone esle. But without a coach you will never be as good as you could be." -- Andy Stanley

Daily Tech Digest - November 11, 2023

Mika becomes world's first robot CEO

In the era where many workers are worrying about artificial intelligence (AI) replacing their jobs, one company has announced that it is hiring the first humanoid robot chief executive officer (CEO). Dictador, a spirit brand based in Colombia’s Cartagena, has gone viral for appointing Mika, a humanoid robot, as its CEO. Mika is a research project between Hanson Robotics and Dictador. It has been customised to represent the company's values. Hanson Robotics also created Sophia, the popular humanoid robot. ... At a recent event, Mika said, “My presence on this stage is purely symbolic. In reality, conferring an honorary professor title upon me is a tribute to the greatness of the human mind in which the idea of artificial intelligence was born. It is also a recognition of the courage and open-mindedness of the owner of Dictador, who entrusted his company to a humble spokesperson with a processor instead of a heart.” Emphasising how she is better than current CEOs, including Musk and Zuckerberg, she said, “In reality the notion of two powerful tech bosses having a cage fight is not a solution for improving the efficiency of their platforms”.


Four Recommendations to Improve the Cyber Resilience Act

Policymakers must take a more proportionate, risk-based approach to determining the risk level of a product with digital elements and offer greater certainty for manufacturers to ascertain if a product is a critical one. While the Commission’s original proposal categorised every product in several broad categories as critical, the co-legislators now have the opportunity to take a more sophisticated approach. We recommend leveraging the Council’s risk-based approach with some key amendments, outlined here. ... it is crucial that the reporting obligations are aligned with the NIS 2 Directive to streamline reporting requirements and to avoid an unmanageable reporting burden for manufacturers and responsible authorities. This means that reporting under the Act should be made to the CSIRTs under a single distributed reporting platform, and that incident reporting should only concern “significant incidents”, as outlined in the European Parliament’s text.


What is a digital transformation strategy? Everything you need to know

At its most basic level, a DX strategy is the use of digital technologies to create or reimagine how customers are served and how work gets done. A well-thought-out and well-crafted digital transformation strategy ensures an organization correctly identifies what products, services and work need to be created or reimagined to remain competitive. For nonprofits or government agencies, this might mean effectively and efficiently delivering on their missions. ... A thoughtful DX strategy also focuses the organization's attention, said Kamales Lardi, author of The Human Side of Digital Business Transformation and CEO of Lardi & Partner Consulting. More specifically, it focuses the organization on the most pressing digital initiatives -- those that deliver value toward meeting its enterprise-wide goals. Lardi said this approach keeps teams from pursuing initiatives that introduce new technologies without understanding how they'll deliver value or implementing transformation projects that only help segments of the enterprise.


SolarWinds Fires Back at SEC Fraud Charges

“We categorically deny those allegations,” SolarWinds’ blog post said. “The company had appropriate controls in place before SUNBURST. The SEC misleadingly quotes snippets of documents and conversations out of context to patch together a false narrative about our security posture.” SolarWinds’ blog post details what it says are false claims that the attack exploited a VPN vulnerability. Other technical issues regarding the company’s compliance with the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) are also defended in the post. “The SEC is mixing apples and oranges, underscoring its lack of cybersecurity experience,” the blog post charged. “… the SEC fundamentally misunderstands what it means to follow the NIST CSF.” However, much of the SEC’s complaint focuses on Brown’s alleged mishandling of controls that led to the breach. The SEC contends that Brown in 2018 and 2019 stated "the current state of security leaves us in a very vulnerable state for our critical assets," and that "access and privilege to critical systems/data is inappropriate."


Software Architecture Fundamentals: Building the Foundations of Robust Systems

Solutions architecture is the bridge between business requirements and software solutions. Architects in this domain transform business needs into comprehensive software designs, often through diagrammatic representations. They also evaluate the commercial impacts of various technology choices. Software architecture, the centerpiece of our discussion, is closely aligned with software development. It not only impacts the structural composition of software but also influences the organization’s structure. Software architects play a pivotal role in translating business objectives into concrete software components and their responsibilities, all while ensuring the system’s healthy evolution over time. ... In a distributed architecture, systems must adopt self-preservation mechanisms: avoid overloading a failing system, since excessive requests to a struggling system can exacerbate the situation; recognize that a slow system is often worse than an offline system in terms of user experience; and give the system a way to assess its own health.
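
The "avoid overloading a failing system" principle is commonly implemented as a circuit breaker. Below is a minimal sketch; the failure threshold and reset timeout are arbitrary choices for illustration.

```python
# Minimal circuit breaker: fail fast once a dependency looks sick,
# then allow a probe call after a cool-down period.
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after  # seconds before a retry is allowed
        self.failures = 0
        self.opened_at = None           # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Fail fast instead of piling more load on the dependency.
                raise RuntimeError("circuit open: skipping call")
            self.opened_at = None       # half-open: allow one probe call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0               # success: close fully again
        return result
```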


Building resilience-focused organizations

Arguably, the most important aspect of building resilient software systems is automation. It effectively reduces human error, speeds up repetitive tasks, and guarantees consistent configurations. Through the automation of deployment, monitoring, and scaling processes, software systems can quickly adapt to evolving conditions and recover from failures more efficiently. In order to automate build commands, Amazon created a centralized, hosted build system called Brazil. The main functions of Brazil are compiling, versioning, and dependency management, with a focus on build reproducibility. Brazil executes a series of commands to generate artifacts that can be stored and then deployed. To deploy these artifacts, Apollo was created. Apollo was developed to reliably deploy a specified set of software artifacts across a fleet of instances. Developers define the process for a single host, and Apollo coordinates that update across the entire fleet of hosts. Developers could simply push-deploy their application to development, staging, and production environments. No logging into the host, no commands to run.


What Are Data Sharing Agreements and Why Are They Important?

Before establishing data sharing agreements, it is crucial to have a clear understanding of their purpose and scope. These agreements serve as legal documents that outline the terms, conditions, and responsibilities of all parties involved in sharing data. By comprehending the purpose and scope, organizations can ensure that they establish agreements that effectively protect their interests and meet their objectives. The purpose of data sharing agreements is multifaceted. ... Several key factors must be considered: Data protection laws: Organizations must comply with data protection laws that govern the collection, storage, and sharing of personal information. Intellectual property rights: Data sharing agreements should address ownership rights of the shared data, including any intellectual property rights associated with it. Clear provisions on how the data can be used, reproduced, or modified should be included. Confidentiality and security: Agreements should outline measures to protect the confidentiality and security of shared data. This includes provisions for encryption, access controls, breach notification procedures, and liability for any breaches. 


Cyberattack Forces San Diego Hospital to Divert Patients

The attack on Tri-City Medical is among a rash of similarly disruptive ransomware and other cyber incidents that have been relentlessly hitting healthcare sector entities, including regional hospitals, in recent years, months and weeks. That includes an October ransomware attack on five hospitals in Ontario, Canada, and their shared IT services provider, which has been disrupting patient care at the facilities for several weeks and for which recovery work is expected to last into mid-December (see: Ontario Hospitals Expect Monthlong Ransomware Recovery). The Canadian hospitals have been directing many patients, including some cancer patients who need radiology treatment, to seek medical care elsewhere (see: 5 Ontario Hospitals Still Reeling From Ransomware Attack). A study released in January by the Ponemon Institute surveying 579 healthcare technology and security leaders says that patient care diversions due to ransomware are on the rise, with more organizations reporting diversions of patients to other facilities, up from 65% the year before.


Sure, real-time data is now 'democratized,' but it's only a start

"With platforms taking complexity away from the individual user or engineer, it has accelerated adoption across the industry. Innovation such as SQL support, help make it democratized and provide ease of access to the vast majority rather than a select few." ... Many companies' infrastructures aren't ready, and neither are the organizations themselves. "Some yet to understand or see the value of real-time while others are all-in, with solutions that were designed for streaming throughout the organization," says Raikmo. "Combining datasets in motion with advanced techniques such as watermarking and windowing, is not a trivial matter. It requires correlating multiple streams, combining the data in memory and producing merged stateful result sets, at enterprise scale and resilience." The good news is not every bit of data needs to be streaming or delivered in real time. "Organizations often fall into the trap of investing in resources to make every data point they visualize be in real time, even when it is not necessary," Jayaprakash points out. "However, this approach can lead to exorbitant costs and become unsustainable."


AI is the future of cybersecurity. This is how to adopt it securely

Used effectively, AI can help prevent vulnerabilities from being written in the first place—radically transforming the security experience. AI provides context for potential vulnerabilities and secure code suggestions from the start (though please still test AI-produced code). These capabilities enable developers to write more secure code in real time and finally realize the true promise of “shift left.” This is revolutionary. Traditionally, “shift left” typically meant getting security feedback after you’ve brought your idea to code, but before deploying it to production. But with AI, security is truly built in, not bolted on. There’s no further way to “shift left” than doing so in the very place where your developers are bringing their ideas to code, with their AI pair programmer helping them along the way. It’s an exciting new era where generative AI will be on the front line of cyber defense. However, it’s also important to note that, in the same way that AI won’t replace developers, AI won’t replace the need for security teams. We’re not at Level 5 self-driving just yet. 



Quote for the day:

"Nobody can go back and start a new beginning, but anyone can start today and make a new beginning." -- Maria Robinson

Daily Tech Digest - November 10, 2023

The promise of collective superintelligence

The goal is not to replace human intellect, but to amplify it by connecting large groups of people into superintelligent systems that can solve problems no individual could solve on their own, while also ensuring that human values, morals and interests are inherent at every level. This might sound unnatural, but it’s a common step in the evolution of many social species. Biologists call the phenomenon Swarm Intelligence and it enables schools of fish, swarms of bees and flocks of birds to skillfully navigate their world without any individual being in charge. They don’t do this by taking votes or polls the way human groups make decisions. Instead, they form real-time interactive systems that push and pull on the decision-space and converge on optimized solutions. ... Can we enable conversational swarms in humans? It turns out we can, by using a concept developed in 2018 called hyperswarms that divides real-time human groups into overlapping subgroups. ... Of course, enabling parallel groups is not enough to create a Swarm Intelligence. That’s because information needs to propagate across the population. This was solved using AI agents to emulate the function of the lateral line organ in fish.


There's Only One Way to Solve the Cybersecurity Skills Gap

The plain truth is that it's not just a numbers game. Many of these roles are considered "hard to fill" because they are for specialist skill sets such as forensic analysis, security architecture, interpreting malicious code, or penetration testing. Or they're for senior roles with three to six years' experience. Even if companies recruit people with high potential but not the requisite background, it will take years for these recruits to upskill to reach a sufficient standard. Moreover, if we throw open the gates completely, we risk diluting the industry by introducing a whole swath of people with no technical skills. Yes, soft skills are valuable and in short supply too, but relying on these alone to fill the workforce gap does nothing to address the problem businesses have: a lack of trained, competent cybersecurity professionals, resulting, once again, in less resilience. Another major hurdle is that many organizations are reluctant to invest in training because the job market is so volatile. There's a fear that, by investing in new recruits, those staff members will become a flight risk and put themselves back into that talent pool. 


The Struggle for Microservice Integration Testing

Integration testing is crucial for microservices architectures. It validates the interactions between different services and components, and you can’t successfully run a large architecture of isolated microservices without integration testing. In a microservices setup, each service is designed to perform a specific function and often relies on other services to fulfill a complete user request. While unit tests ensure that individual services function as expected in isolation, they don’t test the system’s behavior when services communicate with each other. Integration tests fill this gap by simulating real-world scenarios where multiple services interact, helping to catch issues like data inconsistencies, network latency and fault tolerance early in the development cycle. Integration testing provides a safety net for CI/CD pipelines. Without comprehensive integration tests, it’s easy for automated deployments to introduce regressions that affect the system’s overall behavior. By automating these tests, you can ensure that new code changes don’t disrupt existing functionalities and that the system remains robust and scalable.
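
A minimal integration test might look like the sketch below, written with pytest and requests. The two service URLs and endpoints are hypothetical stand-ins for services that must cooperate to fulfil one user request; the key point is that the test exercises the real service-to-service path rather than mocks.

```python
# Integration test: placing an order must reserve inventory in another service.
import requests

ORDERS_URL = "http://localhost:8001"     # assumed order service
INVENTORY_URL = "http://localhost:8002"  # assumed inventory service

def test_placing_order_reserves_inventory():
    before = requests.get(f"{INVENTORY_URL}/items/sku-1", timeout=5).json()["available"]

    # Exercise the real cross-service path, not a mocked one.
    resp = requests.post(f"{ORDERS_URL}/orders",
                         json={"sku": "sku-1", "quantity": 2}, timeout=5)
    assert resp.status_code == 201

    after = requests.get(f"{INVENTORY_URL}/items/sku-1", timeout=5).json()["available"]
    assert after == before - 2  # both services agreed on the state change
```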


Google Cloud’s Cybersecurity Trends to Watch in 2024 Include Generative AI-Based Attacks

Threat actors will use generative AI and large language models in phishing and other social engineering scams, Google Cloud predicted. Because generative AI can create natural-sounding content, employees may struggle to identify scam emails through poor grammar or spam calls through robotic-sounding voices. Attackers could use generative AI to create fake news or fake content, Google Cloudwarned. LLMs and generative AI “will be increasingly offered in underground forums as a paid service, and used for various purposes such as phishing campaigns and spreading disinformation,” Google Cloud wrote. On the other hand, defenders can use generative AI in threat intelligence and data analysis. Generative AI could allow defenders to take action at greater speeds and scales, even when digesting very large amounts of data. “AI is already providing a tremendous advantage for our cyber defenders, enabling them to improve capabilities, reduce toil and better protect against threats,” said Phil Venables, chief information security officer at Google Cloud, in an email to TechRepublic.


OpenAI’s gen AI updates threaten the survival of many open source firms

The new API, according to OpenAI, is expected to provide new capabilities including a Code Interpreter, Retrieval Augmented Generation (RAG), and function calling to handle “heavy lifting” that would previously require developer expertise in order to build AI-driven applications. The Assistants API, specifically, may cause revenue losses for open source companies including LangChain, LlamaIndex, and ChromaDB, according to Andy Thurai, principal analyst at Constellation Research. “For organizations that want to standardize on OpenAI, the more their platform offers, the less organizations will need other frameworks such as Langchain and LlamaIndex. The new updates allow developers to create their applications within a single framework,” said David Menninger, executive director at Ventana Research. However, he pointed out that until the new features, such as the new API, are made generally available, enterprises will continue to put applications into production by relying on existing open source frameworks.
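
For readers unfamiliar with RAG, the toy sketch below shows the core loop these frameworks compete on: retrieve the most relevant snippet for a query, then prepend it to the prompt. The bag-of-words "embedding" is a deliberate simplification; real systems use learned embeddings and a vector store.

```python
# Toy retrieval-augmented generation: retrieve, then prompt.
import math
from collections import Counter

docs = ["Resets require the admin console.",
        "Invoices are emailed on the 1st.",
        "Support is open 9-5 weekdays."]

def embed(text):
    return Counter(text.lower().split())  # toy bag-of-words "embedding"

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

query = "when are invoices sent?"
best = max(docs, key=lambda d: cosine(embed(query), embed(d)))
prompt = f"Context: {best}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # an LLM call would go here; the context grounds its answer
```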


When net-zero goals meet harsh realities

There is a move towards greater precision and accountability at the non-governmental level, too. The principles of carbon emission measurement and reporting that underpin, for example, all corporate net-zero objectives tend to be agreed upon internationally by institutions such as the World Resources Institute and the World Business Council for Sustainable Development; in turn, these are used by bodies such as the SBTi and the CDP. Here too, standards are being rewritten, so that, for example, the use of carbon offsets is becoming less acceptable, forcing operators to buy carbon-free energy directly. With all these developments under way, there is a startling disconnect between many of the public commitments by countries and companies, and what most digital infrastructure organizations are currently doing or are able to do. ... The difference between the two surveys highlights a second disconnect. IBM’s findings, based on responses from senior IT and sustainability staff, show a much higher proportion of organizations collecting carbon emission data than Uptime’s.


CISOs Beware: SEC's SolarWinds Action Shows They're Scapegoating Us

The SEC had been trying to create accountability by holding a board accountable and liable for issues concerning cybersecurity incidents that inevitably occur from time to time. But now, in the case of SolarWinds, the SEC has turned around and directly gone after somebody who only recently became the CISO. Brown wasn't the CISO when the breaches happened. He had been SolarWinds' VP of security and architecture and head of its information security group between July 2017 and December 2020, and he stepped into the role of CISO in January 2021. The result of the SEC's failure to mandate security leadership on corporate boards is that they've resorted to holding the CISO liable. This shift underscores a significant transformation in the CISO landscape. From my perspective as a CISO, it's increasingly clear that technical security expertise is an essential requirement for the role. Each day, CISOs are tasked with making critical decisions, such as approving or accepting timeline adjustments for security risks that have the potential to be exploited.


Security in the impending age of quantum computers

The timeline for developing a cryptographically relevant quantum computer is highly contested, with estimates often ranging between 5 and 15 years. Although such a quantum computer remains in the future, this does not mean the problem can be left to future CIOs and IT professionals. The threat is live today due to the threat of “harvest now, decrypt later” attacks, whereby an adversary stores encrypted communications and data gleaned through classical cyberattacks and waits until a cryptographically relevant quantum computer is available to decrypt the information. To further highlight this threat, the encrypted data could be decrypted long before a cryptographically relevant quantum computer is available if the data is secured via weak encryption keys. While some data clearly loses its value in the short term, social security numbers, health and financial data, national security information, and intellectual property retain value for decades and the decryption of such data on a large scale could be catastrophic for governments and companies alike.


How the Online Safety Act will impact businesses beyond Big Tech

The requirements that apply to all regulated services, including those outside the special categories, are naturally the least onerous under the Act; however, because these still introduce new legal obligations, for many businesses these will require considering compliance through a new lens. ... Regulated services will have to conduct certain risk assessments at defined intervals. The type of risk assessments a service provider must conduct depends on the nature and users of the service. Illegal content assessment: all providers of regulated services must conduct a risk assessment of how likely users are to encounter and be harmed by illegal content, taking into account a range of factors including user base, design and functionalities of the service and its recommender systems, and the nature and severity of harm that individuals might suffer due to this content. ... all regulated services must carry out an assessment of whether the service is likely to be accessed by children, and if so they must carry out a children’s risk assessment of how likely children are to encounter and be harmed by content on the site, giving separate consideration to children in different age groups.


Enterprises vs. The Next-Generation of Hackers – Who’s Winning the AI Race?

Amidst a push for responsible AI development, major players in the space are on a mission to secure their tools from malicious use but bad actors have already started to take advantage of the same tech to boost their skill sets. Enterprises are increasingly finding new ways to integrate AI into internal workflows and external offerings, which in turn has created a new attack vector for hackers. This expanded surface has opened the door for a new wave of sophisticated attacks using advanced methods and unsuspecting entry points that enterprises previously didn’t have to secure against. ... Today’s threat landscape is transforming — hackers have tools at their fingertips that can rapidly advance their impact and an entirely new attack vector to explore. With growing enterprise use of AI offering an opportunity to expedite attacks, now is the time to focus on transforming security defenses. ... Despite scrutiny for its ability to equip cybercriminals with more advanced techniques, AI models can be used just as effectively among security and IT teams to mitigate these mounting threats. 



Quote for the day:

"Doing what you love is the cornerstone of having abundance in your life." -- Wayne Dyer

Daily Tech Digest - November 09, 2023

MIT Physicists Transform Pencil Lead Into Electronic “Gold”

MIT physicists have metaphorically turned graphite, or pencil lead, into gold by isolating five ultrathin flakes stacked in a specific order. The resulting material can then be tuned to exhibit three important properties never before seen in natural graphite. ... “We found that the material could be insulating, magnetic, or topological,” Ju says. The latter is somewhat related to both conductors and insulators. Essentially, Ju explains, a topological material allows the unimpeded movement of electrons around the edges of a material, but not through the middle. The electrons are traveling in one direction along a “highway” at the edge of the material separated by a median that makes up the center of the material. So the edge of a topological material is a perfect conductor, while the center is an insulator. “Our work establishes rhombohedral stacked multilayer graphene as a highly tunable platform to study these new possibilities of strongly correlated and topological physics,” Ju and his coauthors conclude in Nature Nanotechnology.


Conscientious Computing – Facing into Big Tech Challenges

The tech industry has driven incredibly rapid innovation by taking advantage of increasingly cheap and more powerful computing – but at what unintended cost? What collateral damage has been created in our era of “move fast and break things”? Sadly, it’s now becoming apparent we have overlooked the broader impacts of our technological solutions. As software proliferates through every facet of life and the scale of it increases, we need to think more about where this leads us from people, planet and financial perspectives. ... The classic Scope, Cost, Time pyramid – but often it’s the observable functional quality that is prioritised. For that I’ll use a somewhat surreal version of an iceberg – as so much technical debt (and, effectively, sustainability debt – a topic for a future blog) is hidden below the water line. Every engineering decision (or indecision) has ethical and sustainability consequences, often invisible from within our isolated bubbles. Just as the industry has had to raise its game on topics such as security, privacy and compliance, we desperately need to raise our game holistically on sustainability.


The CIO’s fatal flaw: Too much leadership, not enough management

So why does leadership get all the buzz? A cynic might suggest that the more respect doing-the-work gets, the more the company might have to pay the people who do that work, which in turn would mean those who manage the work would get paid more than those who think and charismatically express deep and inspirational thoughts. And as there are more people who do work than those who manage it, respecting the work and those who do it would be expensive. Don’t misunderstand. Done properly, leading is a lot of work, and because leading is about people, not processes or tools and technology, it’s time-consuming, too. And in fact, when I conduct leadership seminars, the biggest barrier to success for most participants is figuring out and committing to their time budget. Leadership, that is, involves setting direction, making or facilitating decisions, staffing, delegating, motivating, overseeing team dynamics, engineering the business culture, and communicating. Leaders who are committed to improving at their trade must figure out how much time they plan to devote to each of these eight tasks, which is hard enough.


The Next IT Challenge Is All about Speed and Self-Service

One of the most significant roadblocks to rapid cloud adoption is sheer complexity. Provisioning a cloud environment involves dozens of dependent services, intricate configurations, security policies and data governance issues. The cognitive load on IT teams is significant, and the situation is exacerbated by manual processes that are still in place. The vast majority of engineering teams still depend on legacy ticketing systems to request cloud environments from IT, which adds a significant load on IT and also slows engineering teams. This slows down the entire operation, making it difficult for IT and engineering to support business needs effectively. In fact, in one study conducted by Rafay Systems, application developers at enterprises revealed that 25% of organizations reportedly take three months or longer to deploy a modern application or service after its code is complete. The real goal for any IT department is to support the needs of the business. Today, they do that better, faster and more cost-effectively by leveraging cloud technologies to realize all the business benefits of the modern applications being deployed.


The DPDP Act: Bolstering data protection & privacy, making India future-ready

The DPDP Act has a direct impact across industries. Organisations not only need to reassess their existing compliance status and gear up to cope with the new norms but also create a phased action plan for various processes. Moreover, if labeled as a Significant Data Fiduciary (SDF), organisations also need to appoint a Data Protection Officer (DPO). In addition, organisations need to devise an appropriate data protection and privacy policy framework in alignment with the DPDP Act. Further, consent forms and mechanisms have to be developed to ensure standard procedures as laid out in the legislation. Companies have to additionally invest to adopt the necessary changes in compliance with the law. They need to list down their third-party data handlers, consent types and processes, privacy notices, contract clauses, categorise data, and develop breach management processes. Sharing his perspective on the DPDP Act, Amit Jaju, Senior Managing Director, Ankura Consulting Group (India) says, “The Digital Personal Data Protection Act 2023 has ushered in a new era of data privacy and protection, compelling solution providers to realign their business strategies with its mandates.”


Will AI hurt or help workers? It's complicated

Here's what is certain: CIOs see AI as being useful, but not replacing higher-level workers. JetRockets recently surveyed US CIOs. In its report, How Generative AI is Impacting IT Leaders & Organizations, the custom-software firm found that CIOs are primarily using AI for cybersecurity and threat detection (81%), with predictive maintenance and equipment monitoring (69%) and software development / product development (68%) in second and third place, respectively. Security, you ask? Yes, security. CrowdStrike, a security company, sees a huge demand building for AI-based security virtual assistants. A Gartner study on virtual assistants predicted, "By 2024, 40% of advanced virtual assistants will be industry-domain-specific; by 2025, advanced virtual assistants will provide advisory and intervention roles for 30% of knowledge workers, up from 5% in 2021." By CrowdStrike's reckoning, AI will "help organizations scale their cybersecurity workforce by three times and reduce operating costs by close to half a million dollars." That's serious cash.


From Chaos to Confidence: The Indispensable Role of Security Architecture

Beyond mere firefighting, security architecture embraces the proactive art of strategic defense. It takes a risk-based approach to identifying potential threats, assessing weak points in an organization's IT stack, architecting forward-looking designs and prioritizing security initiatives. By aligning security investments with the organization's risk tolerance and business priorities, security architecture ensures that precious resources are optimally allocated for maximum defense, designed with defense-in-depth and zero-trust security principles in mind. This reduces enterprise application deployment and operational security costs. It is similar to designing high-rise buildings in a standard manner, following all safety codes and by-laws while still allowing individual apartment owners to design and create their homes as they would prefer. Cyberattacks have become increasingly sophisticated and frequent. As a result, it is imperative for defense systems to have comprehensive, purpose-built architectures and designs in place to protect against such threats. Security architecture provides a complete defense framework by integrating various security components.


Top 5 IT disaster scenarios DR teams must test

Failed backups are some of the most frequent IT disasters. Businesses can replace hardware and software, but if the data and all backups are gone, bringing them back might be impossible or incredibly expensive. Sys admins must periodically test their ability to restore from backups to ensure backups are working correctly and the restore process does not have some unseen fatal flaw. At the same time, there should always be multiple generations of backups, with some of those backup sets off site. ... Hardware failure can take many forms, including a system not using RAID, a single disk loss taking down a whole system, faulty network switches and power supply failures. Most hardware-based IT disaster scenarios can be mitigated with relative ease, but at the cost of added complexity and a price tag. One example is a database server. Such a server can be turned into a database cluster with highly available storage and networking. The cost for doing this would easily double the cost of a single nonredundant server. Administrators would also have to undergo training to manage such an environment.
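
A periodic restore test can be automated along the lines of the sketch below, which restores a backup to a scratch location and verifies file checksums against a manifest. The paths and manifest format are assumptions for illustration; the restore step would normally invoke the real backup tool rather than a file copy.

```python
# Automated restore test: restore the latest backup set to a scratch
# directory and verify every file against recorded SHA-256 checksums.
import hashlib
import json
import pathlib
import shutil

BACKUP_DIR = pathlib.Path("/backups/latest")   # hypothetical backup set;
RESTORE_DIR = pathlib.Path("/tmp/restore-test")  # manifest.json maps
                                                 # relative path -> sha256

def sha256(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def test_restore() -> bool:
    shutil.rmtree(RESTORE_DIR, ignore_errors=True)
    shutil.copytree(BACKUP_DIR, RESTORE_DIR)  # stand-in for a real restore
    manifest = json.loads((RESTORE_DIR / "manifest.json").read_text())
    bad = [name for name, digest in manifest.items()
           if sha256(RESTORE_DIR / name) != digest]
    if bad:
        print("RESTORE TEST FAILED:", bad)     # page the on-call admin
        return False
    print(f"Restore verified: {len(manifest)} files intact")
    return True

if __name__ == "__main__":
    test_restore()
```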


Mastering AI Quality: Strategies for CDOs and Tech Leaders

Most chief data officers (CDOs) work hard to make their data operations into “glass boxes” -- transparent, explainable, explorable, trustworthy resources for their companies. Then comes artificial intelligence and machine learning (AI/ML), with their allure of using that data for ever-more impressive strategic leaps, efficiencies, and growth potential. However, there’s a problem. Nearly all AI/ML tools are “black boxes.” They are so inscrutable even their creators are concerned about how they produce their results. The speed and depth at which these tools can process data without human intervention or input presents a danger to technology leaders seeking control of their data and who want to ensure and verify the quality of analytics that use it. Combine this with a push to remove humans from the decision loop and you have a potent recipe for decisions to go off the rails. ... With a human collaborator or a human-designed algorithm, it is generally easy to elicit a meaningful response to the question, “Why is this result what it is?” With AI -- and generative AI in particular -- that may not be the case.


Revamping IT for AI System Support

“It’s important for everybody to understand how fast this [AI] is going to change,” said Eric Schmidt, former CEO and chairman of Google. “The negatives are quite profound.” Among the concerns is that AI firms still had “no solutions for issues around algorithmic bias or attribution, or for copyright disputes now in litigation over the use of writing, books, images, film, and artworks in AI model training. Many other as yet unforeseen legal, ethical, and cultural questions are expected to arise across all kinds of military, medical, educational, and manufacturing uses.” The challenge for companies and for IT is that the law always lags technology. There will be few hard and fast rules for AI as it advances relentlessly. So, AI runs the risk of running off ethical and legal guardrails. In this environment, legal cases are likely to arise that define case law and how AI issues will be addressed. The danger for IT and companies is that they don’t want to become the defining cases for the law by getting sued. CIOs can take action by raising awareness of AI as a corporate risk management concern to their boards and CEOs.



Quote for the day:

"Holding on to the unchangeable past is a waste of energy and serves no purpose in creating a better future." -- Unknown