
Daily Tech Digest - November 05, 2023

Less Code Alternatives to Low Code

Embracing a “minimalist coding” philosophy is foundational. It’s anchored in a gravitation toward clarity, prompting you to identify the indispensable elements in your code, and then discard the rest. Is there a more succinct solution? Can a tool achieve this outcome with less code? Am I building something unique and valuable or rehashing solved problems? Every line of code must be viewed for the potential value it delivers and the future burden it represents. Reduce that burden by avoiding or removing code when you can and leveraging the work of others. ... Modern frameworks offer a significant enhancement to development productivity, primarily by reducing the amount of code written to perform common tasks. Additionally, the underlying code of the framework is tested and maintained by the community, alleviating peripheral maintenance burdens. The same goes for code generators; they’re not merely about avoiding repetitive keystrokes, but about ensuring that the generated code itself is consistent and efficient. 
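The trade-off between hand-rolled code and community-maintained libraries can be seen in a toy example. The functions below are purely illustrative (not from any cited tool): both compute the most frequent words in a string, but the second delegates the work to the standard library.

```python
from collections import Counter

def top_words_by_hand(text, n):
    # Hand-rolled frequency count: more code to write, test, and maintain.
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:n]

def top_words_with_stdlib(text, n):
    # Same outcome in one line; the library code is community-tested.
    return Counter(text.lower().split()).most_common(n)
```

Both return the same result, but the second version carries almost no future maintenance burden, which is the point of the minimalist philosophy.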


Software Deployment Security: Risks and Best Practices

Blue-Green Deployment is a release management strategy designed to reduce downtime and risk by running two identical production environments, known as Blue and Green. At any time, only one of these environments is live, serving all production traffic. The primary security implication of Blue-Green deployment is the risk of data inconsistency during the switchover. If not properly managed, sensitive data could be exposed, lost or corrupted. Furthermore, because two environments are maintained, security measures must be duplicated, potentially leading to inconsistencies and vulnerabilities if not properly managed. ... Canary deployment is a strategy where new software versions are gradually rolled out to a small subset of users before being deployed to the entire infrastructure. This strategy allows teams to test and monitor the performance of the new release in a live environment with less risk. Because a canary release reaches only a smaller user base, it can surface vulnerabilities before a full-scale rollout; however, if a vulnerability is exploited during this stage, it could lead to a security breach affecting that subset of users.
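The blue-green mechanics described above can be sketched in a few lines. The class and method names below are invented for illustration and do not model any particular deployment tool:

```python
class BlueGreenRouter:
    """Toy model of a blue-green switchover (illustrative, not a real tool)."""

    def __init__(self):
        self.environments = {"blue": "v1.0", "green": None}
        self.live = "blue"  # only one environment serves production traffic

    def deploy_to_idle(self, version):
        # New releases always go to the environment NOT serving traffic.
        idle = "green" if self.live == "blue" else "blue"
        self.environments[idle] = version
        return idle

    def switch(self):
        # The switchover is the security-critical moment: data written to the
        # old live environment after this point risks being lost or exposed.
        idle = "green" if self.live == "blue" else "blue"
        if self.environments[idle] is None:
            raise RuntimeError("idle environment has no release deployed")
        self.live = idle
```

The single `switch` step is what makes rollback fast, but it is also where the data-consistency risk in the article concentrates.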


It’s time to take your genAI skills to the next level

The workforce of the future will learn AI in school and during the next 15 years, each successive generation of graduates will likely have much stronger AI kung fu than the last. In fact, my own son owns a Silicon Valley-based startup called Chatterbox, which exists to teach AI literacy to kids as young as eight years old. Learning AI at that age is unimaginable to adults currently in the workforce. Young workers entering the workforce will have a vastly superior knowledge of, and ability with, AI than the workforce that went to school before the LLM-based genAI revolution of 2022 and 2023. That’s why one of the smartest things you can do now, regardless of your specific occupation, is to get very serious about learning a lot more about genAI. “Prompt engineering” — the ability to use words to get output from genAI tools — is the skill of the year. But it’s only a matter of time before basic proficiency in prompt engineering becomes commonplace and banal. It’s important to set yourself apart from the crowd by going further and really studying how generative AI works, its limitations and potentialities, and the ethical and legal issues around its output.


Why digital banking is a crucial financial literacy skill for kids

By starting early and providing guidance, parents and educators play a crucial role in helping children develop strong financial literacy skills. Digital banking not only enables children to understand the mechanics of money but also fosters a healthy relationship with finances. For instance, some innovative neobanks in India are currently providing prepaid cards that are exceptionally user-friendly and intuitive. These cards offer a unique opportunity for children to develop crucial financial literacy skills, such as prudent money management, efficient budgeting, and smart savings habits. ... The positive impact of early financial education, including digital banking literacy, on long-term financial well-being cannot be overstated. Introducing children to digital banking at a young age provides them with the knowledge and skills needed to make informed financial decisions throughout their lives. It not only equips them with the tools to navigate the cashless economy effectively but also fosters financial independence, responsibility, and resilience in the face of evolving financial challenges.


Mastering a multi-cloud environment

It is essential to understand the challenges that exist while creating a robust multi-cloud architecture. You need to incorporate the right set of tools and technologies to support workload placement across diverse platforms and services. A solid operating model to effectively manage multi-cloud use is imperative – breaking it down into process security, technology, financial operations and people and skills. One of the keys is aligning IT service management with your multi-cloud operating model – implementing the right technology to effectively operate, manage, monitor and secure resources and services among providers – from data management, governance and security to vendor licenses, contracts and more. ... In today’s fast-changing and threat-laden environment, a new approach to resilience is indispensable – one that helps ensure your ability to ‘bounce back’ quickly from disruptions and maintain application availability. New functional capabilities and skills to embed resilience through design is the way forward and it will likely require businesses to give resilience greater priority as they invest in innovation.


Do we have enough GPUs to manifest AI’s potential?

The current production and availability of GPUs are insufficient to manifest AI’s ever-evolving potential. Many businesses face challenges in obtaining the necessary hardware for their operations, dampening their capacity for innovation. As manufacturers continue ramping up GPU unit production, many companies are already being hobbled by limited GPU availability. According to Fortune, OpenAI CEO Sam Altman privately acknowledged that GPU supply constraints were impacting the company’s business. ... Exploring alternative hardware to power AI applications presents a viable route for organizations striving for efficient processing. Depending on the specific AI workload requirements, CPUs, field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs) may be excellent alternatives. FPGAs, known for their customizable nature, and ASICs, designed for a particular use case, both have the potential to handle AI tasks effectively. However, it’s crucial to note that these alternatives might exhibit different performance characteristics and trade-offs.
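One way organizations handle this in practice is a fallback chain across the hardware options above. The sketch below is an assumption-laden toy: the preference order and backend names are invented, and a real system would query actual drivers and runtimes rather than a set of strings.

```python
# Illustrative preference order mirroring the trade-offs discussed above:
# GPUs first, then special-purpose silicon, then general-purpose CPUs.
PREFERENCE = ["gpu", "asic", "fpga", "cpu"]

def pick_backend(available):
    """Return the most preferred compute backend that is actually available."""
    for backend in PREFERENCE:
        if backend in available:
            return backend
    raise RuntimeError("no supported compute backend found")
```

When GPUs are scarce, the chain degrades gracefully to whatever hardware the workload can tolerate, at the cost of the different performance characteristics the article notes.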


The state of API security in 2023

Only 38% of organizations have solutions that enable them to understand the context between API activities, user behaviors, data streams, and code execution. In hyper-connected digital ecosystems, understanding this data is crucial. An anomaly in user behavior or a suspicious data flow might be early indicators of a breach attempt or a vulnerability exploitation. Moreover, the capability to tailor security responses based on dynamic threat parameters is indispensable. While generalized security protocols can thwart common threats, customized defenses based on threat actors, compromised tokens, IP abuse velocity, geolocations, IP ASNs, and specific attack patterns can be the difference between a repelled threat and a security breach. Yet most organizations do not have this capability. Lastly, companies continue to overlook the need to monitor and understand the communication patterns between API endpoints and application services. An API might be functioning as intended, but if its communication pattern is anomalous or its interactions with other services are unexpected, it could be an indicator of underlying vulnerabilities or misconfigurations.
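The endpoint-communication monitoring described above can be approximated with a toy baseline model. This is purely illustrative; real products correlate far richer context (user behavior, data flows, code execution) than caller/callee pairs.

```python
def build_baseline(observed_calls):
    """Learn which service-to-service calls are 'normal'.

    observed_calls: iterable of (caller, callee) pairs collected
    during a trusted observation window.
    """
    return set(observed_calls)

def flag_anomalies(baseline, new_calls):
    # Any communication pair outside the learned baseline is flagged
    # for review as a possible misconfiguration or compromise.
    return [pair for pair in new_calls if pair not in baseline]
```

An API can be functioning correctly while still producing flagged pairs here, which is exactly the class of signal the article says most organizations currently miss.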


AI Safety? Rishi Sunak is all in for Elon Musk's work-free fantasy

“There will come a point where no job is needed,” Musk said. “You can have a job if you want to have a job, for personal satisfaction, but the AI will be able to do everything.” In this world, everyone would have what they wanted: “Not a universal basic income, we'll have universal high income.” Musk didn’t give any ideas on how that world will appear, either because he didn’t think he had to, or he didn’t want to face up to the idea that billionaires might have to learn to share. Instead, he cited the Culture science fiction novels of Iain M. Banks, which really just do the same thing better, placing people in a future quasi-Utopia without giving any suggestion of how society transitioned from capitalism to a world of freely available high-tech. The real frustration of the AI Safety Summit, on display in the Musk-Sunak show, is that knowledge is power. Musk and the tech billionaires have both, while our elected representatives have neither [even if Sunak and his wife are personally close to billionaire status themselves].

Digital risk: Time to merge cyber security and data privacy

Taking an integrated business approach to managing digital risk delivers a number of key benefits to organisations. Firstly, it can help to bring forward digital transformation initiatives because the data classification and compliance that companies are undertaking across the business for various purposes is aligned and coordinated. Secondly, a digital risk function that conducts comprehensive assessments of third-party and supply chain digital risk is better positioned to ensure that risk is considered across the organisation. One way to do this is by pre-approving vendors from a risk perspective. “Businesses can digitally transform quicker if they do the supplier approval process up front,” says James Arthur, Partner, Head of Cyber Consulting, Grant Thornton UK. “It’s a lot easier to do this if you have a single digital risk function that proactively assesses cyber security and privacy risk together.” Thirdly, businesses continue to use new technologies to seek out commercial advantage, meaning their approach to data privacy and cyber security also needs to continually evolve, to address new threats and vulnerabilities. 


Where Does Cybersecurity Fit Into the Acquisition Process?

Acquiring and integrating an outside company also means inheriting a brand-new set of cybersecurity risks -- both direct and third-party. “If we make an acquisition, a lot of our customers will request to gain some understanding of the security of the company [we] acquired,” Huber explains. How will a company manage those newly acquired risks? Answering that question takes time and comes with a learning curve. Due diligence plays a big role in uncovering those risks, but the possibility that an unknown risk will emerge following the closing of a deal is almost certain. “I think that is always going to happen,” says Huber. “It’s not [a challenge] you can really plan for other than knowing that something’s going to happen.” Acquisitions can take months or quarters from deal consideration to closing. The first part of that process involves vetting the potential fit from business and technical perspectives. Once an acquisition appears to be a promising fit, the acquiring organization must go through its entire due diligence playbook to understand the opportunities and risks associated with its target.



Quote for the day:

"If it wasn't hard, everyone would do it. The hard is what makes it great." -- Tom Hanks

Daily Tech Digest - April 21, 2023

A team of ex-Apple employees wants to replace smartphones with this AI projector

It's a seamless blend of technology and human interaction that Humane believes can extend to daily schedule run-downs, seeing map directions, and receiving visual aids for cooking or when fixing a car engine -- as suggested by the company's public patents. The list goes on. Chaudhri also demoed the wearable's voice translator which converted his English into French while using an AI-generated voice to retain his tone and timbre, as reported by designer Michael Mofina, who watched the recorded TED Talk before it was taken down. Mofina also shared an instance when the wearable was able to recap the user's missed notifications without sounding invasive, framing them as, "You got an email, and Bethany sent you some photos." Perhaps the biggest draw to Humane and its AI projector is the team behind it. That roster includes Chaudhri, a former Director of Design at Apple who worked on the Mac, iPod, iPhone, and other prominent devices, and Bethany Bongiorno, also from Apple, who was heavily involved in the software management of iOS and MacOS.


Three issues with generative AI still need to be solved

Generative AI uses massive language models, it’s processor-intensive, and it’s rapidly becoming as ubiquitous as browsers. This is a problem because existing, centralized datacenters aren’t structured to handle this kind of load. They are I/O-constrained, processor-constrained, database-constrained, cost-constrained, and size-constrained, making a massive increase in centralized capacity unlikely in the near term, even though the need for this capacity is going vertical. These capacity problems will increase latency, reduce reliability, and over time could throttle performance and reduce customer satisfaction with the result. The need is for a more hybrid approach where the AI components necessary for speed are retained locally (on devices) while the majority of the data resides centrally to reduce datacenter loads and decrease latency. Without a hybrid solution — where smartphones and laptops can do much of the work — use of the technology is likely to stall as satisfaction falls, particularly in areas such as gaming, translation, and conversations where latency will be most annoying.
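One way to picture the hybrid approach is a dispatch policy that keeps latency-sensitive, small-model work on the device and sends heavy work to the datacenter. The thresholds and field names below are invented purely for illustration:

```python
def route_request(task):
    """Toy hybrid dispatch: keys and thresholds are illustrative assumptions.

    task: dict with optional 'max_latency_ms' (latency budget) and
    'model_size_gb' (footprint of the model the task needs).
    """
    latency_sensitive = task.get("max_latency_ms", 1000) < 100
    fits_on_device = task.get("model_size_gb", 0) <= 4
    if latency_sensitive and fits_on_device:
        return "on-device"   # fast path: conversation, translation, gaming
    return "datacenter"      # heavy path: large models, batch work
```

A policy like this is what lets smartphones and laptops absorb the interactive load while the datacenter handles only what genuinely needs central capacity.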


Exploring The Incredible Capabilities Of Auto-GPT

The first notable application is code improvement. Auto-GPT can read, write and execute code and thus can improve its own programming. The AI can evaluate, test and update code to make it faster, more reliable, and more efficient. In a recent tweet, Auto-GPT’s developer, Significant Gravitas, shared a video of the tool checking a simple example function responsible for math calculations. While this particular example contained only a simple syntax error, the AI corrected the mistake in roughly a minute, a fix that could take a human much longer in a codebase containing hundreds or thousands of lines. ... The second notable application is in building an app. Auto-GPT detected that Varun Mayya was missing the Node.js runtime environment he needed to build an app. Auto-GPT searched for installation instructions, downloaded and extracted the archive, and then started a Node server to continue with the job. While Auto-GPT made the installation process effortless, Mayya cautions against using AI for coding unless you already understand programming, as it can still make errors.


The Best (and Worst) Reasons to Adopt OpenTelemetry

Gathering telemetry data can be a challenge, and with OpenTelemetry now handling essential signals like metrics, traces and logs, you might feel the urge to save your company some cash by building your own system. As a developer myself, I totally get that feeling, but I also know how easy it is to underestimate the effort involved by just focusing on the fun parts when kicking off the project. No joke, I’ve actually seen organizations assign teams of 50 engineers to work on their observability stack, even though the company’s core business is something else entirely. Keep in mind that data collection is just a small part of what observability tools do these days. The real challenge lies in data ingestion, retention, storage and, ultimately, delivering valuable insights from your data at scale. ... At the very least, auto-instrumentation will search for recognized libraries and APIs and then add some code to indicate the start and end of well-known function calls. Additionally, auto-instrumentation takes care of capturing the current context from incoming requests and forwarding it to downstream requests.
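Conceptually, auto-instrumentation wraps recognized function calls to record the start and end of a span, as described above. The decorator below is a toy sketch of that idea, not the OpenTelemetry API, and the span structure is invented for illustration:

```python
import functools
import time

SPANS = []  # collected telemetry; a real SDK would export these downstream

def instrument(func):
    """Toy auto-instrumentation: record a span around a known function call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return func(*args, **kwargs)
        finally:
            # The span marks the start and end of the call, even on error.
            SPANS.append({"name": func.__name__,
                          "duration_s": time.monotonic() - start})
    return wrapper

@instrument
def handle_request(path):
    return f"200 OK {path}"
```

Real auto-instrumentation additionally captures the context of incoming requests and forwards it to downstream calls, which is the part that makes distributed traces line up across services.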


OpenAI’s hunger for data is coming back to bite it

The Italian authority says OpenAI is not being transparent about how it collects users’ data during the post-training phase, such as in chat logs of their interactions with ChatGPT. “What’s really concerning is how it uses data that you give it in the chat,” says Leautier. People tend to share intimate, private information with the chatbot, telling it about things like their mental state, their health, or their personal opinions. Leautier says it is problematic if there’s a risk that ChatGPT regurgitates this sensitive data to others. And under European law, users need to be able to get their chat log data deleted, he adds. OpenAI is going to find it near-impossible to identify individuals’ data and remove it from its models, says Margaret Mitchell, an AI researcher and chief ethics scientist at startup Hugging Face, who was formerly Google’s AI ethics co-lead. The company could have saved itself a giant headache by building in robust data record-keeping from the start, she says. Instead, it is common in the AI industry to build data sets for AI models by scraping the web indiscriminately and then outsourcing the work of removing duplicates or irrelevant data points, filtering unwanted things, and fixing typos.


Executive Q&A: The State of Cloud Analytics

Many businesses are trying hard right now to stay profitable during these times of economic uncertainty. The startling takeaway to us was that business and technical leaders see cloud analytics as the tool -- not a silver bullet, but a critical component -- for staying ahead of the pack in the current economic climate. Not only that, organizations need to do more with less and, as it turns out, cloud analytics is not only a wise investment during good economic times, but also in more challenging economic times. Businesses reap benefits from the same solution (cloud analytics) in either scenario. For example, cloud analytics is typically more cost-effective than on-premises analytics solutions because it eliminates the need for businesses to invest in expensive hardware and IT infrastructure. It also offers the flexibility businesses need to quickly experiment with new data sources, analytics tools, and data models to get better insights -- without having to worry about the underlying infrastructure.


AI vs. machine learning vs. data science: How to choose

It's a common topic for organizational leaders—they want to be able to articulate the core differences between AI, machine learning (ML), and data science (DS). However, sometimes they do not understand the nuances of each and thus struggle to strategize their approach to things such as salaries, departments, and where they should allocate their resources. Software-as-a-Service (SaaS) and e-commerce companies specifically are being advised to focus on an AI strategy without being told why or what that means exactly. Understanding the complexity of the tasks you aim to accomplish will determine where your company needs to invest. It is helpful to quickly outline the core differences between each of these areas and give better context to how they are best utilized. ... To decide whether your company needs to rely on AI, ML, or data science, focus on one principle to begin: Identify the most important tasks you need to solve and let that be your guide.


The strong link between cyber threat intelligence and digital risk protection

ESG defined cyber threat intelligence as, “evidence-based actionable knowledge about the hostile intentions of cyber adversaries that satisfies one or several requirements.” In the past, this definition really applied to data on IoCs, reputation lists (e.g., lists of known bad IP addresses, web domains, or files), and details on TTPs. The intelligence part of DRP is intended to provide continuous monitoring of things like user credentials, sensitive data, SSL certificates, or mobile applications, looking for general weaknesses, hacker chatter, or malicious activities in these areas. For example, a fraudulent website could indicate a phishing campaign using the organization’s branding to scam users. The same applies for a malicious mobile app. Leaked credentials could be for sale on the dark web. Bad guys could be exchanging ideas for a targeted attack. You get the picture. It appears from the research that the proliferation of digital transformation initiatives is acting as a catalyst for threat intelligence programs. When asked why their organizations started a CTI program, 38% said “as a part of a broader digital risk protection effort in areas like brand reputation, executive protection, deep/dark web monitoring, etc.”


4 perils of being an IT pioneer

An enterprise-wide IT project is deemed successful only when a team member at the lowest level of the hierarchy adopts it. Ensuring adoption of any new solution is always a challenge. More so a solution based on a new technology. There’s pushback from end users because they find the idea of losing power or skills in the face of new technology disconcerting. For any IT leader, crossing this mental inertia is always among the toughest challenges. Moreover, IT leaders have seen many initiatives based on new technologies fail because there was no buy-in from the company’s top leadership. Even if users adopt the new technology, the initial learning curve is often steep, impacting productivity. Most organizations can’t afford or aren’t ready to accept the temporary revenue loss due to the disruption caused by the new technology. Therefore, business and IT leaders must have a clear understanding of the risk/reward principle when rolling out new tech. Buy-in from top management as a top-down mandate can make adoption of new technology easier.


Is Generative AI an Enterprise IT Security Black Hole?

Shutting the door on generative AI might not be a possibility for organizations, even for the sake of security. “This is the new gold rush in AI,” says Richard Searle, vice president of confidential computing at Fortanix. He cited news of venture capital looking into this space along with tech incumbents working on their own AI models. Such endeavors may make use of readily available resources to get into the AI race fast. “One of the important things about the way that systems like GPT-3 were trained is that they also use common crawl web technology,” Searle says. “There’s going to be an arms race around how data is collected and used for training.” That may also mean increased demand for security resources as the technology floods the landscape. “It seems like, as in all novel technologies, what’s happening is the technology is racing ahead of the regulatory oversight,” he says, “both in organizations and the governmental level.”



Quote for the day:

"Our chief want is someone who will inspire us to be what we know we could be." -- Ralph Waldo Emerson