Daily Tech Digest - March 27, 2023

Primary Reasons That Deteriorate Digital Transformation Initiatives

A flawed organizational culture can readily derail transformation initiatives. Driving the right cultural change is as essential as the digital transformation itself, so businesses must embrace the cultural shifts that digital transformation demands. IT initiatives require changes to products and internal processes, along with better engagement with customers. According to a recent TEKsystems report, “State of Digital Transformation,” 46% of businesses believe digital transformation enhances customer experience and engagement. Involving teams from various departments is an excellent way to work toward these outcomes more coherently; the absence of a collaborative culture across the enterprise is a significant reason digital transformations fail. Establishing a change management process is recommended to bring about the needed cultural change. Such a process can help identify people who actively resist change and, through adequate training and education, bring them around to the new culture quickly.


Why data leaders struggle to produce strategic results

The top impediment? Skills and staff shortages. One in six (17%) survey respondents said talent was their biggest issue, while 39% listed it among their top three. And the tight talent pool isn’t helping, Medeiros says. “CDAOs must have a talent strategy that doesn’t count on hiring data and analytics talent ready-made.” To counter this, CDAOs need to build a robust talent management strategy that includes education, training, and coaching for data-driven culture and data literacy, Medeiros says. That strategy must apply not only to the core data and analytics team but also to the broader business and technology communities in the organization. ... Strategic missteps in realizing data goals may signal an organizational issue at the C-level, with company leaders recognizing the importance of data and analytics but falling short on making the strategic changes and investments necessary for success. According to a 2022 study from Alation and Wakefield Research, 71% of data leaders said they were “less than very confident” that their company’s leadership sees a link between investing in data and analytics and staying ahead of the competition.


A Roadmap For Transitioning Into Cybersecurity

The best way to discover information is to search for specific points related to the categories above. For example, if I'm looking at SQL injection vulnerabilities, I would specifically input that into Google and try to learn as much as I can about SQL injection. I don't recommend relying on just one resource to learn everything. This is where you need to venture out on your own and do some research. I can only provide examples of what good material looks like. In the beginning, your approach will likely be more theoretical, but I firmly believe that the most effective way to learn is through practical experience. Therefore, you should aim to engage in hands-on activities as much as possible. ... Writing blog posts helps you solidify your understanding of a topic, improve your communication skills, and build an online portfolio that showcases your expertise. Opinion: Producing blog posts is an excellent way to engage with the community, share your knowledge, and give back. Plus, it helps you establish a personal brand and network with like-minded professionals.


What Are Microservices Design Patterns?

Decomposition patterns are used to break down large applications into smaller services. You can break down the program based on business capabilities, transactions, or sub-domains. If you want to break it down by business capabilities, you first have to evaluate the nature of the enterprise. As an example, a tech company could have capabilities like sales and accounting, and each of these capabilities can be considered a service. Decomposing an application by business capabilities may be challenging because of God classes. To solve this problem, you can break down the app using sub-domains. ... One observability pattern to discuss is log aggregation. This pattern enables clients to use a centralized logging service to aggregate logs from every service instance. Users can also set alerts for specific text that appears in the logs. This system is essential since requests often span several service instances. Another observability pattern is distributed tracing. This is essential since requests in a microservice architecture cut across different services, which makes it hard to trace requests end to end when hunting for the root causes of certain issues.
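A minimal Python sketch of the log-aggregation pattern described above. The `LogAggregator` class, its method names, and the service names are illustrative inventions, not part of any particular logging library:

```python
from collections import defaultdict

class LogAggregator:
    """Central store that collects logs from every service instance
    and fires alerts when a watched pattern appears in a log line."""

    def __init__(self):
        self.logs = defaultdict(list)   # service name -> collected records
        self.alerts = []                # (pattern, matching record) pairs
        self.watch_patterns = []

    def watch(self, pattern):
        self.watch_patterns.append(pattern)

    def ingest(self, service, instance, line):
        record = f"[{service}/{instance}] {line}"
        self.logs[service].append(record)
        for pattern in self.watch_patterns:
            if pattern in line:
                self.alerts.append((pattern, record))

agg = LogAggregator()
agg.watch("OutOfMemoryError")
agg.ingest("billing", "inst-1", "request handled in 12ms")
agg.ingest("billing", "inst-2", "OutOfMemoryError while rendering invoice")
print(agg.alerts)
```

In production this role is typically played by a dedicated stack such as Fluentd or ELK rather than hand-rolled code; the point is only that every instance ships logs to one place, where alerts can be matched.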


A New Field of Computing Powered by Human Brain Cells: “Organoid Intelligence”

It might take decades before organoid intelligence can power a system as smart as a mouse, Hartung said. But by scaling up production of brain organoids and training them with artificial intelligence, he foresees a future where biocomputers support superior computing speed, processing power, data efficiency, and storage capabilities. ... Organoid intelligence could also revolutionize drug testing research for neurodevelopmental disorders and neurodegeneration, said Lena Smirnova, a Johns Hopkins assistant professor of environmental health and engineering who co-leads the investigations. “We want to compare brain organoids from typically developed donors versus brain organoids from donors with autism,” Smirnova said. “The tools we are developing towards biological computing are the same tools that will allow us to understand changes in neuronal networks specific for autism, without having to use animals or to access patients, so we can understand the underlying mechanisms of why patients have these cognition issues and impairments.”


'Critical gap': How can companies tackle the cybersecurity talent shortage?

The demand for cybersecurity is on the rise, with no signs of it slowing down anytime soon. The cybersecurity talent shortage is a challenge, but that doesn't mean it has to be a problem, and companies today are taking critical steps to bridge the gap in innovative ways. It is imperative to deploy skilled data security professionals who can focus on critical thinking and innovation while automated bots take over the tedious, repetitive tasks. With this, companies can predict and stay ahead of even the most sophisticated cyber-attacks without having to hire more staff to counter them. ... Cybersecurity is critical to the economy and across industries. This field is a good fit for professionals looking to solve complex problems and navigate the different aspects of client requirements. The first step to building a career in data security is entering the tech workforce. Pursuing an associate degree, bachelor’s degree, or online cybersecurity degree should create a smooth gateway into the sector.


The Economics, Value and Service of Testing

Clearly, writing tests is additional to getting the code correct from the start, right? If we could somehow guarantee getting the code correct from the start, you could argue we wouldn’t need the written tests. But, then again, if we somehow had that kind of guarantee, performing tests wouldn’t matter much either. The trick is that all testing is a response to imperfection. We can’t get the code “correct” the first time. And even if we could, we wouldn’t actually know we did until that code was delivered to users for feedback. And even then we have to allow for the idea that notions of quality can be not just objective, but subjective. ... When people make arguments against written tests, they are making (in part) an economic argument. But so are those who make the case for written tests. When framed this way, people can have fruitful discussions about what is and isn’t economical, backed up by judgment. Judgment, in this context, is all about assigning value. Economists see judgment as the ability to determine some payoff, reward, or profit; or, to use the term I used earlier, a utility.


Operation vs. innovation: 3 tips for a win-win

One of the biggest innovation inhibitors is the organizational silo. When IT and business teams don’t collaborate or communicate, IT spins its wheels on projects that aren’t truly aligned with business goals. When IT teams are simply trying to “keep the lights on,” a lack of alignment between business leaders and IT (who may not fully understand how projects support business objectives) is a recipe for failure. On the technology side, innovations like low-code and no-code applications are helping to bridge the technical and tactical gap between business and IT teams. With no-code solutions, business users can build apps and manage and change internal workflows and tasks without tapping into IT resources. IT governance and guardrails over these solutions are important, but the solutions free up time for IT and software teams to work on higher-level innovation. ... Internal IT teams are often equipped with and experienced in maintaining operational excellence. Depending on the company, these internal teams may also excel at building new solutions and applications from the ground up.


Cloud Skills Gap a Challenge for Financial Institutions

“While the cloud seems like a very simple technology, that’s not the case,” he says. “Not knowing the cloud default configurations and countermeasures that should be taken against it might keep your application wide open.” Siksik adds that changes in the cloud also happen quite frequently, with less control over those changes than a bank normally has. “The cloud is open to many developers and DevOps, which could push a change without the proper change process, as things are more dynamic,” he explains. “This mindset is new to banks, where normally you will have strict and long change processes.” James McQuiggan, security awareness advocate at KnowBe4, explains that cloud architects design and oversee the bank's cloud infrastructure implementation and need experience in cloud computing platforms and knowledge of network architecture, security, and compliance. “The security specialists are to ensure the bank's cloud environment is secure and compliant with any applicable regulatory requirements,” he adds.


The era of passive cybersecurity awareness training is over

Justifying the need for cybersecurity investment to the executive team may be challenging for tech leaders. Compared to other business functions, the return from investing in IT security can be less apparent to executives. However, the importance of investing in a strong security posture becomes more evident when compared to the damage from data breaches and ransomware attacks. By highlighting savings in terms of improved quality of execution of cybersecurity policies and improved IT productivity through automation, it becomes easier to articulate the value of cybersecurity initiatives to the executive team. Modern social engineering attacks often use a combination of communication channels such as email, phone calls, SMS, and messengers. With recent thefts of terabytes of data, attackers are increasingly using this information to personalize their messaging and pose as trusted organizations. In this context, organizations can no longer rely on a passive approach to cybersecurity awareness training.



Quote for the day:

"Leadership is being the first egg in the omelet." -- Jarod Kintz

Daily Tech Digest - March 26, 2023

What Is Decentralized Identity?

Decentralized identities are not hosted on centralized servers by big entities such as Google or Meta Platforms (formerly Facebook). Instead, they are often hosted on decentralized file-sharing platforms, such as the InterPlanetary File System (IPFS). These open-source protocols store data on decentralized networks that are difficult to shut down and give users ownership over their online data. In addition, decentralized identities only share information with other parties when and if the user chooses. This means that, unlike centralized identities, personal data cannot be stored or shared without the user's knowledge or consent. According to the Ethereum Foundation, decentralized identities can be used for many things, such as a universal login to reduce the need for separate usernames and passwords, as a way to bypass know-your-customer (KYC) measures, and to create online communities that are free of bots and fake accounts.


Why we need to care about responsible AI in the age of the algorithm

The rapid pace of AI development does not appear to be slowing down. Breakthroughs come fast – quickly outpacing the speed of regulation. In the past year alone, we have seen a range of developments, from deep learning models that generate images from text, to large language models capable of answering any question you can think of. Although the progress is impressive, keeping pace with the potential harms of each new breakthrough can pose a relentless challenge. The trouble is that many companies cannot even see that they have a problem to begin with, according to a report released by MIT Sloan Management Review and Boston Consulting Group. ... Responsible AI is more than a check box exercise or the development of an add-on feature. Organizations will need to make substantial structural changes in anticipation of AI implementation to ensure that their automated systems operate within legal, internal and ethical boundaries.


Uncovering new opportunities with edge AI

Edge AI and edge ML present unique and complex challenges that require the careful orchestration and involvement of many stakeholders with a wide range of expertise from systems integration, design, operations and logistics to embedded, data, IT and ML engineering. Edge AI implies that algorithms must run in some kind of purpose-specific hardware ranging from gateways or on-prem servers on the high end to energy-harvesting sensors and MCUs on the low end. Ensuring the success of such products and applications requires that data and ML teams work closely with product and hardware teams to understand and consider each other’s needs, constraints and requirements. While the challenges of building a bespoke edge AI solution aren’t insurmountable, platforms for edge AI algorithm development exist that can help bridge the gap between the necessary teams, ensure higher levels of success in a shorter period of time, and validate where further investment should be made.


IT Automation vs. Orchestration: What's the Difference?

IT automation refers to the use of technology to automate tasks and processes that would otherwise be done by someone on your team. This includes everything from communication to security tasks. Today, the appeal of this automation is greater than it has ever been in the corporate world. One study shows that more than 30% of organizations have five or more departments that automate tasks. ... Orchestration is about coordinating tasks and processes into workflows: it automates and manages the end-to-end flow of IT services, from initial request to final delivery. This can include everything from provisioning new servers to deploying applications and monitoring performance. The benefits of orchestration are similar to those of IT automation, but they extend beyond simple task execution. Orchestration enables organizations to coordinate and manage complex workflows across multiple systems, tools, and teams, which improves efficiency and reduces the chance of errors on a larger scale.
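To make the distinction concrete, here is a toy Python sketch of orchestration as described above: individually automated tasks are coordinated into one end-to-end workflow that respects their dependencies. The step names and the `orchestrate` helper are invented for illustration:

```python
def orchestrate(steps):
    """steps: {name: (dependencies, action)}. Runs each step after its
    dependencies have finished; returns the execution order."""
    done, order = set(), []

    def run(name):
        if name in done:
            return
        deps, action = steps[name]
        for dep in deps:
            run(dep)              # prerequisites complete first
        action()                  # the automated task itself
        done.add(name)
        order.append(name)

    for name in steps:
        run(name)
    return order

log = []
steps = {
    "provision": ([], lambda: log.append("server up")),
    "build":     ([], lambda: log.append("image built")),
    "deploy":    (["provision", "build"], lambda: log.append("app deployed")),
}
order = orchestrate(steps)
print(order)
```

Each lambda stands in for a single automated task; the orchestration layer is the sequencing and coordination wrapped around them, which in real tooling also covers error handling and cross-system state.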


Critical flaw in AI testing framework MLflow can lead to server and data compromise

MLflow is written in Python and is designed to automate machine-learning workflows. It has multiple components that allow users to deploy models from various ML libraries; manage their lifecycle, including model versioning, stage transitions, and annotations; track experiments to record and compare parameters and results; and even package ML code in a reproducible form to share with other data scientists. MLflow can be controlled through a REST API and a command-line interface. All these capabilities make the framework a valuable tool for any organization experimenting with machine learning. Scans using the Shodan search engine reinforce this, showing a steady increase in publicly exposed MLflow instances over the past two years, with the current count sitting at over 800. However, it's safe to assume that many more MLflow deployments exist inside internal networks and could be reachable by attackers who gain access to those networks.


Can Security Keep Up With ChatGPT Evolutions?

As with most technological developments, there are two sides to the coin. ChatGPT may present businesses with a never-ending pool of opportunities, but the same resource is available to those with more malicious intent. While ChatGPT itself cannot be directly targeted by cybersecurity threats like malware, hacking or phishing, it can be exploited to help criminals infiltrate systems more effectively. The platform’s developers have taken steps to reduce this as much as possible, but it takes just one attacker wording their question in the right way to get the desired response. The best example here is phishing. Asking the platform to generate a phishing template directly will result in the chatbot refusing. However, if someone with malicious intent rewords their question ever so slightly, the AI won’t detect any issue. For example, if you ask it to create a ‘gophish’ template, it will comply. The advanced capabilities of ChatGPT throw up several red flags for security teams, but it isn’t time to hit the doomsday button just yet.


Creating Strong ROI for Multi-Cloud Solutions Through Compliance & Security

When it comes to budget, storing data in the cloud eliminates the need to pay upfront for physical hardware and services. Predictable subscription fees without capital expenses mean organizations can lower their overall costs and invest the savings in other areas that drive innovation. Take, for example, a healthcare organization that moves its critical on-premises infrastructure into the cloud. In doing so, the organization immediately saves enough on its capital expense budget to add much-needed additional healthcare staff ready to serve patients. With regard to gaining intelligence, the data that can be gathered in a single- or multi-cloud environment is infinitely easier to analyze for actionable insights that would otherwise be unavailable. This level of data-driven analytics and intelligence is powerful, as it can be directly applied to customer service and operational performance improvements. Multi-cloud solutions also make scaling up and down to meet demand extremely simple and efficient.


6 Myths About Leadership That May Be Holding You Back

While it is true that leaders often hold positions of authority and are responsible for making important decisions, leadership is not limited to those in formal leadership positions. Leadership can be demonstrated by anyone who takes the initiative, inspires others and creates positive change, regardless of their official role or title. Some of the most influential leaders do not hold formal leadership positions but still manage to influence others and make a difference. ... True leaders often face uncertain and unpredictable situations and may not always have all the answers. In these situations, it's natural for a leader to feel some degree of uncertainty or doubt. The key difference between a leader and someone who merely appears confident is that a leader can acknowledge their limitations and vulnerabilities while still maintaining their focus and determination. They are not afraid to ask for help or admit when they don't know something. Leaders who are open and honest about their struggles can inspire greater trust and respect from their team.


API Gateways: The Doorway to a Microservices World

While microservices are beneficial, they also create significant new challenges. These challenges include:

- Increased complexity: A microservices architecture introduces additional complexity, as each service needs to communicate with other services through well-defined interfaces. This can result in increased development and management overhead, as well as challenges with testing and debugging.

- Distributed systems management: A microservices architecture is a distributed system, which means it can be challenging to monitor and manage individual services, especially when there are multiple instances of the same service running in different environments.

- Data consistency: Maintaining data consistency across multiple services can be challenging, as changes to one service can impact other services that depend on that data. This requires careful planning and management to ensure that data remains consistent and up-to-date across the system.
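The API gateway pattern named in the article's title addresses part of this complexity by giving clients a single entry point that routes each request to the owning service. A toy Python sketch, with all routes and service names invented for illustration:

```python
class ApiGateway:
    """Single public entry point that forwards requests to microservices."""

    def __init__(self):
        self.routes = {}          # path prefix -> backing service handler

    def register(self, prefix, handler):
        self.routes[prefix] = handler

    def handle(self, path):
        # Longest-prefix match so "/orders/123" reaches the orders service.
        for prefix in sorted(self.routes, key=len, reverse=True):
            if path.startswith(prefix):
                return self.routes[prefix](path)
        return "404: no service owns this route"

gw = ApiGateway()
gw.register("/orders", lambda p: f"orders-service handled {p}")
gw.register("/users",  lambda p: f"users-service handled {p}")
result = gw.handle("/orders/123")
print(result)
```

Real gateways (e.g. Kong or Amazon API Gateway) layer authentication, rate limiting, and protocol translation on top of this basic routing idea, so clients never need to know how the system is decomposed.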


An open data lakehouse will maintain and grow the value of your data

So here’s how to take advantage of all the data flowing through your organization’s digital transformation pipelines and bring together open-source systems and the cloud to maximize the utility of the data. Use an open data lakehouse designed to meld the best of data warehouses with the best of data lakes. That means storage for any data type, suitable for both data analytics and ML workloads, cost-effective, fast, flexible and with a governance or management layer that provides the reliability, consistency and security needed for enterprise operations. Keeping it “open” (using open-source technologies and standards like PrestoDB, Parquet and Apache Hudi) not only saves money on license costs, but also gives your organization the reassurance that the technology that backs these critical systems is being continuously developed by companies that use it in production and at scale. And as technology advances, so will your infrastructure. Remember, you’ve already invested mightily in data transformation initiatives to remain competitively nimble and power your long-term success.



Quote for the day:

"Leadership matters more in times of uncertainty." -- Wayde Goodall

Daily Tech Digest - March 25, 2023

The Speed Layer Design Pattern for Analytics

In a modern data architecture, speed layers combine batch and real-time processing methods to handle large and fast-moving data sets. The speed layer fills the gap between traditional data warehouses or lakes and streaming tools. It is designed to handle high-velocity data streams that are generated continuously and require immediate processing, within the context of integrated historical data, to extract insights and drive real-time decision-making. A “speed layer” is an architectural pattern that combines real-time processing with the contextual and historical data of a data warehouse or lake. A speed layer architecture acts as a bridge between data in motion and data at rest, providing a unified view of both real-time and historical data. ... The speed layer must provide a way to query and analyze real-time data in real time, typically using new breakthroughs in query acceleration such as vectorization. In a vectorized query engine, data is stored in fixed-size blocks called vectors, and query operations are performed on these vectors in parallel, rather than on individual data elements.
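A stripped-down Python illustration of the block-at-a-time idea behind vectorized execution: the filter operates once per fixed-size vector of values rather than once per row. Real engines work on columnar buffers with SIMD instructions, and the vector size of 4 here is artificially small:

```python
VECTOR_SIZE = 4

def vectors(values, size=VECTOR_SIZE):
    """Slice a column of values into fixed-size blocks ("vectors")."""
    for i in range(0, len(values), size):
        yield values[i:i + size]

def vectorized_filter(values, predicate):
    out = []
    for block in vectors(values):      # one dispatch per block, not per row
        out.extend(v for v in block if predicate(v))
    return out

prices = [10, 55, 23, 90, 71, 8, 64, 33, 99]
high = vectorized_filter(prices, lambda v: v > 50)
print(high)
```

The payoff in a real engine is that per-block dispatch amortizes interpretation overhead and lets the inner loop over each vector run in parallel on modern CPUs.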


7 steps for implementing security automation in your IT architecture

Security automation is often driven by the need to align with various industry regulations, best practices, and guidelines, as well as internal company policies and procedures. Those requirements, combined with constraints on the human resources available to meet them, make automation in this space critical to success. ... NIST defines a vulnerability as a "weakness in an information system, system security procedures, internal controls, or implementation that could be exploited or triggered by a threat source." Vulnerability scanning is the process of leveraging automated tools to uncover potential security issues within a given system, product, application, or network. ... Compliance scanning is the process of leveraging automated tools to uncover misalignment with internal and external compliance requirements. The purpose of compliance scanning is to identify and highlight gaps between legal requirements, industry guidance, and internal policies on the one hand and the given entity's actual implementation on the other.
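At its core, the compliance scanning described above reduces to comparing an observed configuration against a policy baseline and reporting the gaps. A minimal Python sketch; the policy keys and values are invented for illustration:

```python
def compliance_scan(baseline, actual):
    """Return a list of (setting, expected, found) gaps between the
    policy baseline and the system's actual configuration."""
    gaps = []
    for setting, expected in baseline.items():
        found = actual.get(setting, "<missing>")
        if found != expected:
            gaps.append((setting, expected, found))
    return gaps

baseline = {"password_min_length": 12, "mfa_required": True, "tls_version": "1.2"}
actual   = {"password_min_length": 8,  "mfa_required": True}

gaps = compliance_scan(baseline, actual)
for setting, expected, found in gaps:
    print(f"GAP: {setting}: expected {expected}, found {found}")
```

Production tools (e.g. OpenSCAP) apply the same compare-against-baseline idea using standardized policy content rather than a hand-written dictionary.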


What an IT career will look like in 5 years

“We will see AI usage increase in software development and testing functions shifting the role of these employees” toward higher-level, personal-touch tasks, Huffman says. ... “An augmented workforce experience — across recruiting, productivity, learning, and more — will certainly be something to watch, as the level of trust that we will likely put in our AI colleagues may be surprising,” Bechtel says. “High confidence that AI is delivering the right analytics and insights will be paramount. To build trust, AI algorithms must be visible, auditable, and explainable, and workers must be involved in AI design and output. Organizations are realizing that competitive gains will best be achieved when there is trust in this technology.” Moreover, increased reliance on AI for IT support and development work such as entry-level coding, as well as cloud and system administration will put pressure on IT pros to up their skills in more challenging areas, says Michael Gibbs, CEO and founder of Go Cloud Careers.


Use zero-trust data management to better protect backups

Trust nothing, verify everything. "The principle is to never assume any access request is trustworthy. Never trust, always verify," said Johnny Yu, a research manager at IDC. "Applying [that principle] to data management would mean treating every request to migrate, delete or overwrite data as untrustworthy by default. Applying zero-trust in data management means having practices or technology in place that verify these requests are genuine and authorized before carrying out the request." Data backup software can potentially be accessed by bad actors looking to delete backup data or alter data retention settings. Zero-trust practices use multifactor authentication or role-based access control to help prevent stolen admin credentials or rogue employees from exploiting data backup software. "Zero-trust strategies remove the implicit trust assumptions of castle-and-moat architectures -- meaning that anyone inside the moat is trusted," said Jack Poller, a senior analyst at Enterprise Strategy Group. 
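A toy Python sketch of the default-deny verification described above: a destructive backup request succeeds only when the caller's role permits the action and the request carries a verified MFA factor. The role names and request shape are invented for illustration:

```python
# Every request is untrusted by default; verify role and MFA each time.
ROLE_PERMISSIONS = {
    "backup-admin": {"migrate", "delete", "overwrite"},
    "operator":     {"migrate"},
}

def authorize(request):
    """Never trust, always verify: deny unless every check passes."""
    allowed_actions = ROLE_PERMISSIONS.get(request.get("role"), set())
    role_ok = request.get("action") in allowed_actions
    mfa_ok = request.get("mfa_verified", False)
    return role_ok and mfa_ok

# Stolen operator credentials cannot delete backups...
stolen_creds = authorize({"role": "operator", "action": "delete", "mfa_verified": True})
# ...and even an admin is denied without a fresh MFA proof.
no_mfa = authorize({"role": "backup-admin", "action": "delete"})
legit = authorize({"role": "backup-admin", "action": "delete", "mfa_verified": True})
print(stolen_creds, no_mfa, legit)
```

The design choice mirrors the quoted principle: there is no "inside the moat" shortcut, so a request missing any credential fails closed rather than open.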


Improving CI/CD Pipelines Through Observability

Overall, observability in a CI pipeline is essential for maintaining the reliability and efficiency of the pipeline, and it allows developers to quickly identify and resolve any issues that may arise. It can be achieved by using a combination of monitoring, logging, and tracing tools, which provide real-time visibility into the pipeline and assist with troubleshooting and root cause analysis. In addition to the above, you can also use observability tools such as Application Performance Management (APM) solutions like New Relic or Datadog. APMs provide end-to-end visibility of the entire application and infrastructure, which in turn gives the ability to identify bottlenecks, performance issues, and errors in the pipeline. It is important to note that observability should be integrated throughout the pipeline, from development to production, to ensure that any issues can be identified and resolved quickly and effectively.
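As a minimal example of the monitoring-and-logging side of this, the Python sketch below times each pipeline stage and emits one structured JSON record per stage, making slow or failing stages immediately visible. The stage names and record format are invented for illustration:

```python
import json
import time

def run_stage(name, fn, records):
    """Run one pipeline stage, recording its status and duration."""
    start = time.perf_counter()
    try:
        fn()
        status = "ok"
    except Exception as exc:
        status = f"failed: {exc}"
    records.append({
        "stage": name,
        "status": status,
        "duration_ms": round((time.perf_counter() - start) * 1000, 2),
    })

def flaky_test():
    raise RuntimeError("2 assertions failed")

records = []
run_stage("build", lambda: time.sleep(0.01), records)
run_stage("test", flaky_test, records)
for record in records:
    print(json.dumps(record))
```

Shipping these records to a log aggregator or APM gives the real-time pipeline visibility the excerpt describes, without the stages themselves needing to know anything about the observability backend.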


Diffusion models can be contaminated with backdoors, study finds

Chen and his co-authors found that they could easily implant a backdoor in a pre-trained diffusion model with a bit of fine-tuning. With many pre-trained diffusion models available in online ML hubs, putting BadDiffusion to work is both practical and cost-effective. “In some cases, the fine-tuning attack can be successful by training 10 epochs on downstream tasks, which can be accomplished by a single GPU,” said Chen. “The attacker only needs to access a pre-trained model (publicly released checkpoint) and does not need access to the pre-training data.” Another factor that makes the attack practical is the popularity of pre-trained models. To cut costs, many developers prefer to use pre-trained diffusion models instead of training their own from scratch. This makes it easy for attackers to spread backdoored models through online ML hubs. “If the attacker uploads this model to the public, the users won’t be able to tell if a model has backdoors or not by simply inspecting their image generation quality,” said Chen.


What is generative AI and its use cases?

Anticipating the AI endgame is an exercise with no end. Imagine a world in which generative technologies link with other nascent innovations, quantum computing, for example. The result is a platform capable of collating and presenting the best collective ideas from human history, plus input from synthetic sources with infinite IQs, in any discipline and for any purpose, in a split second. The results will be presented with recommended action points; but perhaps further down the line the technology will just take care of these while you make a cup of tea. There are several hurdles to leap before this vision becomes reality; for example, dealing with bias and the role of contested opinions, answering the question of whether we really want this, plus, of course, ensuring the safety of humankind. But why not? In the meantime, Rachel Roumeliotis, VP of data and AI at O’Reilly, predicts a host of near-term advantages for large language models (LLMs). “Right now, we are seeing advancement in LLMs outpace how we can use it, as is sometimes the case with medicine, where we find something that works but don’t necessarily know exactly why.


Iowa to Enact New Data Privacy Law: The Outlook on State and Federal Legislation

The emergence of more data privacy legislation is likely to continue. “It brings the US closer in line with trends we are seeing throughout the world as we have over 160 countries with data protection laws today,” says Dominique Shelton Leipzig, partner, cybersecurity and data privacy at global law firm Mayer Brown. These laws have notable impacts on the companies subject to them and consumers. “For companies, comprehensive privacy laws like these enshrine the existing practices of the privacy profession into law. These laws clarify that our minimum standards for privacy are not just best practices, but legally enforceable by state attorneys general,” says Zweifel-Keegan. While these laws shine a light on data privacy, many critics argue against the “patchwork” approach of state-by-state legislation. “The continuation of the current state-by-state trend means companies are increasingly complying with a complex and evolving patchwork of regulatory requirements. 


Tesla Model 3 Hacked in Less Than 2 Minutes at Pwn2Own Contest

One of the exploits involved executing what is known as a time-of-check-to-time-of-use (TOCTTOU) attack on Tesla's Gateway energy management system. They showed how they could then — among other things — open the front trunk or door of a Tesla Model 3 while the car was in motion. The less than two-minute attack fetched the researchers a new Tesla Model 3 and a cash reward of $100,000. The Tesla vulnerabilities were among a total of 22 zero-day vulnerabilities that researchers from 10 countries uncovered during the first two days of the three-day Pwn2Own contest this week. In the second hack, Synacktiv researchers exploited a heap overflow vulnerability and an out-of-bounds write error in a Bluetooth chipset to break into Tesla's infotainment system and, from there, gain root access to other subsystems. The exploit garnered the researchers an even bigger $250,000 bounty and Pwn2Own's first ever Tier 2 award — a designation the contest organizer reserves for particularly impactful vulnerabilities and exploits.
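For readers unfamiliar with the TOCTTOU class of bug exploited here, the toy Python sketch below shows the general shape of the flaw: a value is validated at one moment and used at a later one, so anything that changes it in between defeats the check. The vehicle scenario and all names are invented and bear no relation to Tesla's actual code:

```python
class VehicleState:
    def __init__(self):
        self.command_target = "headlights"

def vulnerable_dispatch(state, mutate_between):
    # Time of check: the target looks harmless at this instant.
    if state.command_target in {"headlights", "wipers"}:
        mutate_between(state)          # an attacker races in here
        # Time of use: the target may no longer be what was checked.
        return f"executed command on {state.command_target}"
    return "rejected"

state = VehicleState()
result = vulnerable_dispatch(
    state, lambda s: setattr(s, "command_target", "front trunk")
)
print(result)  # the check passed for "headlights", yet the trunk command ran
```

The standard fix is to make the check and the use atomic, for example by copying the value once, validating the copy, and acting only on that same copy.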


Leveraging the Power of Digital Twins in Medicine and Business

The digital twins that my team and I develop are high-fidelity, patient-specific virtual models of an individual’s vasculature. This digital representation allows us to use predictive physics-based simulations to assess potential responses to different physiological states or interventions. Clearly, it’s not feasible to try out five different stents in a specific patient surgically. Using a digital twin, however, doctors can test how various interventions would influence that patient and see the outcome before they ever step into the operating room. Patient-specific digital twins allow the doctors to interact with a digital replica of that patient’s coronary anatomy and fine-tune their approach before the intervention itself. The digital twin abstraction allows doctors to assess a wider range of potential scenarios and be more informed in their surgical planning process. Confirming accuracy is a critical component. In validating these models for different use cases, observational data must be measured and used to check the model predictions. 



Quote for the day:

"You don't lead by pointing and telling people some place to go. You lead by going to that place and making a case." -- Ken Kesey

Daily Tech Digest - March 24, 2023

Why CFOs Need to Evaluate and Prioritize Cybersecurity Initiatives

“CFOs should be aware of the increasing risks of cyber threats, including the potential impact on financial performance, reputation, and customer trust,” said Gregory Hatcher, a former U.S. special forces engineer and current founder of cybersecurity consulting firm White Knight Labs. “This includes both external cyber threats and the risk of insider threats posed by disgruntled employees or those with privileged access.” ... “The most commonly overlooked aspects of cybersecurity when transitioning to cloud operation and storage are the cloud provider’s security protocols and compliance requirements,” Hatcher said. He also mentioned the need for employee training on how to securely access and handle cloud data, as well as the potential risks of third-party integrations. Hatcher still recommends executives transfer data sets to the cloud, but with cybersecurity as a key consideration throughout the process. ... “However, it’s essential to choose a reliable cloud provider and ensure compliance with data protection regulations. Keeping data in-house can be risky due to limited resources and potential vulnerabilities.”


Top ways attackers are targeting your endpoints

Vulnerabilities are made possible by bugs, which are errors in source code that cause a program to function unexpectedly, in a way that can be exploited by attackers. By themselves, bugs are not malicious, but they are gateways for threat actors to infiltrate organizations. Vulnerabilities allow threat actors to access systems without needing to perform credential harvesting attacks, and may open systems to further exploitation. Once inside a system, attackers can introduce malware and tools to reach additional assets and credentials. For attackers, vulnerability exploitation is a process of escalation, whether through privileges on a device or by pivoting from one endpoint to other assets. Every endpoint hardened against exploitation of vulnerabilities is a stumbling block for a threat actor trying to propagate malware in a corporate IT environment. There are routine tasks and maintenance tools that allow organizations to prevent these vulnerabilities from being exploited by attackers.


Serverless WebAssembly for Browser Developers

A serverless function is designed to strip away as much of that “server-ness” as possible. Instead, the developer who writes a serverless function should be able to focus on just one thing: Respond to an HTTP request. There’s no networking, no SSL configuration, and no request thread pool management — all of that is handled by the platform. A serverless function starts up, answers one request and then shuts down. This compact design not only reduces the amount of code we have to write, but it also reduces the operational complexity of running our serverless functions. We don’t have to keep our HTTP or SSL libraries up to date, because we don’t manage those things directly. The platform does. Everything from error handling to upgrades should be — and, in fact, is — easier. ... As enticing as the programming paradigm is, though, the early iterations of serverless functions suffered from several drawbacks. They were slow to start. The experience of packaging a serverless function and deploying it was cumbersome.
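The "answer one request, then shut down" contract can be sketched as a single function. The handler signature below is hypothetical rather than any specific platform's API; the point is that networking, TLS, and thread management all live outside the function.

```python
import json

def handler(request: dict) -> dict:
    """One request in, one response out. The platform owns the
    HTTP server, TLS termination, and concurrency; the function
    only maps a request to a response, then exits."""
    name = request.get("query", {}).get("name", "world")
    return {
        "status": 200,
        "headers": {"content-type": "application/json"},
        "body": json.dumps({"greeting": f"Hello, {name}!"}),
    }
```

Because the function never touches sockets or certificate handling, those libraries are the platform's to patch and upgrade, which is exactly the operational simplification the paragraph describes.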


How to embrace generative AI in your enterprise

Alongside the positive media coverage, the limitations of GPT models have been widely documented. This is partly due to their training on vast amounts of unverified internet data. Generative AI tools can potentially provide users with misleading or incorrect information, as well as biased and even harmful content. In fact, OpenAI makes users aware of all these limitations on the ChatGPT website. Copyright and legal issues have also been raised. And even the introduction of GPT-4, with more advanced algorithms and larger databases enabling a much better understanding of nuance and context, does not eliminate these flaws, as OpenAI CEO Sam Altman wrote on Twitter. Any enterprise looking to implement generative AI tools needs strategies in place to mitigate these limitations. The key to managing them is human supervision and control. Deploying a team of conversational designers/moderators to oversee what knowledge is searched and which GPT capabilities are used gives control over what information is passed on to users.


Will Cybersecurity Pros Feel Pressure as Hiring Cools?

“Regardless of the level of demand, though, my approach to hiring is the same,” he says. “I’m usually looking for the right mix of 'security-plus' people.” That means the right mix of core cybersecurity competencies, as well as some other experience in a related technical or compliance field. “It’s not enough to know just security,” he says. “We’re big on cybersecurity pros who aren’t afraid to go broad and get involved in the business aspects of their projects so they can relate to the teams they’ll be working with.” He says he recommends honing technical skills related to zero trust, cloud, and automation -- as well as soft skills like communication, project management, and leadership. “In many generalist security roles, people will be expected to cover a lot of ground and focusing on those soft skills can really set a candidate apart,” he says. Mika Aalto, co-founder and CEO at Hoxhunt, notes organizations are still hiring, but there is a lot more talent competing for the same jobs these days.


Exploring the Exciting World of Generative AI: The Future is Now

Generative AI has the potential to make a huge impact on the economy and society in the coming decade. AI-powered tools can automate mundane tasks, freeing up more time to focus on creative work, and can help us find new ways to solve problems, creating new jobs and opportunities. They can also be used to build new products and services tailored to the needs of customers, and to support more informed decisions by helping us better understand those customers and their needs. A World Economic Forum report predicted that by 2025, machines will eliminate 85 million jobs while also creating 97 million new employment roles. Shelly Palmer, a professor of advanced media at Syracuse University, says that jobs like middle managers, salespeople, writers and journalists, accountants and bookkeepers, and doctors who specialize in things like drug interactions are “doomed” when it comes to the possibility of AI being incorporated into their jobs.


Q&A: Univ. of Phoenix CIO says chatbots could threaten innovation

"Right now, it’s like a dark art — prompt engineering is closer to sorcery than engineering at this point. There are emerging best practices, but this is a problem anyways in having a lot of [unique] machine learning models out there. For example, we have a machine learning model that’s SMS-text for nurturing our prospects, but we also have a chatbot that’s for nurturing prospects. We’ve had to train both those models separately. "So [there needs to be] not only the prompting but more consistency in training and how you can train around intent consistently. There are going to have to be standards. Otherwise, it’s just going to be too messy. "It’s like having a bunch of children right now. You have to teach each of them the same lesson but at different times, and sometimes they don’t behave all that well. "That’s the other piece of it. That’s what scares me, too. I don’t know that it’s an existential threat yet — you know, like it’s the end of the world, apocalypse, Skynet is here thing. But it is going to really reshape our economy, knowledge work. It’s changing things faster than we can adapt to it."


New UK GDPR Draft Greatly Reduces Business Compliance Requirements

The Data Protection and Digital Information (No. 2) Bill would cut down on the types of records that UK businesses are required to keep. This could reduce the ability of data subjects to view, correct and request deletion of certain information; it would also likely make data breach reports less comprehensive and accurate, as businesses would not be required to keep as close a watch on what data they lost. The ICO, the regulator for data breaches and privacy violations, would also be subject to review of its procedures by a new board composed of members appointed by the secretary of state. This has raised the question of possible political interference in what is currently an independent body. This element could also be a sticking point for keeping the UK GDPR equivalent to its EU counterpart for international data transfer purposes, as independent regulation has proven to be one of the key points in adequacy decisions.


How to Navigate Strategic Change with Business Capabilities

Architects in the office of the CIO are often tasked to support senior management with decision-making to get transparency on business and IT transformation. Capability-based planning is a discipline that ensures the alignment of (IT) transformation to business strategy and provides a shared communication instrument aligning strategy, goals and business priorities to investments. Putting capabilities at the center of planning and executing business transformation helps the organization to focus on improving ‘what we do’ rather than jumping directly into the ‘how’ and specific solutions. In this way, capability-based planning helps to ensure we are not just doing things correctly but also focusing on ensuring that we are ‘doing the right things.’ Enterprise architecture practices are important in several stages of implementing capability-based planning. If you’re starting your journey or want to mature your practice, gain more knowledge from our eBook [Lankhorst et al., 2023]. As described in this eBook, our overall process for capability-based planning consists of 10 steps.


IT layoffs: 7 tips to develop resiliency

How did you get to where you are today? What stories have you created for yourself and the world? What skills have you gained? What kind of trust have you earned from people? Who would include you as someone who impacted them? Who had a major influence on your life and career? Many people mistakenly think they are indispensable: If we’re not there, a customer will be disappointed, a product release will be delayed, or a shipment delivery will be late. But the truth is, we are all dispensable. Come to terms with this fact and build your life and career around it. ... We all understand that technology changes rapidly (consider that just a few weeks ago, the world had never heard of ChatGPT). Use this downtime to take online courses on new topics and areas of interest – enroll in an art class, learn a musical instrument, or check out public speaking. There are many opportunities to venture into new areas that will expand your horizons for future work. When you add additional skills to your resume, you expand your thinking and possibilities. 



Quote for the day:

"Life is like a dogsled team. If you ain't the lead dog, the scenery never changes." -- Lewis Grizzard

Daily Tech Digest - March 23, 2023

10 cloud mistakes that can sink your business

It’s a common misconception that cloud migration always leads to immediate cost savings. “In reality, cloud migration is expensive, and not having a full and complete picture of all costs can sink a business,” warns Aref Matin, CTO at publishing firm John Wiley & Sons. Cloud migration often does lead to cost savings, but careful, detailed planning is essential. Still, as the cloud migration progresses, hidden costs will inevitably appear and multiply. “You must ensure at the start of the project that you have a full, holistic cloud budget,” Matin advises. Cloud costs appear in various forms. Sometimes they’re in plain sight, such as the cost of walking away from an existing data facility. Yet many expenses aren’t so obvious. ... A major challenge facing many larger enterprises is leveraging data spread across disparate systems. “Ensuring that data is accessible and secure across multiple environments, on-premises as well as on applications running in the cloud, is an increasing headache,” says Darlene Williams, CIO of software development firm Rocket Software.


Developed countries lag emerging markets in cybersecurity readiness

The drastic difference in cybersecurity preparedness between developed and developing nations is likely because organizations in emerging markets started adopting digital technology more recently than their peers in developed markets. “That means many of these companies do not have legacy systems holding them back, making it relatively easier to deploy and integrate security solutions across their entire IT infrastructure,” the report said, adding that technology debt — the estimated cost or assumed impact of updating systems — continues to be a major driver of the readiness gap. The Cisco Cybersecurity Readiness Index categorizes companies in four stages of readiness — beginner, formative, progressive, and mature. ... Identity management was recognized as the most critical area of concern. Close to three in five respondents, or 58% of organizations, were either in the formative or beginner category for identity management. However, 95% were at least at some stage of deployment with an appropriate ID management application, the report said.


Observability will transform cloud security

Is this different from what you’re doing today for cloud security? Cloud security observability may not change the types or the amount of data you’re monitoring. Observability is about making better sense of that data. It’s much the same with cloud operations observability, which is more common. The monitoring data from the systems under management is mostly the same. What’s changed are the insights that can now be derived from that data, including detecting patterns and predicting future issues based on those patterns, even warning of problems that could emerge a year out. ... Cloud security observability looks at a combination of dozens of data streams for a hundred endpoints and finds patterns that could indicate an attack is likely to occur in the near or far future. If this seems like we are removing humans from the process of making calls based on observed, raw, and quickly calculated data, you’re right. We can still respond to tactical security issues, such as a specific server under attack, with alerts indicating, for example, that the attacking IP address should be blocked.
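Deriving insight from otherwise-unchanged monitoring data can be as simple as scoring each new point against its own recent baseline. A toy sketch of that idea follows (a rolling z-score check; real observability platforms use far richer models, and the names here are ours):

```python
from collections import deque
from statistics import mean, stdev

def make_detector(window: int = 20, threshold: float = 3.0):
    """Flag data points that deviate from the rolling baseline by
    more than `threshold` standard deviations. Same raw stream as
    before; the new insight is the pattern extracted from it."""
    history = deque(maxlen=window)  # recent baseline, fixed size

    def observe(value: float) -> bool:
        anomalous = False
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                anomalous = True
        history.append(value)
        return anomalous

    return observe
```

A steady metric stream passes through silently; a sudden spike relative to its own history is surfaced as a signal worth investigating.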


Operational Resilience: More than Disaster Recovery

Disaster recovery is fairly narrow in its definition and typically viewed in a small timeframe. Operational resilience is much broader, including aspects like the sort of governance you’ve put in place; how you manage operational risk management; your business continuity plans; and cyber, information, and third-party supplier risk management. In other words, disaster recovery plans are chiefly concerned with recovery. Operational resilience looks at the bigger picture: your entire ecosystem and what can be done to keep your business operational during disruptive events. ... Part of the issue is that cyber is still seen as special. The discussion always seems to conclude with the assumption that the security team or IT department is managing a particular risk, so no one else needs to worry about it. There is a need to demystify cybersecurity. It’s only with the proper business understanding and risk ownership that you can put proper resilience mechanisms in place.


Nvidia builds quantum-classical computing system with Israel’s Quantum Machines

The DGX Quantum deploys Nvidia’s Grace Hopper superchip and its technology platform for hybrid quantum-classical computers, coupling graphics processing units (GPUs) and quantum processing units (QPUs) in one system. It is supported by Quantum Machines’ flagship OPX universal quantum control system, designed to meet the demanding requirements of quantum control protocols, including precision, timing, complexity, and ultra-low latency, according to the Israeli startup. The combination allows “researchers to build extraordinarily powerful applications that combine quantum computing with state-of-the-art classical computing, enabling calibration, control, quantum error correction and hybrid algorithms,” Nvidia said in a statement. Tech giants like Google, Microsoft, IBM, and Intel are all racing to make quantum computing more accessible and build additional systems, while countries like China, the US, Germany, India, and Japan are also pouring millions into developing their own quantum abilities.


Leveraging Data Governance to Manage Diversity, Equity, and Inclusion (DEI) Data Risk

In organizations with a healthy data culture, the counterpart to compliance is data democratization. Democratization is the ability to make data accessible to the right people at the right time in compliance with all relevant legal, regulatory, and contractual obligations. Leaders delegate responsibility to stewards for driving data culture by democratizing data so that high-quality data is available to the enterprise in a compliant manner. Such democratized data enables frontline action by placing data into the hands of people who are solving business problems. Stewards democratize data by eliminating silos and moving past the inertia that develops around sensitive data sources. An essential aspect of democratization, therefore, is compliance. Stewards will not be able to democratize data without a clear ability to assess and manage risk associated with sensitive data. That said, it is critical that DEI advocates limit democratization of DEI data, especially at the outset of their project or program. 


The Future of Data Science Lies in Automation

Much data science work is done through machine learning (ML). Proper employment of ML can ease the predictive work that is most often the end goal for data science projects, at least in the business world. AutoML has been making the rounds as the next step in data science. Part of machine learning, outside of getting all the data ready for modeling, is picking the correct algorithm and fine-tuning (hyper)parameters. After data accuracy and veracity, the algorithm and parameters have the highest influence on predictive power. Although in many cases there is no perfect solution, there’s plenty of wiggle room for optimization. Additionally, there’s always some theoretical near-optimal solution that can be arrived at mostly through calculation and decision making. Yet, arriving at these theoretical optimizations is exceedingly difficult. In most cases, the decisions will be heuristic and any errors will be removed after experimentation. Even with extensive industry experience and professionalism, there is just too much room for error.
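The algorithm-and-hyperparameter selection loop that AutoML automates can be illustrated with a toy, dependency-free example: evaluate each candidate setting on held-out data and keep the best. The k-NN model and function names here are ours, purely for illustration; production AutoML systems use much smarter search than an exhaustive grid.

```python
def knn_predict(train, x, k):
    # 1-D k-nearest-neighbours regression: average the labels of
    # the k training points closest to x.
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def grid_search(train, val, ks):
    """Try each hyperparameter value and keep the one with the
    lowest validation error -- the core loop AutoML automates."""
    best_k, best_err = None, float("inf")
    for k in ks:
        err = sum((knn_predict(train, x, k) - y) ** 2
                  for x, y in val) / len(val)
        if err < best_err:
            best_k, best_err = k, err
    return best_k, best_err
```

Swap the inner loop for a search over whole model families and preprocessing pipelines and you have the essence of AutoML: turning heuristic, error-prone manual choices into a systematic, measurable experiment.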


What NetOps Teams Should Know Before Starting Automation Journeys

Like all people, NetOps professionals enjoy the results of a job well done. So, while the vision of their automation journey may be big, it’s important to start with a small, short-term project that can be completed quickly. There are a couple of benefits to this approach: quick automation wins give NetOps teams confidence for future projects, and such projects generate data and feedback that teams can convert into learnings and insights for the next one. This approach can also be applied to bigger, more complex automation projects. Instead of taking on the entire scale of the project at once, NetOps teams can break it down into smaller components. ... The advantages of this approach are the same as in the quick-win scenario: a better likelihood of success and more immediate feedback and data to guide NetOps teams through the entire process. Finally, as talented as most NetOps teams are, they are not likely to have all of the automation expertise in-house at any given time.


Reducing the Cognitive Load Associated with Observability

Data points need to be filtered and transformed in order to generate the proper signals. Nobody wants to be staring at a dashboard or tailing logs 24/7, so we rely on alerting systems. When an alert goes off, it is intended for human intervention, which means transforming the raw signal into an actionable event with contextual data: criticality of the alert, environments, descriptions, notes, links, etc. It must be enough information to direct attention to the problem, but not so much that the responder drowns in noise. Above all else, a page alert should require a human response. What else could justify interrupting an engineer from their flow if the alert is not actionable? When an alert triggers, analysis begins. While we eagerly wait for anomaly detection and automated analysis to fully remove the human factor from this equation, we can use a few tricks to help our brains quickly identify what’s wrong. ... Thresholds are required for alert signals to trigger. When it comes to visualization, the people who investigate and detect anomalies need to consider these thresholds too. Is this value too low, or unexpectedly high?
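The raw-signal-to-actionable-event transformation described above can be sketched as a small function: fire only past the threshold, and attach the context a responder needs. The field names below are illustrative, not taken from any particular alerting product.

```python
def raise_alert(metric: str, value: float, threshold: float,
                context: dict):
    """Turn a raw threshold breach into an actionable page.
    Returns None when nothing is wrong -- nobody should be woken
    up by a signal that requires no human response."""
    if value <= threshold:
        return None
    return {
        "summary": f"{metric} at {value} exceeds threshold {threshold}",
        # Contextual data that directs attention without drowning
        # the responder in noise:
        "criticality": context.get("criticality", "warning"),
        "environment": context.get("environment", "unknown"),
        "runbook": context.get("runbook"),
    }
```

The design choice worth noting is the `None` path: an alerting pipeline that stays silent below the threshold is what keeps page alerts actionable.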


The Urgent Need for AI in GRC and Security Operations: Are You Ready to Face the Future?

Another area where AI tools are transforming the IT industry is security operations. Businesses face an ever-increasing number of cyberthreats, and it can be challenging to stay ahead of these threats. AI tools can help by automating many security operations, such as threat detection and incident response. They can also help with risk assessment by analyzing large amounts of data and identifying potential vulnerabilities. The benefits of AI tools in the IT industry are clear. By automating processes and improving decision-making, businesses can save time and money while reducing the risk of errors. AI tools can also help businesses to be more agile and responsive to changes in the market. However, the use of AI tools in the IT industry also presents some challenges. One of the key challenges is the need for specialized technical expertise. While AI tools can be user-friendly, businesses still need to have specialized expertise to use the tools effectively.



Quote for the day:

"People seldom improve when they have no other model but themselves." -- Oliver Goldsmith

Daily Tech Digest - March 21, 2023

CFO Priorities This Year: Rethinking the Finance Function

Marko Horvat, Gartner VP of research, adds CFOs must transition away from optimization and start thinking about transformation. “Making things faster, more accurate, and with less effort has benefits, but each round of improvement brings diminishing returns,” he says. “CFOs must start thinking about ways to transform the function to build and enhance capabilities, such as advanced data and analytics, in order to truly unlock more value from the finance function.” Sehgal says CFOs should be asking questions including, how do we create a futuristic vision for finance? Should short-term gains override longer-term benefits? And how do we fund digital transformation with the current pressures? “CFOs are focused on elevating the role of finance in the organization to be a value integrator across the enterprise, as well as enhancing value through new strategies that not only support development but that also promote innovations for capital allocation,” he explains.


Build Software Supply Chain Trust with a DevSecOps Platform

When building an application, developers, platform operators and security professionals want to monitor vulnerabilities throughout the software supply chain. The challenge comes when multiple vulnerability scanners are used at different stages in the pipeline and different teams are notified and required to take action without proper coordination. A security-focused application platform can build in scan orchestration to not only detect vulnerabilities but also to map those findings to a workload. This feature allows developers to identify issues throughout the life cycle of their applications and helps them resolve issues, shifting responsibility left with a higher degree of automation. Moreover, the platform can build trust with security analysts by showing the performance of application developers and helping them understand the risk that teams are facing. Once a platform detects these vulnerabilities, both at build time and at runtime, it needs to help developers triage and remediate them.


Developers, unite! Join the fight for code quality

Writing good code is a craft as much as any other, and should be regarded as such. You have every right to advocate for an environment and an operational model that respect the intricacies of what you do and the significance of the outcome. It’s important to value, and feel valued for, what you do. And not just for your own immediate happiness—it’s also a long-term investment in your career. Making things you don’t think are any good tends to wear on the psyche, which doesn’t exactly feed into a more motivated workday. In fact, a study conducted by Oxford University’s Saïd Business School found that happy workers were 13% more productive. What’s good for your craft is ultimately best for business—a conclusion both engineers and their employers can feel good about. Software plays a big role at just about every level of society—it’s how we create and process information, access goods and services, and entertain ourselves. With the advent of software-defined vehicles, it even determines how we move between physical locations.


Why data literacy matters for business success

Aligning data strategies with overall business strategy and operations is no mean feat. Chief Data Officers (CDOs) are ideal candidates for marrying data analytics with the wider business, given their appreciation of informed decision making and their desire to foster a data culture in which internal information is properly managed and engaged with throughout the organisation. Moreover, their understanding of the technology landscape will assist when making platform and software selections. This stands to benefit all departments, which will gain access to the tools and skills needed to work with data and derive insights. CDOs also embody the “can do” approach to professional development, believing it’s possible to train employees in data-related skills, regardless of their technical proficiency. There’s a well-established correlation between hiring a CDO and business success: research from Forrester suggests that 89% of organisations harnessing analytics to improve operations, and that appointed a CDO to oversee the process, have seen a positive business impact.


What the 'new automation' means for technology careers

AI is already playing a part in handling technology tasks. A survey released by OpsRamp finds that more than 60% of companies are adopting AIOps, which applies AI to monitoring and improving IT operations. The greatest IT operations challenge for enterprises in 2023 was automating as many operations as possible, cited by 66% of respondents. The main benefits of AIOps seen so far include a reduction in open incident tickets (65%), a reduction in mean time to detect or restore (56%), and automation of tedious tasks (52%). The latest IT staffing data from Janco Associates finds that recent layoffs affected data center and operations staff, with business leaders looking to automate IT processes and reporting. The apparent trend here is that those pursuing careers in technology need to look higher up the stack -- at applications and business consulting. However, there's still a lot of work for people working with the plumbing and code. Unfortunately, getting to automation-driven abstraction -- especially if it involves AI -- requires some manual work up front.


How Cybersecurity Delays Critical Infrastructure Modernization

For critical infrastructure organizations, building a security strategy that works from both an operational technology (OT) and consumer data perspective is not as straightforward as it is in many other industries. Safely storing this data while implementing the latest technology has proved to be a significant challenge across the sector, meaning the service provided by these companies is being hampered. These concerns have prevented a range of technologies from being integrated quickly or at all. These technologies include renewable energy projects, electric vehicle technology, natural disaster contingencies and moving towards smarter grid solutions to replace aging infrastructure. Older operational technology becomes difficult to update and secure sufficiently while the use of third-party software also reduces the level of control organizations have over their data. In addition to this, a lack of automation increases the chances of human error, which could present opportunities to cybercriminals.


What Are Foundation AI Models Exactly?

ChatGPT can analyze input data against 175 billion parameters and demonstrates a deep understanding of written language. The smart tool can answer questions, summarize and translate text, produce articles on a given topic, write code, and much more. All you need to do is provide it with the right prompts. OpenAI’s groundbreaking product is just one example of the foundation models that are transforming AI application development as we know it. Instead of training multiple models for separate use cases, you can now leverage a pre-trained AI solution to enhance or fully automate tasks across multiple departments and job functions. With foundation AI models like those behind ChatGPT, companies no longer have to train algorithms from scratch for every task they want to enhance or automate. Instead, you only need to select a foundation model that best fits your use case, and fine-tune its performance for the specific objective you’d like to achieve.


As hiring freezes and layoffs hit, tech teams struggle to do more with less

There are a number of organizational hurdles holding back employees’ learning and development, Pluralsight found. For HR and L&D directors, budget restraints and costs were identified as the biggest barriers to upskilling (30%). This was also true for technology leaders, with 15% blaming financial restraints for getting in the way of employee upskilling. For technology workers themselves, finding time to invest in their own training was identified as the main issue: 42% of workers said they were too busy to upskill, with 18% saying their manager didn’t allow any time during the week to learn new skills. As a result, 21% of tech workers feel pressured to learn outside of work hours. ... However, the report added that giving employees time to invest in their training, address skills gaps and gain valuable growth opportunities are key factors in retention. “Upskilling during work hours will hinder short-term productivity, and managers often bear the brunt of this stress. But don’t sacrifice short-term productivity for long-term success,” the report said.


CISA kicks off ransomware vulnerability pilot to help spot ransomware-exploitable flaws

CISA says it will seek out affected systems using existing services, data sources, technologies, and authorities, including CISA's Cyber Hygiene Vulnerability Scanning. CISA initiated the RVWP by notifying 93 organizations identified as running instances of Microsoft Exchange Server with a vulnerability called "ProxyNotShell," widely exploited by ransomware actors. The agency said this round demonstrated "the effectiveness of this model in enabling timely risk reduction as we further scale the RVWP to additional vulnerabilities and organizations." Eric Goldstein, executive assistant director for cybersecurity at CISA, said, "The RVWP will allow CISA to provide timely and actionable information that will directly reduce the prevalence of damaging ransomware incidents affecting American organizations. We encourage every organization to urgently mitigate vulnerabilities identified by this program and adopt strong security measures consistent with the U.S. government's guidance on StopRansomware.gov."


A Simple Framework for Architectural Decisions

Technology Radar captures techniques, platforms, tools, languages, and frameworks, along with their level of adoption across an organization. However, this may not cover all needs. Establishing consistent practices for concerns that cut across different parts of the system can be helpful. For example, you might want to ensure all logging is done in the same format and with the same information included. Or, if you’re using a REST API, you might want to establish conventions around how it should be designed and used, such as which headers to use or how to name things. Additionally, if you’re using multiple similar technologies, it can be useful to provide guidance on when to use each one. Technology Standards define the rules for selecting and using technologies within your company. They ensure consistency across the organization and reduce the risk of adopting a new technology in a suboptimal way.
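The logging convention mentioned above could, for instance, be codified as a small shared helper so that every service emits log lines in the same structure. This is a minimal sketch under assumed conventions; the field names (timestamp, service, level, message) and the helper itself are illustrative, not part of the original article.

```python
import json
from datetime import datetime, timezone

# Illustrative company-wide logging standard: all services funnel log
# output through one helper so the format and required fields stay uniform.

ALLOWED_LEVELS = {"DEBUG", "INFO", "WARNING", "ERROR"}

def format_log(service: str, level: str, message: str, **extra) -> str:
    """Return a single JSON log line in the agreed structure."""
    if level not in ALLOWED_LEVELS:
        raise ValueError(f"unknown log level: {level}")
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "service": service,
        "level": level,
        "message": message,
        **extra,  # optional structured context, e.g. request_id
    }
    return json.dumps(record)

line = format_log("billing", "INFO", "invoice created", request_id="abc-123")
parsed = json.loads(line)
print(parsed["service"], parsed["level"])  # billing INFO
```

A standard like this is easy to enforce in code review and gives downstream tooling (search, alerting) a stable schema to rely on, which is exactly the kind of cross-cutting consistency the section describes.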



Quote for the day:

"Leadership is not about titles, positions, or flow charts. It is about one life influencing another." -- John C. Maxwell