Daily Tech Digest - March 09, 2023

Understanding Data Security Posture Management for Protecting Cloud Data

To help organizations protect against data loss, a new approach emerged in 2022 in the form of data security posture management (DSPM). Today it is proving to be a critical tool for effective data security because of its laser focus on the data layer. DSPM allows organizations to identify all their sensitive data, monitor and identify risks to business-critical data, and remediate and protect that information. To get a better handle on this new approach and what it does, let’s consider what DSPM is not. ... DSPM’s ability to autonomously discover, monitor, and remediate risk creates an effective tool for an organization’s security posture. Beyond that, your DSPM solution of choice needs to operate in a manner that doesn’t require deployment of agents everywhere. Your DSPM should be easy to get up and running and allow you to quickly realize benefits by mining meaningful amounts of data to deliver visibility into what's going on within your environment from a risk perspective. DSPM solutions are proven to deliver accurate results and offer significant ROI for organizations.


Arctic Wolf CEO on Incident Response, M&A, Cyber Insurance

Many organizations struggle with preparing for a security incident even if they have an internal security team and have procured cyber insurance, Schneider says. Businesses often haven't prepared their systems or documented escalation paths or how their environment is set up, which makes it nearly impossible to quickly get information over to an incident response provider in the event of an attack, Schneider says. "The less time that you're spending on compiling information, the more time you're able to spend on remediating the threat and the less time you've taken between an incident occurring and the beginning of a response," Schneider says. Most companies don't know what they need to have documented or prepared in the event of a security incident and therefore end up reaching out to their insurance provider or incident responder while an attack is taking place to see what questions they have, Schneider says. Although the answers to these questions are relatively static, he says it takes a lot of time to gather the information needed to respond.


UK government introduces revised data reform bill to Parliament

“Co-designed with business from the start, this new bill ensures that a vitally important data protection regime is tailored to the UK’s own needs and our customs,” said science, innovation and technology secretary Michelle Donelan. “Our system will be easier to understand, easier to comply with, and take advantage of the many opportunities of post-Brexit Britain. No longer will our businesses and citizens have to tangle themselves around the barrier-based European GDPR [General Data Protection Regulation]. “Our new laws release British businesses from unnecessary red tape to unlock new discoveries, drive forward next-generation technologies, create jobs and boost our economy.” The government added the revised bill will also support increased international trade without creating extra costs for businesses already compliant with existing data protection rules, as well as boost public confidence in the use of artificial intelligence (AI) technologies by clarifying the circumstances in which safeguards apply to automated decision-making.


Municipal CISOs grapple with challenges as cyber threats soar

"The diversity of our business services and the corresponding diversity of systems is unparalleled in that no organization does what our municipal government does," Michael Makstman, CISO for the City and County of San Francisco and co-chair of the Coalition of City CISOs, tells CSO. "We fly planes, we pave roads, we provide public safety services," Makstman says. "We operate one of the largest, if not the largest, trauma centers on the West Coast. We support many legal professionals for some of the largest legal firms in the country. At the same time, we make sure that vulnerable populations have access to food and care. We have an outstanding municipal transportation network. We have buses and subways and our world-famous cable car." ... CISOs of municipal organizations of all sizes are required to deftly handle the politics of the governments they serve and the individual service providers themselves, Hamilton says. CISOs are not always welcomed into agencies that do not directly employ them.


Decoding Digital Twins: Exploring the 6 main applications and their benefits

Although the roots of digital twins go back to NASA’s Apollo program in 1970, the concept of creating digital replicas of physical assets and visualizing/simulating/predicting in a virtual world is extremely suitable for companies that are trying to make Industry 4.0 a reality or are aiming toward future industrial metaverse projects. Make no mistake: While the definition of a digital twin may be straightforward, its applications are numerous. In 2020, we published our first market research on the topic and showcased that there may, in fact, be 200 or more different types of digital twins. The feedback we received from you was that classification helps to ensure apples-to-apples digital twin comparisons, but questions remain about the hotspots of activity. Therefore, as part of our new 233-page Digital Twin Market Report 2023-2027, we classified 100 real digital twin projects along the three dimensions and found six main areas of activity. These six digital twin application hotspots cover two thirds of all digital twin projects we analyzed.


Cloud trends 2023: Cost management surpasses security as top priority

For the first time since Flexera began its annual survey of cloud decision-makers, security was not the top challenge reported by respondents. As revealed in the Flexera 2023 State of the Cloud Report, released on March 8, 2023, 82% of respondents from across all organizations indicated that their top cloud challenge is managing cloud spend, edging out security at 79%. These shifting challenges may be the result of organizations becoming increasingly comfortable with cloud security, while needing to manage the greater spend associated with their increased reliance on cloud services. Lack of resources or expertise was reported as a top cloud challenge by 78% of respondents, making it the third major cloud challenge for today’s businesses. ... Cloud cost management responsibilities are often spread across teams within an organization. Year over year, vendor management and finance or accounting teams have less responsibility for cloud expenses. Instead, initiatives are shifting to finops teams. Finops, the practice of cloud cost management, is a growing priority.


Why IT communications fail to communicate

If you prefer to communicate via documentation — and encourage everyone in your organization to follow suit — four facets of communication are getting in your way.

Language: Every natural language, be it English, Latin, or even Esperanto, is imprecise at best. Synonyms are approximate, not exact; words are defined by other words, leading us down the path of infinite recursion; different people bring different vocabularies and assumptions to their attempts to interpret what they’re reading. ...

Disambiguation: No matter how hard even the best writers might try, they’ll never create a document that’s completely free of ambiguity and entangled logic. In making the attempt, many find themselves trudging along the literary path of a different profession for which ambiguity and the likelihood of misinterpretation are equally problematic. ...

Disagreements: No matter how well a business analyst (going back to our app dev example) describes their design, the stakeholders they’ve worked with to create it aren’t always going to agree on all points. Stakeholder disagreements unavoidably turn into design compromises and, worse, inconsistent specifications.


Cloud Native Testing Trends for 2023

Testing in a cloud native environment can be challenging, as it involves testing across multiple platforms and services, using a diverse set of tools that can vary greatly across teams and workflows. The distributed nature of cloud native applications means that testing must be performed on a larger scale, with more components to be tested. DevOps teams must also consider the impact of the underlying infrastructure on testing, as changes to the infrastructure can affect the behavior of the application. To overcome these challenges, organizations are adopting a cloud native testing strategy that incorporates automation and integrates testing into the development process. ... DevOps engineers are increasingly taking ownership of testing, and tools like Testkube can help them easily integrate testing into their workflows. By taking a collaborative approach to testing, DevOps engineers can ensure that testing is done throughout the development life cycle, reducing the risk of bugs slipping through to production.


Stress-Test Your Software to Prevent a Southwest-Type Calamity

Stress tests typically subject a software system to very large workloads in the form of a high volume of requests or a high rate of failure in individual components. “The idea is to simulate a worst-case scenario with potentially unpredictable side effects,” Padhye says. Testing reveals how a system will react to slowdowns, memory leaks, security issues, and data corruption. “Across performance-based testing, stress tests must be paired with load tests,” Feloney advises. “For example, spike tests examine how a system will fare under sudden, high ramp-up traffic, and soak tests examine the system’s sustainability over a long period.” Stress tests can either be performed in an isolated environment designed for quality purposes, or directly on the live customer-facing deployment. “While it sounds scary, testing a live deployment is far more representative of a real extreme scenario, because it also incorporates the human factor presented by users responding to the simulated events in a hard-to-predict way,” Padhye explains.
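The spike-test idea described above can be sketched in miniature. The example below is an illustrative toy, not a real load-testing harness: FakeService and its capacity limit are invented stand-ins for a backend that rejects work under overload, and an actual stress test would drive a staging or live environment with a dedicated load generator.

```python
import concurrent.futures
import threading
import time

class FakeService:
    """Stand-in for a backend with fixed capacity: beyond `capacity`
    concurrent requests it rejects work, as an overloaded system might."""
    def __init__(self, capacity=8):
        self._capacity = capacity
        self._active = 0
        self._lock = threading.Lock()

    def handle(self):
        with self._lock:
            if self._active >= self._capacity:
                return "rejected"
            self._active += 1
        time.sleep(0.1)  # simulated work while holding a capacity slot
        with self._lock:
            self._active -= 1
        return "ok"

def spike_test(service, n_requests):
    """Fire n_requests at the same instant and return the rejection rate."""
    barrier = threading.Barrier(n_requests)

    def worker(_):
        barrier.wait()  # synchronize so the ramp-up is truly sudden
        return service.handle()

    with concurrent.futures.ThreadPoolExecutor(max_workers=n_requests) as pool:
        results = list(pool.map(worker, range(n_requests)))
    return results.count("rejected") / n_requests

# Under capacity, every request succeeds; a sudden spike produces rejections.
baseline_error_rate = spike_test(FakeService(), 4)
spike_error_rate = spike_test(FakeService(), 32)
```

A soak test would instead hold a moderate request rate for hours and watch for slow degradation such as memory growth, rather than firing one simultaneous burst.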


Innovating in an economic downturn: 4 tips

During a downturn, you may lose the ability to hire full-time employees but still have things to do and room in your budget. Finance might be more open to a capital expense than an operational expense during these times. This is a perfect opportunity to bring in outside help to take care of your distractions so your team can spend time and energy on innovation. Distractions take a lot of time and effort but aren’t core to what an organization does. For example, organizations today spend a lot of time supporting their applications and systems. As a result, many choose to hire outside firms to handle these activities so that their internal teams can focus on innovation and projects that grow their top line. ... Sometimes you simply don’t have internal resources with an invention mindset or experience innovating. Consultants can help fill the gap, facilitating discussions that drive innovation and partnering with your teams to show them how to work through the innovation process. External experts provide a critical outside perspective and facilitate conversations that drive meaningful innovation.



Quote for the day:

"Leadership without mutual trust is a contradiction in terms." -- Warren Bennis

Daily Tech Digest - March 08, 2023

How AI can help find new employees

AI-based recruitment platforms can find "more diverse talent pools, and [offer] a more accurate approach to qualifying candidates by matching skills rather than on a job title match or other signal,” said Forrester Principal Analyst Betsy Summers. Some of the use cases for talent acquisition platforms are efficiency-oriented, since they’re used for interview scheduling, managing the candidate application process, assisting recruiters with follow-ups, and managing the applicant pipeline. Other platforms also focus on bias mitigation such as adjusting language in job descriptions and candidate communications to be more inclusive. Still others include remote video capabilities that automate early interviews. ... Chatbots are typically employed by recruitment platforms to engage job seekers and ask them about their interests and skills; the bots can then present candidates with open positions for which they’re most qualified to apply.


The EU digital strategy: The impact of data privacy on global business

First, companies may need to assess the impact of the EU digital strategy on their business and their business model and need to identify where changes are required and where additional care needs to be taken with respect to current processes. This specifically applies to the four acts concerning data governance, digital services, AI, and data. Second, companies may need to investigate the possibilities for the applicability of the acts within their organization. This includes possible access to markets that other competitors led in the past through their access to end-user data. Finally, as the EU digital strategy continues to evolve, organizations may be able to further collaborate with governing bodies on the interpretation of the regulations. Specifically, in the case of AI, there are several companies that may find it very challenging to work with their current model in the new guidance. ... Additionally, companies should revisit their current processes for data collection and AI. 


5 best practices for scaling AI in the enterprise

One of the most important challenges of implementing AI is defining the business problem the enterprise is trying to solve. As the saying goes, don’t end up with an answer that’s looking for a question. Simply deploying new forms of technology isn’t the right approach. Next, examine the issues and determine if AI is the best way to tackle the problem. There are other digital technologies well adapted to simple problems. To help ensure success, define the business issue clearly and determine what course to take at the outset — some may not need AI. In automation, the end-to-end process is disaggregated and divided into smaller parts. Each part is then digitized, and the parts are then reaggregated into the value chain. ... So, AI-based transformation is as much about designing a new operating model, cross-skilling the workforce and integrating it into upstream and downstream processes as it is about neural nets and model management. It’s important to note that AI in the enterprise is 20% about technology and 80% about people, processes and data.


Data Privacy: A Public Policy Challenge

In today’s world, improved computational capabilities have enabled businesses and public and private organizations to better structure their data in the form of huge databases and leverage analytics to generate business intelligence and contribute to value creation. With these computational and analytical capabilities, there are increasing avenues to develop profiles of humans’ behavior around their purchasing, spending and consumption habits, their genetic profiles, their travel history, medical history, etc. While these capabilities add value to society, they also come with risks of intruding into individuals’ privacy. Unfortunately, the discourse around personal data centers only on its protection from leakage or prevention of breach. However, the primary objective of safeguarding personal data is to ensure that such data are not processed to create a more inequitable society and bring about unfair outcomes. The amount of discrete data available today allows us to bring more nuance and innovation into public policy, therefore aiding in ironing out imbalances within society.


Why Database Administrators Are Rising in Prominence

“Currently, there’s a very disjointed relationship between DBAs and the business problem they are solving for customers,” Neiweem says. He points out DBAs are often the last touch point for customers, but this is changing as business and marketing leaders glean deeper insights from customer data and look to achieve personalization at scale. “It’s no longer effective to go through this disconnected channel to get answers about customer data,” he says. “DBAs are now moving into a consulting role where they can take data, analyze and action it, enabling marketing and other internal teams to build stronger relationships with customers through those data insights.” Arun Chandrasekaran, product manager for ManageEngine, adds DBAs are often the first link in the chain of acquiring IT tools. “While the decision-makers decide on what to buy, DBAs can influence their decision,” he says. “Since the responsibility of managing the data warehouse falls on DBAs, they work with the stakeholders to understand the business requirements.


Designing For Data Flow

Put simply, the bottlenecks in designs are being defined by the type and volume of data, and the speed at which it needs to be processed. “SoCs are getting bigger and more complex, fitting everything in the actual chip,” he said. “So data exchange, which used to happen at a system level, is now happening within the IC. This means efficient circuit design for data transfer is required to achieve the overall expected performance. The data flow design at the logic level is quite abstract. In the past, the chips were smaller and mostly driven by specific functionality, so there were only a few stages required to plan for data flow. With bigger chips, this has changed, and more effort is needed to understand the data sampling and placement of the appropriate functional modules next to each other, to achieve optimal data flow.” Data integrity also is becoming a challenge. In addition to crosstalk and various types of noise, which are prevalent at advanced nodes, there are a variety of aging effects that can appear over longer lifetimes, thermal mismatch between increasingly heterogeneous components, and latent defects that can become real defects as the amount of processing required on a chip or in a package increases.


Interacting with Machines through IoT and AI: A Revolution in Home and Workplace Technology

The seamless integration of IoT and AI has completely revolutionized the way we interact with machines, offering novel and innovative solutions for both homes and workplaces alike. With the aid of cutting-edge technologies like machine learning, deep learning algorithms, gesture control, and wearable devices, the potential of IoT and AI to create value across a range of applications is colossal. As these technologies continue to advance, the potential for further groundbreaking advancements in the future is undeniable. It is my sincere hope that this blog has been informative and engaging, offering you valuable insight into the current and future state of these two fields. That said, it is also important to remain cognizant of the potential ethical and privacy concerns that come with their widespread adoption. As with any rapidly-evolving technology, it is essential that we consider and address these concerns to ensure that the development and application of these technologies align with our societal values and principles.


How Skyscanner Embedded a Team Metrics Culture for Continuous Improvement

Changing Culture was probably the part that we put the most effort into, because we recognised that any mis-steps could be misinterpreted as us peering over folks’ shoulders, or even worse, using these metrics intended to signal improvement opportunities to measure individual performance. Either of those would be strongly against the way that we work in Skyscanner, and would have stopped the project in its tracks, maybe even causing irreversible damage to the project’s reputation. To that end we created a plan that focused on developing a deep understanding of the intent with our engineering managers before introducing the tool. This plan focused on a bottom-up rollout approach, based on small cohorts of squad leads. Each cohort was designed to be around 6 or 7 leads, with a mix of people from different tribes, different offices, and different levels of experience, covering all our squads. The smaller groups would increase accountability, because it’s harder to disengage in a small group, and also create a safe place where people can share their ideas, learnings, and concerns.

Managing data is the key to better citizen services

As important as cyber-resilience is, there are also other issues associated with the unchecked growth of a data estate. Top of mind for public sector CIOs is keeping an eye on the purse strings and being accountable to taxpayers for the money they spend. Massive amounts of data cost a similarly large amount of funding to maintain, says Mr Hatchuel. “You have to put data somewhere and managing the cost is very challenging for CIOs.” A modern data protection and management solution allows CIOs to manage their data estates in a cost-effective way, as well as keep them secure. The solution should also protect and manage external data sources which, in a contemporary environment, could be a new public cloud service. “Data can pop up anywhere, so you need a holistic solution able to look across the whole data estate and manage and understand different data sources. CIOs are also advised to understand the Shared Responsibility model of most public cloud services; for the majority of providers, that burden falls to the customer.”


4 ways for CIOs to strike a balance between operation and innovation

“Striking the right balance between innovation and operations is essential for any organization to succeed and stay competitive. Innovation is about exploring new ideas, embracing change, and striving for progress. On the other hand, operations consist of taking those ideas and making them a reality, efficiently utilizing resources, and ensuring that all the necessary steps are in place to deliver the desired result.” ... “The load that IT organizations carry with snowballing technical debt has a direct and tangible drain on IT innovation. While it’s obvious on the surface, every dollar spent on technical debt is a dollar that IT cannot invest in innovation and transformation. Maintaining, securing, and operating critical but aging applications and infrastructure is a boat anchor that drags down innovation and must be addressed continuously by IT leadership, architects, and CTOs before it blows up in a disaster. Start by eliminating the 'kick the can down the road' strategy of ignoring technical debt; instead, prioritize actual application modernization investments that can break the pattern and open up innovation cycles as part of a continuous modernization strategy.”



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham

Daily Tech Digest - March 07, 2023

The four qualities of resilient teams

The first quality is team confidence, or the belief that the team can handle just about anything that comes its way. Team confidence, the authors note, isn’t really the sum of a lot of individual confidence, for swollen egos don’t benefit the team. The goal is collective and mutual confidence. And not too much, because overconfidence undermines success. “Moderately high confidence offers a healthy balance of confidence and caution,” the authors write. To build team confidence, managers are urged to make goals and processes clear, empower the team by encouraging members to participate in decision-making, cheer successes, and provide useful feedback during struggles. The second quality is having the foresight to create a teamwork road map, or a plan that “reflects the extent to which all team members know what their own roles and responsibilities are, and the extent to which they agree on what all other team members’ roles and responsibilities are. Team members may even know how to perform one another’s roles so that at any point, one person can step in for another.”


What is zero trust? A model for more effective security

Removing that implicit trust takes time, according to experts, and most organizations are far from accomplishing that objective. “It’s a journey of change,” says Chalan Aras, a member of the Cyber & Strategic Risk practice at Deloitte Risk & Financial Advisory. Zero trust is also a collection of policies, procedures, and technologies. Organizations that want to implement an effective zero-trust strategy must have an accurate inventory of assets, including data. They must have an accurate inventory of users and devices as well as a robust data classification program with privileged access management in place, Valenzuela says. Other components include comprehensive identity management, application-level access control, and micro-segmentation. Another important element is user and entity behavior analytics, which uses automation and intelligence to learn normal (and therefore accepted and trusted) user and entity behaviors from anomalous behaviors that shouldn’t be trusted and therefore denied access.
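The deny-by-default posture and application-level access control mentioned above can be made concrete with a small sketch. Everything here (the roles, device-posture flag, classification labels, and POLICY table) is an illustrative assumption, not any vendor's model: the point is only that access is granted when every check passes, never because of network location.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_role: str
    device_compliant: bool
    resource_classification: str  # "public", "internal", or "restricted"

# Hypothetical policy: which roles may reach each data classification.
POLICY = {
    "public":     {"any"},
    "internal":   {"employee", "admin"},
    "restricted": {"admin"},
}

def allow(req: Request) -> bool:
    """Deny by default; grant only when identity AND device posture pass."""
    if not req.device_compliant:          # non-compliant device: never trusted
        return False
    allowed_roles = POLICY.get(req.resource_classification, set())
    return "any" in allowed_roles or req.user_role in allowed_roles
```

A real deployment would also factor in the behavior-analytics signals the article mentions, denying a request whose pattern deviates from the learned baseline even when the static checks pass.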


Will ChatGPT make low-code obsolete?

Unlike technologies of the past, which typically automate or speed up a repetitive process (manufacturing, logistics, transportation, etc.), ChatGPT does something entirely new – enhancing the creativity of the user. While we can debate whether this is true creativity or not, ultimately if the outcome is the same, is it not still creative? Think of how ChatGPT could help a software developer crack a particularly challenging piece of code, or how it could optimise existing code. It can also help developers be more creative by reducing the repetitive/boring part of their jobs so they can focus on the parts they love, leaving them more time to flex their creative muscles. Going beyond the developer use case, ChatGPT has the ability to democratise coding itself by providing a way for non-coders to develop applications themselves – in much the same way that low-code promises, but on steroids. This “democratisation of IT” promises a new wave of innovation by enabling organisations to create new processes without the need to engage with IT at all. ChatGPT could achieve the same outcome as low-code but in half the time.


SBOMs should be a security staple in the software supply chain

NIST's standard includes multiple elements, from the software component used and its supplier to version numbers and access to the component's repository. Version levels must be evaluated against release levels, potential threats found, and risks determined. "Unwinding large applications, from open-source operating systems, to in-house developed applications, to third-party 'shrink-wrapped' stacks is fraught with contextual challenges, inventory methods, and manual verification, all of which are prone to error," Masserini writes. While the process of identifying and reporting issues is codified, "it does not address the issue of manually maintaining such an inventory and consistently validating its contents," he says. Automation must be put into every step of the process, from generating and publishing SBOMs to ingesting them – and organizations must then bring vulnerability remediation into their current app security programs without having to adopt new workflows, Lambert says. There are other considerations. SBOMs deliver a lot of information, but organizations need to decide how they're going to use it.
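As a rough illustration of the "ingest and act" step, the sketch below cross-references a minimal CycloneDX-style component list against a hypothetical advisory feed. The SBOM fragment and the KNOWN_VULNERABLE set are invented for the example; real pipelines would generate SBOMs with tools such as Syft or CycloneDX generators and pull advisories from sources like OSV or the NVD.

```python
import json

# Minimal CycloneDX-style SBOM fragment (illustrative, not a full document).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "log4j-core", "version": "2.14.1",
     "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
    {"name": "requests", "version": "2.31.0",
     "purl": "pkg:pypi/requests@2.31.0"}
  ]
}
"""

# Hypothetical advisory feed: (name, vulnerable version) pairs.
KNOWN_VULNERABLE = {("log4j-core", "2.14.1")}

def affected_components(sbom_text, advisories):
    """Return the purl of every SBOM component matching an advisory."""
    sbom = json.loads(sbom_text)
    return [
        c["purl"]
        for c in sbom.get("components", [])
        if (c["name"], c["version"]) in advisories
    ]

hits = affected_components(sbom_json, KNOWN_VULNERABLE)
```

Automating this lookup on every new advisory is what lets a disclosed zero-day be triaged in minutes rather than through manual inventory spelunking.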


Digital twins could be the key to successful automation

The primary advantage of the digital twin is that it evolves as automation evolves. As a result, if any changes are applied to the automation in the RPA platform, those same changes are reflected in the twin, ideally in real-time or at least near real-time. Operational metrics are also accessible and displayed where the twin resides so that it can be monitored and continuously improved. Beyond changes and operational metrics, a digital twin in automation enables an organization to compile accurate documentation and detailed audit trails for the entire automation estate and maintain it in a single, centralized repository. Doing so not only addresses the problem of misplaced or lost process design documents, but also solves one of the major pain points of automating: An inability to visualize and understand how automations have changed over time. Maintaining digital twins for all automations in a central location — regardless of the RPA platform in which they are designed, deployed and orchestrated — vastly improves automation standardization, governance and visibility.
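In code, the "changes are mirrored and audited" behavior described above might look like the toy sketch below. AutomationTwin and its fields are hypothetical, not an API from any RPA product: the point is simply that every change applied to the automation is reflected in the twin and recorded in a centralized audit trail.

```python
import datetime

class AutomationTwin:
    """Illustrative digital twin of one automation: mirrors its current
    configuration and keeps an audit trail of every change."""
    def __init__(self, name, config):
        self.name = name
        self.config = dict(config)
        self.audit_log = []

    def apply_change(self, key, value, author):
        """Mirror a change made in the RPA platform and record who/when/what."""
        old = self.config.get(key)
        self.config[key] = value
        self.audit_log.append({
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "who": author,
            "field": key,
            "before": old,
            "after": value,
        })

twin = AutomationTwin("invoice-processing", {"schedule": "hourly"})
twin.apply_change("schedule", "daily", author="ops@example.com")
```

Stored in one central repository, records like these are what let an organization reconstruct how an automation has evolved over time, regardless of which RPA platform runs it.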


Stepping up: Becoming a high-potential CEO candidate

Stanford University economics professor Nicholas Bloom, who’s spent his career researching CEOs, describes the reality he’s observed: “It’s frankly a horrible job. I wouldn’t want it. Being a CEO of a big company is a hundred-hour-a-week job. It consumes your life. It consumes your weekend. It’s super stressful. Sure, there’re enormous perks, but it’s also all encompassing.” Reinforcing the point, Microsoft CEO Satya Nadella describes the job as “24/7.” His late mentor Bill Campbell, who had been a CEO three times and was an influential coach to several technology industry leaders, would often remind him, “No one has ever lived to outwork the job. It will always be bigger than you.” Many CEOs secretly agree that the best job in the world is actually the one right below the CEO. There the spotlight burns less brightly, yet the opportunities to make a difference are great, as are the rewards. Without the right motivations and expectations, not only will you find that the effort required to be CEO outweighs any personal gain, but you will also be less likely to succeed. As CCHMC’s Fisher puts it, 


Enterprise IT moves forward — cautiously — with generative AI

The technology also needs human oversight. “Systems like ChatGPT have no idea what they’re authoring, and they’re very good at convincing you that what they’re saying is accurate, even when it’s not,” says Cenkl. There’s no AI assurance — no attribution or reference information letting you know how it came up with its response, and no AI explainability, indicating why something was written the way it was. “You don’t know what the basis is or what parts of the training set are influencing the model,” he says. “What you get is purely an analysis based on an existing data set, so you have opportunities for not just bias but factual errors.” Wittmaier is bullish on the technology, but still not sold on customer-facing deployment of what he sees as an early-stage technology. At this point, he says, there’s short-term potential in the office suite environment, customer contact chatbots, help desk features, and documentation in general, but in terms of safety-related areas in the transportation company’s business, he adds, the answer is a clear no.


Career paths for devops engineers and SREs

Solving business challenges today requires multidisciplinary teams and integrated solutions. If you enjoy problem-solving, shift to other organizational roles and develop broader perspectives on what’s required to deliver end-to-end solutions. One opportunity for developers is to shift to data science and machine learning roles. Tiago Cardoso, a product manager at Hyland, says, “Career paths for developers have become much more flexible and individualized, and I’m seeing a lot of new developer roles appearing, such as data engineers, ML engineers, ML architects, and MLops engineers.” He adds, “Common career paths for those in devops and SREs include positions such as systems administrator, infrastructure engineer, and cloud architect.” ... Architect roles and responsibilities vary considerably from one organization to another, but successful architects are more than just technical experts. Architects scale their expertise by helping agile teams learn, apply, and create self-organizing standards around using technology to deliver business solutions.


Zero-Day Vulnerabilities Can Teach Us About Supply-Chain Security

Writing, testing and validating whether a fix will resolve a vulnerability can take trial and error. By definition, zero-days don’t have a patch, meaning it can often be days before developers can even begin the process of patching their applications. Furthermore, software needs to go through QA cycles before a true fix is identified. This is why security controls are necessary for blocking malicious activity before it reaches runtime. Additionally, developers must analyze their software development life cycle (SDLC) and augment it before a vulnerability is announced. An asset or application inventory should be a mandatory component so that when a vulnerability is disclosed, organizations know who owns the application and who to contact. ... Securing third-party or commercial-off-the-shelf software is one of the biggest cybersecurity challenges facing every organization. Unfortunately, most vendors don’t disclose the components and libraries that make up their software, making it difficult for organizations to know whether a vulnerability affects them once it’s disclosed.
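An asset inventory pays off most when it is machine-readable. As a hypothetical sketch (the package names, versions, and SBOM fragment below are invented, not from the article), assuming components are tracked in a CycloneDX-style JSON SBOM, triage on disclosure day becomes a lookup instead of a scramble:

```python
import json

# A minimal CycloneDX-style SBOM fragment (invented example data).
sbom_json = """
{
  "components": [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "jackson-databind", "version": "2.13.4"}
  ]
}
"""

# Known-vulnerable (name, version) pairs, e.g. pulled from an advisory feed.
KNOWN_VULNERABLE = {("log4j-core", "2.14.1")}

def affected_components(sbom_text, vulnerable):
    """Return the SBOM components that match a known-vulnerable release."""
    sbom = json.loads(sbom_text)
    return [
        (c["name"], c["version"])
        for c in sbom.get("components", [])
        if (c["name"], c["version"]) in vulnerable
    ]

print(affected_components(sbom_json, KNOWN_VULNERABLE))  # → [('log4j-core', '2.14.1')]
```

Paired with ownership metadata per application, the same lookup also answers the "who do we contact" question the moment an advisory lands.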


Five Factors That Turn CISOs into Firefighters

When a CISO is referred to as a “firefighter,” it typically means that they are spending a significant amount of time responding to security incidents and putting out fires rather than being able to focus on proactively preventing those incidents from occurring in the first place. Here are some reasons why a CISO may become a firefighter:

1. Lack of resources: A CISO may not have sufficient resources (e.g., budget, staff, or technology) to implement a comprehensive cybersecurity program effectively. This can lead to security incidents that require a reactive response.
2. Insufficient risk management: A CISO may not have a robust risk management program in place, which means that security incidents are more likely to occur. Without proper risk management, a CISO may be caught off guard by security incidents and have to react quickly to mitigate the damage.
3. Lack of security awareness: Employees may not be properly trained on cybersecurity best practices, which can lead to security incidents such as phishing attacks or malware infections. ...



Quote for the day:

"Different times need different types of leadership." -- Park Geun-hye

Daily Tech Digest - March 06, 2023

Computer says no. Will fairness survive in the AI age?

A number of risks fall outside of these existing laws and regulations, so while lawmakers might wrestle with the far-reaching ramifications of AI, other industry bodies and other groups are driving the adoption of guidance, standards and frameworks - some of which might become standard industry practice even without the enforcement of law. One illustration is the US National Institute of Standards and Technology's AI risk management framework, which is intended "for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems". ... Bias is one particularly important element. The algorithms at the centre of AI decision making may not be human, but they can still absorb the prejudices that colour human judgement. Thankfully, policymakers in the EU appear to be alive to this risk. The bloc's draft EU Artificial Intelligence Act addressed a range of issues on algorithmic bias, arguing technology should be developed to avoid repeating “historical patterns of discrimination” against minority groups, particularly in contexts such as recruitment and finance.


12 programming mistakes to avoid

Some say that a good programmer is someone who looks both ways when crossing a one-way street. But, like playing it fast and loose, this tendency can backfire. Software that is overly buttoned up can slow your operations to a crawl. Checking a few null pointers may not make much difference, but some code is just a little too nervous, checking that the doors are locked again and again so that sleep never comes. ... Scaling well is a challenge and it is often a mistake to overlook the ways that scalability might affect how the system runs. Sometimes, it’s best to consider these problems during the early stages of planning, when thinking is more abstract. Some features, like comparing each data entry to another, are inherently quadratic, which means the work grows with the square of the input size and run times can balloon as data accumulates. Dialing back on what you promise can make a big difference. Thinking about how much theory to apply to a problem is a bit of a meta-problem because complexity often increases exponentially. Sometimes the best solution is careful iteration with plenty of time for load testing.
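To make the quadratic trap concrete, here is a hypothetical Python sketch (names and data invented for illustration) contrasting all-pairs comparison with a single hash-set pass. Both find the same duplicates, but the first does roughly n²/2 comparisons while the second does roughly n:

```python
def find_duplicates_quadratic(items):
    """Compare every entry to every other: ~n*(n-1)/2 comparisons."""
    dups = set()
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                dups.add(items[i])
    return dups

def find_duplicates_linear(items):
    """One pass with a hash set: roughly n operations."""
    seen, dups = set(), set()
    for item in items:
        if item in seen:
            dups.add(item)
        seen.add(item)
    return dups

data = ["a", "b", "a", "c", "b"]
assert find_duplicates_quadratic(data) == find_duplicates_linear(data) == {"a", "b"}
```

At 10 entries the difference is invisible; at 10 million, the quadratic version is the kind of scaling surprise the paragraph above warns about.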


EV Charging Infrastructure Offers an Electric Cyberattack Opportunity

The risks are not just theoretical: A year ago, after Russia invaded Ukraine, hacktivists compromised charging stations near Moscow to disable them and display their support for Ukraine and their contempt for Russian President Vladimir Putin. ... In many ways, EV charging infrastructure represents a perfect storm of technologies. The devices are connected via mobile applications and carry the same risks as other IoT devices, but they're also set to become a critical part of the transportation network in the United States, like other operational technology (OT). And because EV charging stations must be connected to public networks, ensuring that their communications are encrypted will be critical to maintaining the security of the devices, says Dragos' Tonkin. "Hacktivists will always be looking for poorly secured devices on public networks, it's important that the owners of EV put in place controls to ensure they are not easy targets," he says. "The crown jewels of the operators of EV chargers have to be their central platforms, the chargers themselves intrinsically trust the instructions pushed down from the center."


Can WebAssembly Solve Serverless’s Problems?

Wasm’s computing structure is designed in such a way that it has “shifted” the potential of the serverless landscape, Butcher said. This is due, he said, to WebAssembly’s nearly instant startup times, small binary sizes, and platform and architectural neutrality, as Wasm binaries can be executed with a fraction of the resources required to run today’s serverless infrastructure. “Contrasted with heavyweight [virtual machines] and middleweight containers, I like to think of Wasm as the lightweight cloud compute platform,” he noted. “Developers package up only the bare essentials: a Wasm binary and perhaps a few supporting files. And the Wasm runtime takes care of the rest.” An immediate benefit of relying on Wasm’s runtime for serverless is lower latency, especially when extending Wasm’s reach not only beyond the browser but away from the cloud. This is because it can be distributed directly to and on edge devices with relatively low data-to-transfer and computing overhead.


Tracking device technology: A double-edged sword for CISOs

Clearly, the logistics side of the equation means vehicles and things can be tagged and tracked with relative ease. Not only will it help with locating and counting inventory, but the technology can also be used to ensure an alert occurs when those things which are supposed to stay within a specific geographic footprint leave that footprint. Then there is the negative side of the equation, on which employees might use the corporate tracking capability for nefarious purposes or bring their own tracking devices into the corporate environment. But don’t stop with the employee. What of the vendor or the competition? How might they wish to use these tracking devices to garner a bit of competitive intelligence? Tracking the movements of gear or people might be prudent in a specific circumstance — visitors to a corporate building, for example. A badge outfitted with the technology can be monitored to ensure visitors stay within the areas to which they are granted access and, if escorts are required, an escort tag can be issued to provide confirmation that their corporate escort is within proximity.


US Official Reproaches Industry for Bad Cybersecurity

Easterly specifically called out Google's August 2022 debut of Android 13, which was the first Android release in which a majority of the new code added to the release was in a memory-safe language. Easterly said there wasn't a single memory safety vulnerability discovered in the Rust code added to Android 13. Mozilla released the first stable version of Rust in 2015 and currently has a project to integrate Rust into its Firefox web browser. Amazon Web Services has begun to build critical services in Rust, which Easterly said has resulted in both security benefits as well as time and cost savings for the public cloud behemoth. Making memory-safe languages ubiquitous within universities will serve as a building block to companies migrating their key libraries to memory-safe languages, Easterly said. This effort hinges on the technology industry containing, and eventually rolling back, the prevalence of C and C++ in key systems. C and C++ are still written and taught due to the belief that migrating away from them would harm performance.


A key post-quantum algorithm may be vulnerable to side-channel attacks

Quantum computers have the potential to crack the cryptographic algorithms in use today, which is why “post-quantum” cryptographic algorithms are designed to be so strong that they can survive huge leaps in computing power. A team in Sweden, however, says it’s possible to attack some of the new algorithms with other methods. Researchers at KTH Royal Institute of Technology say they found a vulnerability in a specific implementation of CRYSTALS-Kyber — a “quantum safe” algorithm that the U.S. National Institute of Standards and Technology has selected as part of its potential standards for future cryptographic systems. According to the Swedish team, CRYSTALS-Kyber is vulnerable to side-channel attacks, which use information leaked by a computer system to gain unauthorized access or extract sensitive information. Instead of trying to guess a secret key, a side-channel technique analyzes data such as small variations in power consumption or electromagnetic radiation to reconstruct what the machine is doing and find clues that would enable access.
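The KTH work itself analyzed power traces of a specific Kyber implementation; as a much simpler illustration of the general side-channel principle, this hypothetical Python sketch (all names and data invented) shows how an early-exit comparison leaks information through how much work it does, with an operation count standing in for timing or power:

```python
def leaky_compare(secret, guess):
    """Early-exit byte comparison. The number of loop iterations (a
    stand-in for execution time) reveals how many leading bytes of the
    guess were correct, letting an attacker recover a secret byte by byte."""
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps
    return len(secret) == len(guess), steps

secret = b"kyber"
_, steps_bad = leaky_compare(secret, b"zzzzz")    # wrong at the first byte
_, steps_close = leaky_compare(secret, b"kybez")  # four leading bytes correct
print(steps_bad, steps_close)  # prints "1 5": the gap is the leak
```

Real implementations defend against this class of leak with constant-time operations, for example Python's `hmac.compare_digest`, which does the same amount of work regardless of where the inputs differ.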


How to achieve and shore up cyber resilience in a recession

With cybercriminals waiting in the wings, whether it’s a false economy to make cuts in cybersecurity investments is a growing concern. However, investing in expensive security tools will be ineffective if organizations neglect putting the right foundational security practices in place. When it comes to elevating organizational resilience, CIOs don’t need to choose between savings and safety. By reviewing processes, revisiting the basics, making the most of existing resources, and focusing on internal training, organizations can increase their security and digital resilience. Selectively deploying cybersecurity tools and product kits can then complement these good practices in a highly cost-effective way. In a downturn, it pays to reset cybersecurity priorities and review how and where finite resources can best be deployed. Unfortunately, all too often organizations conflate good security practices with good security purchases, in the misbegotten belief that, somehow, it’s possible to “buy security”.


Companies can’t stop using open source

Freely downloadable code has never been truly free (as in cost). The bits might be free, but there’s a cost to manage those bits. Developers always cost more than the code they write or manage. This may be one reason that when enterprises were asked what they most value in “open source leadership,” they responded with “makes it easy to deploy my preferred open source software in the cloud.” Companies increasingly want the benefits of open source without the expense of managing it themselves. ... Despite these problems and despite open source costs, even those who think open source is more expensive than proprietary alternatives say its benefits outweigh those costs. Chesbrough, when conducting the survey for the Linux Foundation, asked about this seemingly counterintuitive finding. “If you think [open source is] more expensive, why are you still using it?” he asked one respondent. Their response? “The code is available.” Meaning, “If we were to construct the code ourselves, that would take some amount of time. ...”


Do you have the courage of your convictions?

A courageous leader also has a healthy appreciation for the fact that sticking your neck out carries the risk of being wrong or failing. Many CEOs and senior leaders are looking to promote managers who have failed and can show they have learned from the experience. They want leaders who take big swings and, if they stumble, figure out what went wrong. But still, we’re all too prone to put up facades of invincibility and perfection, polishing resumes that show a smooth trajectory and consistent record of success. In job interviews, candidates are unwilling to acknowledge any failures or weaknesses beyond the predictable non-answers of “I work too hard” or “I care too much.” “People who don’t make bad decisions are indecisive and risk-averse,” said David Kenny, who was CEO of the Weather Company when I interviewed him years ago (he now runs market research firm Nielsen). “I love hiring people who’ve failed. We’ve got some great people here with some real flameouts.”



Quote for the day:

"When you accept a leadership role, you take on extra responsibility for your actions toward others." -- Kelley Armstrong

Daily Tech Digest - March 05, 2023

Transforming transformation

Transformation has been a way of extracting value rather than re-invention. Financial services companies are particularly guilty of this. For example, in banking, digital has been a way of reducing costs by moving the “business of banking” into the hands of the end customer – which is why we all now do things ourselves that the bank used to do for us. This focus on cost reduction has meant that processes have been optimised for the digital age at the expense of true innovation. The days of extracting value are almost over for the financial services industry. There are not many places left to reduce costs. So, they must become value creators, which means taking a leaf out of the digital giants’ book and finding ways of identifying and solving problems. ... But, according to Paul Staples, who was, until recently, head of embedded banking for HSBC, success will not be determined by technology but by the proposition, approach, and processes that the banks wrap around it. Pain points and value must be identified up front, forming the basis of what gets delivered.


Five Megatrends Impacting Banking Forever

The first megatrend impacting banking is the democratization of data and insights. More than ever, data is being collected everywhere, and it is the lifeblood of any financial institution. The democratization of data and insights refers to the process of making data and insights accessible to a wider audience, including both employees and customers. ... The explosion of hyper-personalization is driven by the use of significantly larger amounts of data, such as browsing and purchase history, interests and preferences, demographics and even survey information. With advanced technologies that include facial recognition, augmented reality and conversational AI, it is now possible to also offer customers highly personalized experiences that cater to their unique delivery preferences – in near real-time. ... Traditionally, banks and credit unions have viewed their relationship with consumers as a series of transactions. However, in recent years, there has been an increasing focus on providing a seamless and integrated engagement opportunity that can result in a more stable and long-term relationship. 


Understanding the Role of DLT in Healthcare

Finding actual healthcare circumstances where this DLT technology could be useful and relevant is crucial. Instead of implementing a solution without first identifying an issue to answer, organizations must take into account any current requirements or challenges that the technology may help address. Organizations employing this technology must be aware of and receptive to the new organizational paradigms that go along with these solutions. Recognizing the paradigm shift to decentralized, distributed solutions is essential to evaluating this technology. ... In shared ledgers, whose validity and consistency are maintained by nodes using a variety of processes, including consensus mechanisms, protecting the secrecy of data entails ensuring that only authorized access to that data is granted. Institutions are employing a multi-layered strategy for blockchain in healthcare, using private blockchains where all of the linked healthcare organizations are well-known and trusted.


Control the Future of Data with AI and Information Governance

“The average company manages hundreds of terabytes of data. For that data to prove an asset rather than a liability, it must be located, classified, cleansed, and monitored. With so much data entering the organization so quickly from so many disparate sources, conducting those data tasks manually is not feasible.” “For organizations to make accurate data-driven decisions, decision makers need clean, reliable data. By the same token, AI-powered analysis will only prove useful if based on complete and accurate data sets. That requires visibility into all relevant data. And it requires exhaustive checks for errors, duplicates, and outdated information.” “An important aspect of information governance includes data security. Privacy regulations, for example, require that organizations take all reasonable measures to keep confidential data safe from unauthorized access. This includes ensuring against inappropriate sharing and applying encryption to sensitive information.”


BI solution architecture in the Center of Excellence

Designing a robust BI platform is somewhat like building a bridge; a bridge that connects transformed and enriched source data to data consumers. The design of such a complex structure requires an engineering mindset, though it can be one of the most creative and rewarding IT architectures you could design. In a large organization, a BI solution architecture can consist of: Data sources; Data ingestion; Big data / data preparation; Data warehouse; BI semantic models; and Reports. At Microsoft, from the outset we adopted a systems-like approach by investing in framework development. Technical and business process frameworks increase the reuse of design and logic and provide a consistent outcome. They also offer flexibility in architecture leveraging many technologies, and they streamline and reduce engineering overhead via repeatable processes. We learned that well-designed frameworks increase visibility into data lineage, impact analysis, business logic maintenance, managing taxonomy, and streamlining governance. 


When finops costs you more in the end

Don’t overspend on finops governance. The same can be said for finops governance, which controls who can allocate what resources and for what purposes. In many instances, the cost of the finops governance tools exceeds any savings from nagging cloud users into using fewer cloud services. You saved 10%, but the governance systems, including human time, cost way more than that. Also, your users are more annoyed as they are denied access to services they feel they need, so you have a morale hit as well. Be careful with reserved instances. Another thing to watch out for is mismanaging reserved instances. Reserved instances are a way to save money by committing to using a certain number of resources for a set period. But if you’re not optimizing your use of them, you may end up spending more than you need to. Again, the cure is worse than the disease. You’ve decided that using reserved instances, say purchasing cloud storage services ahead of time at a discount, will save you 20% each year. However, you have little control over demand, and if you end up underusing the reserved instances, you still must pay for resources that you didn’t need.
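The reserved-instance break-even arithmetic is easy to sketch. The rates, discount, and utilization figures below are invented for illustration, not real cloud pricing:

```python
def reserved_vs_on_demand(on_demand_rate, discount, hours_committed, hours_used):
    """Compare the cost of a reserved commitment against paying on demand
    for only the hours actually used. Returns (reserved, on_demand) costs,
    rounded to cents. All inputs are illustrative, not real pricing."""
    reserved_cost = round(on_demand_rate * (1 - discount) * hours_committed, 2)
    on_demand_cost = round(on_demand_rate * hours_used, 2)
    return reserved_cost, on_demand_cost

# A 20% discount on a full-year (8,760-hour) commitment only pays off if
# you actually use more than 80% of it. Here demand came in at 6,000 hours.
reserved, on_demand = reserved_vs_on_demand(1.00, 0.20, 8760, 6000)
print(reserved, on_demand)  # prints "7008.0 6000.0": underuse made reserving costlier
```

The general rule falls out of the formula: a reservation breaks even at (1 - discount) × committed hours of real usage, so a 20% discount needs 80% utilization just to match on-demand pricing.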


Core Wars Shows the Battle WebAssembly Needs to Win

So the basics are that you have two or more competing programs, running in a virtual space and trying to corrupt each other with code. In summary:

- The assembler-like language is called Redcode.
- Redcode is run by a program called MARS.
- The competitor programs are called “warriors” and are written in Redcode, managed by MARS.
- The basic unit is not a byte, but an instruction line.
- MARS executes one instruction at a time, alternately for each “warrior” program.
- The core (the memory of the simulated computer), or perhaps the “battlefield”, is a continuous wrapping loop of instruction lines, initially empty except for the competing programs, which are set apart. Code is run and data stored directly on these lines.
- Each Redcode instruction contains three parts: the operation itself (OpCode), the source address, and the destination address.

... While in modern chips code moves through parallel threads in mysterious ways, the Core War setup is still pretty much the basics of how a computer works. However code is written, we know it ends up as a set of machine code instructions.
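The mechanics described above are simple enough to sketch in a few lines. This toy MARS-like loop is a hypothetical, heavily simplified model (one warrior, no addressing modes, a tiny core) running the classic one-instruction "Imp" warrior, `MOV 0 1`, which copies itself into the next cell of a wrapping core:

```python
CORE_SIZE = 8

# Each cell is one instruction line: (opcode, source, destination),
# with addresses relative to the current cell. "DAT" cells are data.
core = [("DAT", 0, 0)] * CORE_SIZE
core[0] = ("MOV", 0, 1)  # the Imp: copy myself one cell forward

def step(core, pc):
    """Execute one instruction; return the next program counter,
    wrapping around the circular core, or None if the warrior dies."""
    op, src, dst = core[pc]
    if op == "DAT":
        return None  # executing a data cell kills the warrior
    if op == "MOV":  # copy the source cell onto the destination cell
        core[(pc + dst) % len(core)] = core[(pc + src) % len(core)]
    return (pc + 1) % len(core)

pc = 0
for _ in range(CORE_SIZE):
    pc = step(core, pc)

# After one lap, the Imp has overwritten every cell with a copy of itself.
assert all(cell == ("MOV", 0, 1) for cell in core)
```

A real MARS alternates steps between two or more warriors and supports many more opcodes and addressing modes, but the wrapping core, the instruction-line memory unit, and the three-part instruction are all visible even in this sketch.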


Data Fear Looms As India Embraces ChatGPT

Considering the vast amounts of data that OpenAI has amassed without permission—enough that there is a chance that ChatGPT will be trained on blog posts, product reviews, articles and more—its privacy policy raises legitimate concerns. The IP address of visitors, their browser’s type and settings, and the information about how visitors interact with the websites—such as the kind of content they engage with, the features they use, and the actions they take—are all collected by OpenAI in accordance with its privacy policy. Additionally, it compiles information on the user’s website and time-based browsing patterns. OpenAI also states that it may share users’ personal information with unspecified third parties without informing them to meet its business objectives. The lack of clear definitions for terms such as ‘business operation needs’ and ‘certain services and functions’ in the company’s policies creates ambiguity regarding the extent and reasoning for data sharing. To add to the concerns, OpenAI’s privacy policy also states that the user’s personal information may be used for internal or third-party research and could potentially be published or made publicly available.


Booking.com's OAuth Implementation Allows Full Account Takeover

While researchers only divulged how they used OAuth to compromise Booking.com in the report, they discovered other sites with risk from improperly applying the authentication protocol, Balmas tells Dark Reading. "We have observed several other instances of OAuth flaws on popular websites and Web services," he says. "The implications of each issue vary and depends on the bug itself. In our cases, we are talking about full account takeovers across them all. And there are surely many more that are yet to be discovered." OAuth gives site owners an easy way to streamline the user login process, reducing friction in what is otherwise a "long and frustrating" experience, Balmas says. However, though it seems simple, implementing the technology successfully and securely is actually very complicated in terms of proper technical implementation, and a single small wrong move can have a huge security impact, he says. "To put it in other words — it is very easy to put a working social login functionality on a website, but it is very hard to do it correctly," Balmas tells Dark Reading.


More automation, not just additional tech talent, is what is needed to stay ahead of cybersecurity risks

Just over three-quarters of CISOs believe that their limited bandwidth and lack of resources has led to important security initiatives falling by the wayside, and nearly 80% claimed they have received complaints from board members, colleagues or employees that security tasks are not being handled effectively. ... Stress is also having an impact on hiring. 83% of the CISOs surveyed admitted they have had to compromise on the staff they hire to fill gaps left by employees who have quit their job. “I’ve never tried harder in my career to keep people than I have in the past few years,” said Rader. “It’s so key to hang onto good talent because without those people you’re always going to be stuck focusing on operations instead of strategy.” But there are solutions — and it’s not just finding more talent, says George Tubin, director of product marketing at Cynet. He said CISOs want more automated tools to manage repetitive tasks, better training, and the ability to outsource some of their work.



Quote for the day:

"No great manager or leader ever fell from heaven, its learned not inherited." -- Tom Northup