Daily Tech Digest - March 08, 2023

How AI can help find new employees

AI-based recruitment platforms can find “more diverse talent pools, and [offer] a more accurate approach to qualifying candidates by matching skills rather than on a job title match or other signal,” said Forrester Principal Analyst Betsy Summers. Some of the use cases for talent acquisition platforms are efficiency-oriented, since they’re used for interview scheduling, managing the candidate application process, assisting recruiters with follow-ups, and managing the applicant pipeline. Other platforms focus on bias mitigation, such as adjusting language in job descriptions and candidate communications to be more inclusive. Still others include remote video capabilities that automate early interviews. ... Chatbots are typically employed by recruitment platforms to engage job seekers and ask them about their interests and skills; the bots can then present candidates with the open positions for which they’re most qualified to apply.


The EU digital strategy: The impact of data privacy on global business

First, companies may need to assess the impact of the EU digital strategy on their business and business model, identifying where changes are required and where additional care is needed with respect to current processes. This applies specifically to the four acts concerning data governance, digital services, AI, and data. Second, companies may need to investigate how the acts could apply within their organization, including possible access to markets that competitors previously dominated through their access to end-user data. Finally, as the EU digital strategy continues to evolve, organizations may be able to collaborate further with governing bodies on the interpretation of the regulations. In the case of AI specifically, several companies may find it very challenging to fit their current model to the new guidance. ... Additionally, companies should revisit their current processes for data collection and AI.


5 best practices for scaling AI in the enterprise

One of the most important challenges of implementing AI is defining the business problem the enterprise is trying to solve. As the saying goes, don’t end up with an answer that’s looking for a question. Simply deploying new forms of technology isn’t the right approach. Next, examine the issues and determine if AI is the best way to tackle the problem. There are other digital technologies well adapted to simple problems. To help ensure success, define the business issue clearly and determine what course to take at the outset — some may not need AI. In automation, the end-to-end process is disaggregated and divided into smaller parts. Each part is then digitized, and the parts are then reaggregated into the value chain. ... So, AI-based transformation is as much about designing a new operating model, cross-skilling the workforce and integrating it into upstream and downstream processes as it is about neural nets and model management. It’s important to note that AI in the enterprise is 20% about technology and 80% about people, processes and data.


Data Privacy: A Public Policy Challenge

In today’s world, improved computational capabilities have enabled businesses and public and private organizations to better structure their data in the form of huge databases and leverage analytics to generate business intelligence and contribute to value creation. With these computational and analytical capabilities come increasing avenues to develop profiles of people’s behavior: their purchasing, spending and consumption habits, their genetic profiles, their travel history, medical history, and so on. While these capabilities add value to society, they also carry the risk of intruding on individuals’ privacy. Unfortunately, the discourse around personal data centers only on protecting it from leakage or preventing breaches. The primary objective of safeguarding personal data, however, is to ensure that it is not processed in ways that create a more inequitable society or bring about unfair outcomes. The amount of discrete data available today allows us to bring more nuance and innovation into public policy, helping to iron out imbalances within society.


Why Database Administrators Are Rising in Prominence

“Currently, there’s a very disjointed relationship between DBAs and the business problem they are solving for customers,” Neiweem says. He points out that DBAs are often the last touch point for customers, but this is changing as business and marketing leaders glean deeper insights from customer data and look to achieve personalization at scale. “It’s no longer effective to go through this disconnected channel to get answers about customer data,” he says. “DBAs are now moving into a consulting role where they can take data, analyze and action it, enabling marketing and other internal teams to build stronger relationships with customers through those data insights.” Arun Chandrasekaran, product manager for ManageEngine, adds that DBAs are often the first link in the chain of acquiring IT tools. “While the decision-makers decide on what to buy, DBAs can influence their decision,” he says. “Since the responsibility of managing the data warehouse falls on DBAs, they work with the stakeholders to understand the business requirements.”


Designing For Data Flow

Put simply, the bottlenecks in designs are being defined by the type and volume of data, and the speed at which it needs to be processed. “SoCs are getting bigger and more complex, fitting everything in the actual chip,” he said. “So data exchange, which used to happen at a system level, is now happening within the IC. This means efficient circuit design for data transfer is required to achieve the overall expected performance. The data flow design at the logic level is quite abstract. In the past, the chips were smaller and mostly driven by specific functionality, so there were only a few stages required to plan for data flow. With bigger chips, this has changed, and more effort is needed to understand the data sampling and placement of the appropriate functional modules next to each other, to achieve optimal data flow.” Data integrity also is becoming a challenge. In addition to crosstalk and various types of noise, which are prevalent at advanced nodes, there are a variety of aging effects that can appear over longer lifetimes, thermal mismatch between increasingly heterogeneous components, and latent defects that can become real defects as the amount of processing required on a chip or in a package increases.


Interacting with Machines through IoT and AI: A Revolution in Home and Workplace Technology

The seamless integration of IoT and AI has completely revolutionized the way we interact with machines, offering novel and innovative solutions for homes and workplaces alike. With the aid of cutting-edge technologies like machine learning, deep learning algorithms, gesture control, and wearable devices, the potential of IoT and AI to create value across a range of applications is colossal. As these technologies continue to advance, the potential for further groundbreaking advancements in the future is undeniable. It is my sincere hope that this blog has been informative and engaging, offering you valuable insight into the current and future state of these two fields. That said, it is also important to remain cognizant of the potential ethical and privacy concerns that come with their widespread adoption. As with any rapidly evolving technology, it is essential that we consider and address these concerns to ensure that the development and application of these technologies align with our societal values and principles.


How Skyscanner Embedded a Team Metrics Culture for Continuous Improvement

Changing culture was probably the part that we put the most effort into, because we recognised that any missteps could be misinterpreted as us peering over folks’ shoulders, or even worse, as using metrics intended to signal improvement opportunities to measure individual performance. Either of those would be strongly against the way that we work at Skyscanner, and would have stopped the project in its tracks, maybe even causing irreversible damage to the project’s reputation. To that end we created a plan that focused on developing a deep understanding of the intent with our engineering managers before introducing the tool. This plan focused on a bottom-up rollout approach, based on small cohorts of squad leads. Each cohort was designed to be around 6 or 7 leads, with a mix of people from different tribes, different offices, and different levels of experience, covering all our squads. The smaller groups would increase accountability, because it’s harder to disengage in a small group, and also create a safe place where people can share their ideas, learnings, and concerns.

Managing data is the key to better citizen services

As important as cyber-resilience is, there are also other issues associated with the unchecked growth of a data estate. Top of mind for public sector CIOs is keeping an eye on the purse strings and being accountable to taxpayers for the money they spend. Massive amounts of data cost a similarly large amount of funding to maintain, says Mr Hatchuel. “You have to put data somewhere and managing the cost is very challenging for CIOs.” A modern data protection and management solution allows CIOs to manage their data estates in a cost-effective way, as well as keep them secure. The solution should also protect and manage external data sources which, in a contemporary environment, could include a new public cloud service. “Data can pop up anywhere, so you need a holistic solution able to look across the whole data estate and manage and understand different data sources. CIOs are also advised to understand the shared responsibility model of most public cloud services; for the majority of providers, that burden falls to the customer.”


4 ways for CIOs to strike a balance between operation and innovation

“Striking the right balance between innovation and operations is essential for any organization to succeed and stay competitive. Innovation is about exploring new ideas, embracing change, and striving for progress. On the other hand, operations consist of taking those ideas and making them a reality, efficiently utilizing resources, and ensuring that all the necessary steps are in place to deliver the desired result. ... “The load that IT organizations carry with snowballing technical debt has a direct and tangible drain on IT innovation. While it’s obvious on the surface, every dollar spent on technical debt is a dollar that IT cannot invest in innovation and transformation. Maintaining, securing, and operating critical but aging applications and infrastructure is a boat anchor that drags down innovation and must be addressed continuously by IT leadership, architects, and CTOs before it blows up in a disaster. Start by eliminating the 'kick the can down the road' strategy of ignoring technical debt; instead, prioritize actual application modernization investments that can break the pattern and open up innovation cycles as part of a continuous modernization strategy.”



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham

Daily Tech Digest - March 07, 2023

The four qualities of resilient teams

The first quality is team confidence, or the belief that the team can handle just about anything that comes its way. Team confidence, the authors note, isn’t really the sum of a lot of individual confidence, for swollen egos don’t benefit the team. The goal is collective and mutual confidence. And not too much, because overconfidence undermines success. “Moderately high confidence offers a healthy balance of confidence and caution,” the authors write. To build team confidence, managers are urged to make goals and processes clear, empower the team by encouraging members to participate in decision-making, cheer successes, and provide useful feedback during struggles. The second quality is having the foresight to create a teamwork road map, or a plan that “reflects the extent to which all team members know what their own roles and responsibilities are, and the extent to which they agree on what all other team members’ roles and responsibilities are. Team members may even know how to perform one another’s roles so that at any point, one person can step in for another.”


What is zero trust? A model for more effective security

Removing that implicit trust takes time, according to experts, and most organizations are far from accomplishing that objective. “It’s a journey of change,” says Chalan Aras, a member of the Cyber & Strategic Risk practice at Deloitte Risk & Financial Advisory. Zero trust is also a collection of policies, procedures, and technologies. Organizations that want to implement an effective zero-trust strategy must have an accurate inventory of assets, including data. They must have an accurate inventory of users and devices as well as a robust data classification program with privileged access management in place, Valenzuela says. Other components include comprehensive identity management, application-level access control, and micro-segmentation. Another important element is user and entity behavior analytics, which uses automation and intelligence to distinguish normal (and therefore accepted and trusted) user and entity behaviors from anomalous behaviors that shouldn’t be trusted and should therefore be denied access.
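The baseline idea behind behavior analytics can be sketched as a simple statistical check: learn what is normal for a user, then flag activity that deviates sharply from it. Real UEBA products use far richer models; the threshold, data, and function names below are purely illustrative.

```python
# Minimal sketch of a UEBA-style baseline check: flag activity that
# deviates sharply from a user's history. Illustrative only.
from statistics import mean, stdev

def is_anomalous(history, observed, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    away from the historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# A user who normally downloads ~10 files a day suddenly pulls 500.
baseline = [8, 12, 9, 11, 10, 9, 13, 10]
print(is_anomalous(baseline, 500))  # far outside baseline: deny, or step up authentication
print(is_anomalous(baseline, 11))   # within baseline: trusted
```

In a zero-trust deployment, a signal like this would feed a policy engine that denies access or demands step-up authentication rather than acting on its own.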


Will ChatGPT make low-code obsolete?

Unlike technologies of the past, which typically automate or speed up a repetitive process (manufacturing, logistics, transportation etc.), ChatGPT does something entirely new – enhancing the creativity of the user. While we can debate whether this is true creativity or not, ultimately if the outcome is the same, is it not still creative? Think of how ChatGPT could help a software developer crack a particularly challenging piece of code, or how it could optimise existing code. It can also help developers be more creative by reducing the repetitive/boring part of their jobs so they can focus on the parts they love, leaving them more time to flex their creative muscles. Going beyond the developer use case, ChatGPT has the ability to democratise coding itself by providing a way for non-coders to develop applications themselves – in much the same way that low-code promises, but on steroids. This “democratisation of IT” promises a new wave of innovation by enabling organisations to create new processes without the need to engage with IT at all. ChatGPT could achieve the same outcome as low-code but in half the time.


SBOMs should be a security staple in the software supply chain

NIST's standard includes multiple elements, from the software component used and its supplier to version numbers and access to the component's repository. Version levels must be evaluated against release levels, potential threats found, and risks determined. "Unwinding large applications, from open-source operating systems, to in-house developed applications, to third-party 'shrink-wrapped' stacks is fraught with contextual challenges, inventory methods, and manual verification, all of which are prone to error," Masserini writes. While the process of identifying and reporting issues is codified, "it does not address the issue of manually maintaining such an inventory and consistently validating its contents," he says. Automation must be put into every step of the process, from generating and publishing SBOMs to ingesting them – and organizations must then bring vulnerability remediation into their current app security programs without having to adopt new workflows, Lambert says. There are other considerations. SBOMs deliver a lot of information, but organizations need to decide how they're going to use it.
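To show why a machine-readable inventory matters, here is a minimal sketch of an SBOM lookup. The record shape is loosely modeled on the minimum elements mentioned above (supplier, component name, version); it is not a formal SPDX or CycloneDX schema, and the component data is invented for illustration.

```python
# Toy SBOM: a list of component records with the minimum fields
# (supplier, name, version). Illustrative shape, not a real schema.
sbom = [
    {"name": "openssl",    "supplier": "OpenSSL Project", "version": "1.1.1t"},
    {"name": "log4j-core", "supplier": "Apache",          "version": "2.14.1"},
]

def affected_components(sbom, name, bad_versions):
    """Return SBOM entries matching a disclosed vulnerable component."""
    return [c for c in sbom
            if c["name"] == name and c["version"] in bad_versions]

# When a vulnerability in, say, log4j-core is disclosed, the inventory
# answers "are we exposed?" without manual spelunking through builds.
hits = affected_components(sbom, "log4j-core", {"2.14.1", "2.15.0"})
print(hits)
```

A query like this is what automated SBOM ingestion enables at scale: the moment a CVE names a component and version range, every affected application and its owner can be found programmatically.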


Digital twins could be the key to successful automation

The primary advantage of the digital twin is that it evolves as automation evolves. As a result, if any changes are applied to the automation in the RPA platform, those same changes are reflected in the twin, ideally in real-time or at least near real-time. Operational metrics are also accessible and displayed where the twin resides so that it can be monitored and continuously improved. Beyond changes and operational metrics, a digital twin in automation enables an organization to compile accurate documentation and detailed audit trails for the entire automation estate and maintain it in a single, centralized repository. Doing so not only addresses the problem of misplaced or lost process design documents, but also solves one of the major pain points of automating: An inability to visualize and understand how automations have changed over time. Maintaining digital twins for all automations in a central location — regardless of the RPA platform in which they are designed, deployed and orchestrated — vastly improves automation standardization, governance and visibility.


Stepping up: Becoming a high-potential CEO candidate

Stanford University economics professor Nicholas Bloom, who’s spent his career researching CEOs, describes the reality he’s observed: “It’s frankly a horrible job. I wouldn’t want it. Being a CEO of a big company is a hundred-hour-a-week job. It consumes your life. It consumes your weekend. It’s super stressful. Sure, there’re enormous perks, but it’s also all encompassing.” Reinforcing the point, Microsoft CEO Satya Nadella describes the job as “24/7.” His late mentor Bill Campbell, who had been a CEO three times and was an influential coach to several technology industry leaders, would often remind him, “No one has ever lived to outwork the job. It will always be bigger than you.” Many CEOs secretly agree that the best job in the world is actually the one right below the CEO. There the spotlight burns less brightly, yet the opportunities to make a difference are great, as are the rewards. Without the right motivations and expectations, not only will you find that the effort required to be CEO outweighs any personal gain, but you will also be less likely to succeed. As CCHMC’s Fisher puts it, 


Enterprise IT moves forward — cautiously — with generative AI

The technology also needs human oversight. “Systems like ChatGPT have no idea what they’re authoring, and they’re very good at convincing you that what they’re saying is accurate, even when it’s not,” says Cenkl. There’s no AI assurance — no attribution or reference information letting you know how it came up with its response, and no AI explainability, indicating why something was written the way it was. “You don’t know what the basis is or what parts of the training set are influencing the model,” he says. “What you get is purely an analysis based on an existing data set, so you have opportunities for not just bias but factual errors.” Wittmaier is bullish on the technology, but still not sold on customer-facing deployment of what he sees as an early-stage technology. At this point, he says, there’s short-term potential in the office suite environment, customer contact chatbots, help desk features, and documentation in general, but in terms of safety-related areas in the transportation company’s business, he adds, the answer is a clear no.


Career paths for devops engineers and SREs

Solving business challenges today requires multidisciplinary teams and integrated solutions. If you enjoy problem-solving, shift to other organizational roles and develop broader perspectives on what’s required to deliver end-to-end solutions. One opportunity for developers is to shift to data science and machine learning roles. Tiago Cardoso, a product manager at Hyland, says, “Career paths for developers have become much more flexible and individualized, and I’m seeing a lot of new developer roles appearing, such as data engineers, ML engineers, ML architects, and MLops engineers.” He adds, “Common career paths for those in devops and SREs include positions such as systems administrator, infrastructure engineer, and cloud architect.” ... Architect roles and responsibilities vary considerably from one organization to another, but successful architects are more than just technical experts. Architects scale their expertise by helping agile teams learn, apply, and create self-organizing standards around using technology to deliver business solutions.


Zero-Day Vulnerabilities Can Teach Us About Supply-Chain Security

Writing, testing and validating whether a fix will resolve a vulnerability can take trial and error. By definition, zero-days don’t have a patch, meaning it can often be days before developers can even begin the process of patching their applications. Furthermore, software needs to go through QA cycles before a true fix is identified. This is why security controls are necessary for blocking malicious activity before it reaches runtime. Additionally, developers must analyze their software development life cycle (SDLC) and augment it before a vulnerability is announced. An asset or application inventory should be a mandatory component so that when a vulnerability is disclosed, organizations know who owns the application and who to contact. ... Securing third-party or commercial-off-the-shelf software is one of the biggest cybersecurity challenges facing every organization. Unfortunately, most vendors don’t disclose the components and libraries that make up their software, making it difficult for organizations to know whether a vulnerability affects them once it’s disclosed.


Five Factors That Turn CISOs into Firefighters

When a CISO is referred to as a “firefighter,” it typically means that they are spending a significant amount of time responding to security incidents and putting out fires rather than being able to focus on proactively preventing those incidents from occurring in the first place. Here are some reasons why a CISO may become a firefighter:

1. Lack of resources: A CISO may not have sufficient resources (e.g., budget, staff, or technology) to implement a comprehensive cybersecurity program effectively. This can lead to security incidents that require a reactive response.

2. Insufficient risk management: A CISO may not have a robust risk management program in place, which means that security incidents are more likely to occur. Without proper risk management, a CISO may be caught off guard by security incidents and have to react quickly to mitigate the damage.

3. Lack of security awareness: Employees may not be properly trained on cybersecurity best practices, which can lead to security incidents such as phishing attacks or malware infections. ...



Quote for the day:

"Different times need different types of leadership." -- Park Geun-hye

Daily Tech Digest - March 06, 2023

Computer says no. Will fairness survive in the AI age?

A number of risks fall outside of these existing laws and regulations, so while lawmakers might wrestle with the far-reaching ramifications of AI, other industry bodies and other groups are driving the adoption of guidance, standards and frameworks - some of which might become standard industry practice even without the enforcement of law. One illustration is the US National Institute of Standards and Technology's AI risk management framework, which is intended "for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems". ... Bias is one particularly important element. The algorithms at the centre of AI decision making may not be human, but they can still imbibe the prejudices which colour human judgement. Thankfully, policymakers in the EU appear to be alive to this risk. The bloc's draft EU Artificial Intelligence Act addressed a range of issues on algorithmic bias, arguing technology should be developed to avoid repeating “historical patterns of discrimination” against minority groups, particularly in contexts such as recruitment and finance.


12 programming mistakes to avoid

Some say that a good programmer is someone who looks both ways when crossing a one-way street. But, like playing it fast and loose, this tendency can backfire. Software that is overly buttoned up can slow your operations to a crawl. Checking a few null pointers may not make much difference, but some code is just a little too nervous, checking that the doors are locked again and again so that sleep never comes. ... Scaling well is a challenge and it is often a mistake to overlook the ways that scalability might affect how the system runs. Sometimes, it’s best to consider these problems during the early stages of planning, when thinking is more abstract. Some features, like comparing each data entry to every other, are inherently quadratic, which means run times can balloon as the data grows. Dialing back on what you promise can make a big difference. Thinking about how much theory to apply to a problem is a bit of a meta-problem because complexity often increases exponentially. Sometimes the best solution is careful iteration with plenty of time for load testing.
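The quadratic trap above can be made concrete with a duplicate check, a generic sketch rather than an example from the article: comparing every entry to every other takes roughly n²/2 comparisons, while a one-pass check with a set does the same job in linear time.

```python
def has_duplicates_quadratic(items):
    """Compare every entry to every other: ~n^2/2 comparisons."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    """Track values already seen in a set: one pass, O(n)."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Both give identical answers, but at a million entries the first does on the order of half a trillion comparisons while the second does a million set lookups, which is exactly the kind of scaling difference load testing exposes too late.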


EV Charging Infrastructure Offers an Electric Cyberattack Opportunity

The risks are not just theoretical: A year ago, after Russia invaded Ukraine, hacktivists compromised charging stations near Moscow to disable them and display their support for Ukraine and their contempt for Russian President Vladimir Putin. ... In many ways, EV charging infrastructure represents a perfect storm of technologies. The devices are connected via mobile applications and carry the same risks as other IoT devices, but they're also set to become a critical part of the transportation network in the United States, like other operational technology (OT). And because EV charging stations must be connected to public networks, ensuring that their communications are encrypted will be critical to maintaining the security of the devices, says Dragos' Tonkin. "Hacktivists will always be looking for poorly secured devices on public networks, it's important that the owners of EV put in place controls to ensure they are not easy targets," he says. "The crown jewels of the operators of EV chargers have to be their central platforms, the chargers themselves intrinsically trust the instructions pushed down from the center."


Can WebAssembly Solve Serverless’s Problems?

Wasm computing structure is designed in such a way that it has “shifted” the potential of the serverless landscape, Butcher said. This is due, he said, to WebAssembly’s nearly instant startup times, small binary sizes, and platform and architectural neutrality, as Wasm binaries can be executed with a fraction of the resources required to run today’s serverless infrastructure. “Contrasted with heavyweight [virtual machines] and middleweight containers, I like to think of Wasm as the lightweight cloud compute platform,” he noted. “Developers package up only the bare essentials: a Wasm binary and perhaps a few supporting files. And the Wasm runtime takes care of the rest.” An immediate benefit of relying on Wasm’s runtime for serverless is lower latency, especially when extending Wasm’s reach not only beyond the browser but away from the cloud. This is because it can be distributed directly to and on edge devices with relatively low data-to-transfer and computing overhead.


Tracking device technology: A double-edged sword for CISOs

Clearly, the logistics side of the equation means vehicles and things can be tagged and tracked with relative ease. Not only will it help with locating and counting inventory, but the technology can also be used to ensure an alert occurs when those things which are supposed to stay within a specific geographic footprint leave that footprint. Then there is the negative side of the equation, on which employees might use the corporate tracking capability for nefarious purposes or bring their own tracking devices into the corporate environment. But don’t stop with the employee. What of the vendor or the competition? How might they wish to use these tracking devices to garner a bit of competitive intelligence? Tracking the movements of gear or people might be prudent in a specific circumstance — visitors to a corporate building, for example. A badge outfitted with the technology can be monitored to ensure visitors stay within the areas to which they are granted access and, if escorts are required, an escort tag can be issued to provide confirmation that their corporate escort is within proximity.


US Official Reproaches Industry for Bad Cybersecurity

Easterly specifically called out Google's August 2022 debut of Android 13, which was the first Android release in which a majority of the new code added to the release was in a memory-safe language. Easterly said there wasn't a single memory safety vulnerability discovered in the Rust code added to Android 13. Mozilla created Rust, releasing version 1.0 in 2015, and currently has a project to integrate Rust into its Firefox web browser. Amazon Web Services has begun to build critical services in Rust, which Easterly said has resulted in both security benefits as well as time and cost savings for the public cloud behemoth. Making memory-safe languages ubiquitous within universities will serve as a building block to companies migrating their key libraries to memory-safe languages, Easterly said. This effort hinges on the technology industry containing, and eventually rolling back, the prevalence of C and C++ in key systems. C and C++ are still written and taught due to the belief that migrating away from them would harm performance.


A key post-quantum algorithm may be vulnerable to side-channel attacks

Quantum computers have the potential to crack the cryptographic algorithms in use today, which is why “post-quantum” cryptographic algorithms are designed to be so strong that they can survive huge leaps in computing power. A team in Sweden, however, says it’s possible to attack some of the new algorithms with other methods. Researchers at the KTH Royal Institute of Technology say they found a vulnerability in a specific implementation of CRYSTALS-Kyber — a “quantum safe” algorithm that the U.S. National Institute of Standards and Technology has selected as part of its potential standards for future cryptographic systems. According to the Swedish team, CRYSTALS-Kyber is vulnerable to side-channel attacks, which use information leaked by a computer system to gain unauthorized access or extract sensitive information. Instead of trying to guess a secret key, a side-channel technique analyzes data such as small variations in power consumption or electromagnetic radiation to reconstruct what the machine is doing and find clues that would enable access.
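The Kyber result concerns physical leakage from a specific masked implementation, but the general class of side channel is easy to illustrate with a much simpler case: a comparison that returns at the first mismatching byte leaks timing information about how much of a secret an attacker has guessed correctly. This is an illustrative sketch of that general idea, not the attack described above.

```python
import hmac

def leaky_equals(a: bytes, b: bytes) -> bool:
    """Returns at the first mismatching byte, so running time leaks
    how many leading bytes of the secret were guessed correctly."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equals(a: bytes, b: bytes) -> bool:
    """hmac.compare_digest examines every byte regardless of where
    the first mismatch occurs, removing the timing signal."""
    return hmac.compare_digest(a, b)
```

Countermeasures against the power and electromagnetic channels cited in the Kyber work follow the same principle: make the observable behavior of the computation independent of the secret values it handles.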


How to achieve and shore up cyber resilience in a recession

With cybercriminals waiting in the wings, concern is growing about whether cutting cybersecurity investments is a false economy. However, investing in expensive security tools will be ineffective if organizations neglect putting the right foundational security practices in place. When it comes to elevating organizational resilience, CIOs don’t need to choose between savings and safety. By reviewing processes, revisiting the basics, making the most of existing resources, and focusing on internal training, organizations can increase their security and digital resilience. Selectively deploying cybersecurity tools and product kits can then complement these good practices in a highly cost-effective way. In a downturn, it pays to reset cybersecurity priorities and review how and where finite resources can best be deployed. Unfortunately, all too often organizations conflate good security practices with good security purchases, in the misbegotten belief that, somehow, it’s possible to “buy security”.


Companies can’t stop using open source

Freely downloadable code has never been truly free (as in cost). The bits might be free, but there’s a cost to manage those bits. Developers always cost more than the code they write or manage. This may be one reason that when enterprises were asked what they most value in “open source leadership,” they responded with “makes it easy to deploy my preferred open source software in the cloud.” Companies increasingly want the benefits of open source without the expense of managing it themselves. ... Despite these problems and despite open source costs, even those who think open source is more expensive than proprietary alternatives say its benefits outweigh those costs. Chesbrough, when conducting the survey for the Linux Foundation, asked about this seemingly counterintuitive finding. “If you think [open source is] more expensive, why are you still using it?” he asked one respondent. Their response? “The code is available.” Meaning, “If we were to construct the code ourselves, that would take some amount of time. ...”


Do you have the courage of your convictions?

A courageous leader also has a healthy appreciation for the fact that sticking your neck out carries the risk of being wrong or failing. Many CEOs and senior leaders are looking to promote managers who have failed and can show they have learned from the experience. They want leaders who take big swings and, if they stumble, figure out what went wrong. But still, we’re all too prone to put up facades of invincibility and perfection, polishing resumes that show a smooth trajectory and consistent record of success. In job interviews, candidates are unwilling to acknowledge any failures or weaknesses beyond the predictable non-answers of “I work too hard” or “I care too much.” “People who don’t make bad decisions are indecisive and risk-averse,” said David Kenny, who was CEO of the Weather Company when I interviewed him years ago (he now runs market research firm Nielsen). “I love hiring people who’ve failed. We’ve got some great people here with some real flameouts.”



Quote for the day:

"When you accept a leadership role, you take on extra responsibility for your actions toward others." -- Kelley Armstrong

Daily Tech Digest - March 05, 2023

Transforming transformation

Transformation has been a way of extracting value rather than re-invention. Financial services companies are particularly guilty of this. For example, in banking, digital has been a way of reducing costs by moving the “business of banking” into the hands of the end customer – which is why we all do things ourselves that the bank used to do for us. This focus on cost reduction has meant that processes have been optimised for the digital age at the expense of true innovation. The days of extracting value are almost over for the financial services industry. There are not many places left to reduce costs. So, they must become value creators, which means taking a leaf out of the digital giants’ book and finding ways of identifying and solving problems. ... But, according to Paul Staples, who was, until recently, head of embedded banking for HSBC, success will not be determined by technology but by the proposition, approach, and processes that the banks wrap around it. Pain points and value must be identified up front, forming the basis of what gets delivered.


Five Megatrends Impacting Banking Forever

The first megatrend impacting banking is the democratization of data and insights. More than ever, data is being collected everywhere, and it is the lifeblood of any financial institution. The democratization of data and insights refers to the process of making data and insights accessible to a wider audience, including both employees and customers. ... The explosion of hyper-personalization is driven by the use of significantly larger amounts of data, such as browsing and purchase history, interests and preferences, demographics and even survey information. With advanced technologies that include facial recognition, augmented reality and conversational AI, it is now possible to also offer customers highly personalized experiences that cater to their unique delivery preferences – in near real-time. ... Traditionally, banks and credit unions have viewed their relationship with consumers as a series of transactions. However, in recent years, there has been an increasing focus on providing a seamless and integrated engagement opportunity that can result in a more stable and long-term relationship. 


Understanding the Role of DLT in Healthcare

Finding actual healthcare circumstances where this DLT technology could be useful and relevant is crucial. Instead of implementing a solution without first identifying an issue to answer, organizations must take into account any current requirements or challenges that the technology may help address. Organizations employing this technology must be aware of and receptive to the new organizational paradigms that go along with these solutions. Recognizing the paradigm shift to decentralized, distributed solutions is essential to evaluating this technology. ... In shared ledgers, whose validity and consistency are maintained by nodes through a variety of processes, including consensus mechanisms, protecting the secrecy of data entails ensuring that only authorized access is granted. Institutions are employing a multi-layered strategy for blockchain in healthcare, using private blockchains where all of the linked healthcare organizations are well-known and trusted.


Control the Future of Data with AI and Information Governance

“The average company manages hundreds of terabytes of data. For that data to prove an asset rather than a liability, it must be located, classified, cleansed, and monitored. With so much data entering the organization so quickly from so many disparate sources, conducting those data tasks manually is not feasible.” “For organizations to make accurate data-driven decisions, decision makers need clean, reliable data. By the same token, AI-powered analysis will only prove useful if based on complete and accurate data sets. That requires visibility into all relevant data. And it requires exhaustive checks for errors, duplicates, and outdated information.” “An important aspect of information governance includes data security. Privacy regulations, for example, require that organizations take all reasonable measures to keep confidential data safe from unauthorized access. This includes ensuring against inappropriate sharing and applying encryption to sensitive information.”
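The locate-classify-cleanse-monitor tasks described above can be sketched in miniature. A hedged illustration, with invented records, of the duplicate and staleness checks that are infeasible to run manually at terabyte scale:

```python
from datetime import date

# Hypothetical customer records; 'updated' marks the last verification date.
records = [
    {"id": 1, "email": "a@example.com", "updated": date(2023, 1, 5)},
    {"id": 2, "email": "a@example.com", "updated": date(2021, 6, 1)},  # duplicate email
    {"id": 3, "email": "b@example.com", "updated": date(2020, 2, 9)},  # outdated
]

def find_duplicates(rows, key="email"):
    """Flag any row whose key value has already been seen."""
    seen, dupes = set(), []
    for r in rows:
        if r[key] in seen:
            dupes.append(r)
        seen.add(r[key])
    return dupes

def find_outdated(rows, cutoff=date(2022, 1, 1)):
    """Flag rows not verified since the cutoff date."""
    return [r for r in rows if r["updated"] < cutoff]

assert [r["id"] for r in find_duplicates(records)] == [2]
assert [r["id"] for r in find_outdated(records)] == [2, 3]
```

At real scale these checks run continuously inside data-quality pipelines rather than as one-off scripts, but the logic is the same.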


BI solution architecture in the Center of Excellence

Designing a robust BI platform is somewhat like building a bridge; a bridge that connects transformed and enriched source data to data consumers. The design of such a complex structure requires an engineering mindset, though it can be one of the most creative and rewarding IT architectures you could design. In a large organization, a BI solution architecture can consist of: Data sources; Data ingestion; Big data / data preparation; Data warehouse; BI semantic models; and Reports. At Microsoft, from the outset we adopted a systems-like approach by investing in framework development. Technical and business process frameworks increase the reuse of design and logic and provide a consistent outcome. They also offer flexibility in architecture leveraging many technologies, and they streamline and reduce engineering overhead via repeatable processes. We learned that well-designed frameworks increase visibility into data lineage, impact analysis, business logic maintenance, managing taxonomy, and streamlining governance. 


When finops costs you more in the end

Don’t overspend on finops governance. The same can be said for finops governance, which controls who can allocate what resources and for what purposes. In many instances, the cost of the finops governance tools exceeds any savings from nagging cloud users into using fewer cloud services. You saved 10%, but the governance systems, including human time, cost way more than that. Also, your users are more annoyed as they are denied access to services they feel they need, so you have a morale hit as well. Be careful with reserved instances. Another thing to watch out for is mismanaging reserved instances. Reserved instances are a way to save money by committing to using a certain number of resources for a set period. But if you’re not optimizing your use of them, you may end up spending more than you need to. Again, the cure is worse than the disease. You’ve decided that using reserved instances, say purchasing cloud storage services ahead of time at a discount, will save you 20% each year. However, you have little control over demand, and if you end up underusing the reserved instances, you still must pay for resources that you didn’t need.
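The reserved-instance arithmetic above can be made concrete with a back-of-envelope model (the rates below are illustrative, not any provider's actual pricing): the reservation only pays off when utilization exceeds the discount ratio.

```python
# Illustrative rates: a reservation discounts the hourly rate by 20%, but
# commits you to paying for every committed hour, used or not.
ON_DEMAND_RATE = 1.00      # cost per instance-hour, pay as you go
RESERVED_RATE = 0.80       # same hour under a 20% reservation discount
HOURS_COMMITTED = 8760     # one year of committed hours

def yearly_cost(utilization: float) -> tuple[float, float]:
    """Return (reserved_cost, on_demand_cost) for a given utilization,
    i.e. the fraction of committed hours actually needed."""
    reserved = RESERVED_RATE * HOURS_COMMITTED             # paid regardless
    on_demand = ON_DEMAND_RATE * HOURS_COMMITTED * utilization
    return reserved, on_demand

# Break-even sits at utilization == discount ratio (0.80 here):
assert yearly_cost(0.75)[0] > yearly_cost(0.75)[1]   # under-used: reservation loses
assert yearly_cost(0.90)[0] < yearly_cost(0.90)[1]   # well-used: reservation wins
```

In other words, a 20% discount is only a saving if you reliably use more than 80% of what you committed to; below that line, the "cure" costs more than the disease.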


Core Wars Shows the Battle WebAssembly Needs to Win

So the basics are that you have two or more competing programs, running in a virtual space and trying to corrupt each other with code. In summary: the assembler-like language is called Redcode. Redcode is run by a program called MARS. The competitor programs are called “warriors” and are written in Redcode, managed by MARS. The basic unit is not a byte, but an instruction line. MARS executes one instruction at a time, alternating between the “warrior” programs. The core (the memory of the simulated computer), or perhaps “battlefield”, is a continuous wrapping loop of instruction lines, initially empty except for the competing programs, which are set apart. Code is run and data stored directly on these lines. Each Redcode instruction contains three parts: the operation itself (OpCode), the source address and the destination address. ... While in modern chips, code moves through parallel threads in mysterious ways, the Core War setup is still pretty much the basics of how a computer works. However code is written, we know it ends up as a set of machine code instructions.
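The model described above is small enough to sketch. This toy simulator supports only a MOV opcode with relative addressing (real Redcode has many more opcodes and addressing modes), but it is enough to run the classic "Imp" warrior, MOV 0 1, which endlessly copies itself one line ahead around the wrapping core:

```python
# Minimal sketch of the MARS model: a wrapping core of instruction lines,
# executed one instruction per step. Addresses are relative to the current
# instruction; DAT is the "empty" instruction that kills a warrior.
CORE_SIZE = 20
core = [("DAT", 0, 0)] * CORE_SIZE

def load(program, at):
    """Place a warrior's instructions into the core at a starting address."""
    for i, ins in enumerate(program):
        core[(at + i) % CORE_SIZE] = ins

def step(pc):
    """Execute one instruction; return the next pc, or None if the warrior dies."""
    op, a, b = core[pc]
    if op == "MOV":
        # Copy the line at pc+a to the line at pc+b (wrapping), then advance.
        core[(pc + b) % CORE_SIZE] = core[(pc + a) % CORE_SIZE]
        return (pc + 1) % CORE_SIZE
    return None                              # executing DAT is fatal

# The classic "Imp": MOV 0, 1 copies itself one line ahead, forever.
load([("MOV", 0, 1)], at=0)
pc = 0
for _ in range(CORE_SIZE * 2):
    pc = step(pc)
    assert pc is not None                    # the Imp never dies
assert core == [("MOV", 0, 1)] * CORE_SIZE   # it has paved the entire core
```

A second warrior would simply be loaded elsewhere in the core, with MARS alternating one step per warrior; whichever program tricks the other into executing a DAT line wins.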


Data Fear Looms As India Embraces ChatGPT

Considering the vast amounts of data that OpenAI has amassed without permission—enough that there is a chance that ChatGPT will be trained on blog posts, product reviews, articles and more—its privacy policy raises legitimate concerns. The IP address of visitors, their browser’s type and settings, and the information about how visitors interact with the websites—such as the kind of content they engage with, the features they use, and the actions they take—are all collected by OpenAI in accordance with its privacy policy. Additionally, it compiles information on the user’s website and time-based browsing patterns. OpenAI also states that it may share users’ personal information with unspecified third parties without informing them to meet its business objectives. The lack of clear definitions for terms such as ‘business operation needs’ and ‘certain services and functions’ in the company’s policies creates ambiguity regarding the extent and reasoning for data sharing. To add to the concerns, OpenAI’s privacy policy also states that the user’s personal information may be used for internal or third-party research and could potentially be published or made publicly available.


Booking.com's OAuth Implementation Allows Full Account Takeover

While researchers only divulged how they used OAuth to compromise Booking.com in the report, they discovered other sites at risk from improperly applying the authentication protocol, Balmas tells Dark Reading. "We have observed several other instances of OAuth flaws on popular websites and Web services," he says. "The implications of each issue vary and depend on the bug itself. In our cases, we are talking about full account takeovers across them all. And there are surely many more that are yet to be discovered." OAuth gives site owners an easy way to streamline the user login process, reducing friction in what is otherwise a "long and frustrating" experience, Balmas says. However, though it seems simple, implementing the technology successfully and securely is actually very complicated in terms of proper technical implementation, and a single small wrong move can have a huge security impact, he says. "To put it in other words — it is very easy to put a working social login functionality on a website, but it is very hard to do it correctly," Balmas tells Dark Reading.
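One common example of the "single small wrong move" Balmas describes is loose redirect_uri validation (a hypothetical sketch, not the specific Booking.com flaw; the URLs and checks are invented for illustration): a prefix match lets an attacker steer the authorization code somewhere else, while exact matching against a registered allowlist does not.

```python
# Hypothetical sketch of a classic OAuth pitfall: validating redirect_uri
# by prefix instead of exact match against the registered allowlist.
REGISTERED = {"https://app.example.com/oauth/callback"}

def loose_check(uri: str) -> bool:
    """Vulnerable: any URI that merely starts with a registered one passes."""
    return any(uri.startswith(r) for r in REGISTERED)

def strict_check(uri: str) -> bool:
    """Safer default: the URI must exactly equal a registered entry."""
    return uri in REGISTERED

# Path traversal appended to a legitimate prefix:
evil = "https://app.example.com/oauth/callback/../../redirect?to=evil.com"
assert loose_check(evil)        # slips through the loose check
assert not strict_check(evil)   # rejected by exact matching
assert strict_check("https://app.example.com/oauth/callback")
```

Real implementations also have to get token validation, state parameters, and scope handling right, which is why the protocol is easy to deploy but hard to deploy correctly.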


More automation, not just additional tech talent, is what is needed to stay ahead of cybersecurity risks

Just over three-quarters of CISOs believe that their limited bandwidth and lack of resources has led to important security initiatives falling by the wayside, and nearly 80% claimed they have received complaints from board members, colleagues or employees that security tasks are not being handled effectively. ... Stress is also having an impact on hiring. 83% of the CISOs surveyed admitted they have had to compromise on the staff they hire to fill gaps left by employees who have quit their job. “I’ve never tried harder in my career to keep people than I have in the past few years,” said Rader. “It’s so key to hang onto good talent because without those people you’re always going to be stuck focusing on operations instead of strategy.” But there are solutions — and it’s not just finding more talent, says George Tubin, director of product marketing at Cynet. He said CISOs want more automated tools to manage repetitive tasks, better training, and the ability to outsource some of their work.



Quote for the day:

"No great manager or leader ever fell from heaven, its learned not inherited." -- Tom Northup

Daily Tech Digest - March 04, 2023

How security leaders can effectively manage Gen Z staff

Gen Z will look for jobs in organizations that share their values. Gen Z is likely to remind their superiors of such values if they find themselves being asked to do something that goes against them. Be ready for situations like this and make sure the company’s values aren’t just a marketing creation. Another way to look at this is to proactively go after individuals whose values resonate with the company’s. All working generations have experienced pros and cons of work from home, the office or a mix of both. This is unlikely to be a Gen Z-only preference, but younger generations may be more prone to think, “Why do I need to go to a specific location to do a job I can perform from anywhere?” ... The two aspects here are peer training and paid training. Gen Z is eager to learn but also to move forward. While this may not suit every role, it can be a positive in cybersecurity, where attackers and attacks are always evolving fast.


LastPass Hack Highlights Importance of Applicable Acceptable Use Policies

While LastPass has made it clear that several course corrective activities have taken place post-incident to prevent similar hacks, the argument that this type of exploitation was preventable persists. Specifically, one control that should be scrutinized is the LastPass Acceptable Use Policy (AUP). These important documents provide employees with a set of rules applied by the company that explain the methods through which employees may access or use corporate networks, devices or data. Many of these policies require that corporate data may only be accessed and managed on corporate systems. This specific provision allows the organization to control both physical and logical access to important information, such as business operations and client data. As the business world has morphed with a more distributed and remote configuration, corporate AUPs require additional scrutiny as well. Specifically, companies should take a hard look as to the applicability of the Bring Your Own Device (BYOD) mentality and consider the security implications that could emerge through mismanagement.


3 Steps to Unlock the Power of Behavioral Data

In practice, a strong data culture is a “decision culture” according to McKinsey research, which is a culture where an organization can accelerate the application of advanced analytics, powering improved business performance and decision-making. Furthermore, Forrester found that organizations that use data to derive insights for decision-making are almost three times more likely to achieve double-digit growth. So why is it such a challenge to create this type of culture? ... Data creation is the process of creating high-quality, contextual behavioral data to power AI and other advanced data applications. Instead of working with the data exhaust generated by SaaS applications and black box analytics tools, data creation allows a choice of metrics that best reflect the organization’s needs. The great thing about this is that it saves data teams quite a lot of time as it continuously delivers a highly trusted real-time stream of data that evolves with the business.


5 steps for building a digital transformation-ready enterprise architecture

In a hyper-competitive and increasingly cloud-based business environment, it's clear that digital-first is the only way forward. Of course, the transformation could have been smoother. For most businesses, it's happened in fits and starts—a program written here, a piece of software implemented there. The end result, in many cases, has been a patchwork: out-of-date applications, redundant or overly complicated programs, and generally clogged internal processes. Think of a big, tangled pile of extension cords—it's unclear what goes where, what can be safely removed, what needs replacing, and so forth. These clogged processes present a serious problem for businesses engaged in digital transformation. They can slow down a company's inner workings and, over time, lead to lost productivity and revenue. That's why it's imperative for companies to clear away the cobwebs and redesign their internal processes for maximal productivity—to, in other words, embark on an organization-wide program of enterprise architecture.


Crucial role of data protection in the battle against ransomware

Central to any cybersecurity strategy being developed is the role of the IT infrastructure teams and storage administrators in the secure storage and protection of data. However, formulating and implementing a strategy alone will not be enough; organisations must rigorously test their resiliency plans. It is essential to identify the cracks in the defences as a proactive strategy, even as learnings are applied reactively. A key reason behind the rise of ransomware attacks is that the attack surface, the systems that are accessible and could be compromised, is massive and constantly growing. The larger the enterprise, the larger the attack surface, as the vulnerable endpoints and pieces of software being used are many. Any breach that occurs must thus be quickly contained, and its impact minimised as far as possible. Merely adding more storage to a data centre is not the solution. Enterprises will need to incorporate immutable storage and encryption technology and optimize the recovery process.
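The immutability principle mentioned above can be illustrated with a toy append-only log (an invented sketch of the verification idea, not a production design; real deployments rely on WORM or object-lock storage at the platform level): each backup entry commits to the previous one, so silent tampering is detectable on recovery.

```python
import hashlib

def _digest(prev_hash: str, data: bytes) -> str:
    """Hash an entry together with the hash of the entry before it."""
    return hashlib.sha256(prev_hash.encode() + data).hexdigest()

class AppendOnlyLog:
    """Toy hash-chained log: entries can be appended, never rewritten
    without breaking the chain."""
    def __init__(self):
        self.entries = []          # list of (data, hash) pairs

    def append(self, data: bytes):
        prev = self.entries[-1][1] if self.entries else ""
        self.entries.append((data, _digest(prev, data)))

    def verify(self) -> bool:
        prev = ""
        for data, h in self.entries:
            if _digest(prev, data) != h:
                return False       # chain broken: entry was altered
            prev = h
        return True

log = AppendOnlyLog()
log.append(b"backup-2023-03-01")
log.append(b"backup-2023-03-02")
assert log.verify()

# Simulate ransomware silently rewriting an old backup record:
log.entries[0] = (b"tampered", log.entries[0][1])
assert not log.verify()            # the tampering is detected
```

Detection is only half the story; the untampered copies still have to live on storage the attacker cannot rewrite, which is what immutable storage provides.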


US Cybersecurity Strategy Shifts Liability Issues to Vendors

The administration envisions that it will roll out more stringent software development practices, work with vendors to implement them in the software development process and then work with industry and Congress to establish a liability shield for companies that adopt those practices. That process will take well over a year, the senior administration official predicts. Veracode founder and Chief Technology Officer Chris Wysopal says drawing from the NIST Secure Software Development Framework for the safe harbor law is more aspirational than realistic since the liability shield must consider a company's maturity and security posture. Kalember says no current institutions are well positioned to assess compliance with NIST or assign blame after a security incident. "We need a few different levels of what building safe software means," Wysopal tells ISMG. "The SSDF is a good starting point, but I think it does need to be more practical and more basic."


The government cannot win at cyber warfare without the private sector

The Council on Foreign Relations (CFR) recommends “a program of deepening public-private collaboration between the Defense Department (DOD) and the defense industry” to stop these hacks. It suggests this because it recognizes that the private sector is who owns and operates the networks and systems that the problem countries target, while the public sector “lacks the same picture of the threat environment.” The CFR is right. Private-sector actors regularly face hackings and understand that their survival in the marketplace hinges upon addressing them swiftly and efficiently. The government, by contrast, doesn’t recognize many of these threats until they occur. The government has the ability to contract with anyone, so why wouldn’t it choose to work more closely with private companies? Consider the case of the Office of Personnel Management, which faced that headline-making 2015 hacking from China. 


Five Factors For Planning A Data Governance Strategy

Effective data governance begins with having a comprehensive record of the data within the organization; however, according to one survey, for two-thirds of organizations, at least half of their data is dark. This dark data represents untapped insights that are not being leveraged by the organization. Also concerning is the fact that this same absence of quality data and availability can result in an estimated 29% of an employee’s time being spent on non-value-added tasks. ... Data democratization can be shaped by AI-enabled governance policies that control access to the cataloged data. This self-service access to data affords a degree of autonomy for users to work with the data—and the insights it can provide—independently, regardless of their position within the organization. The impact of data democratization can be felt across an entire organization. Users are able to access data securely and work with data on their own without being occupied by tasks that produce no benefit to the organization. As a result, the IT department can be available to handle other important tasks. 
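The governed self-service pattern described above can be sketched in a few lines (a hypothetical illustration; the dataset names, roles, and policy table are all invented): access rules are declared once against the catalog and enforced uniformly, instead of being gated through ad-hoc IT requests.

```python
# Hypothetical policy table for a governed data catalog:
# dataset name -> set of roles allowed self-service access.
POLICIES = {
    "sales_metrics": {"analyst", "sales", "executive"},
    "payroll":       {"hr"},                    # sensitive: HR only
    "web_telemetry": {"analyst", "engineering"},
}

def can_access(role: str, dataset: str) -> bool:
    """Uncataloged datasets are denied by default (deny unless allowed)."""
    return role in POLICIES.get(dataset, set())

assert can_access("analyst", "sales_metrics")
assert not can_access("analyst", "payroll")     # blocked by policy
assert not can_access("analyst", "dark_data")   # not cataloged: no access
```

The default-deny behavior for uncataloged datasets is the point: dark data is not merely unused, it is ungovernable until it is brought into the catalog.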


The Move to Unsupervised Learning: Where We Are Today

In addition to the need for explainability, another significant challenge to the widespread adoption of deep learning is the increasing reliance on the need for labeled data, that is, adding labels to raw data such as text files and images to identify them and provide context that machine learning models can recognize and learn from. Supervised learning has made significant and impressive advances in recent years, demonstrating the ability to learn from massive amounts of labeled data. There is, however, a limit to how much AI can advance using supervised learning alone. In many real-world scenarios, the availability of large amounts of labeled data is a challenge — either due to a lack of resources or the inherent nature of the problem itself. Ensuring class balance in the labeled data presents another challenge in that it’s often the case that some classes make up a large proportion of the data, while other classes might not be adequately represented. Furthermore, ensuring the trustworthiness of labeled data can present another challenge. 
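The class-balance problem described above is easy to show concretely (the dataset and threshold below are invented for illustration): a simple audit over the labels surfaces classes that fall below a representation floor before any model is trained on them.

```python
from collections import Counter

# Toy labeled dataset: 'fraud' is badly under-represented, the class
# balance problem the passage describes.
labels = ["normal"] * 95 + ["fraud"] * 5

def imbalance_report(labels, threshold=0.10):
    """Return classes whose share of the data falls below the threshold."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: n / total for cls, n in counts.items()
            if n / total < threshold}

rare = imbalance_report(labels)
assert rare == {"fraud": 0.05}   # flagged: only 5% of examples
```

In practice such an audit feeds decisions like oversampling, class weighting, or targeted labeling, none of which can be made if no one measures the imbalance in the first place.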


The Biggest Enterprise Architecture Trends in 2023

Most Enterprise Architects endlessly tweak their systems to improve change delivery. As with all things in life, the changes aren't perfect the first time around, and adapting is essential. Each round of change, however small, ultimately improves the system. Many trends overlap and adapting way-of-working ties in with using the social aspects of the architecture described above. Organizations can track the history of change initiatives to see the applications, processes, and information impacted over time. Understanding how the change works gives leaders vital information to make decisions. By tracking people, teams, and departments, organizational and communication pathways become clear. Over time, the tracking shows patterns of where change occurs. When it’s clear where change is happening and failing, the patterns can guide the reorganization of teams. It can also help teams work as independently as possible, improve cross-team coordination, and aid prioritization.



Quote for the day:

"Leaders think and talk about the solutions. Followers think and talk about the problems." -- Brian Tracy

Daily Tech Digest - March 03, 2023

Irish Authorities Levy GDPR Fine in Centric Health Breach

DPC says that while Centric stated in its initial breach notification that 70,000 data subjects were affected by the breach, it only issued notifications to the 2,500 individuals whose data was irretrievably lost in the incident. Besides the inadequate breach communication to affected individuals, the fine levied against Centric also reflects a variety of other GDPR infringements, including "failure to implement technical and organizational measures appropriate to the level of risk" posed to personal and special category data on Centric's server. "The failure to implement the necessary safeguards in an effective manner at the appropriate time led to the possibility of patients' personal data being erroneously disclosed to unauthorized people," the report says. Centric, in a statement provided to Information Security Media Group, says that at the time of the cyberattack, it immediately informed the DPC and cooperated fully with the investigation. "We want to assure our patients that we take our responsibility to protect their data and ensure the security of our IT systems very seriously," Centric says. 


Gitpod flaw shows cloud-based development environments need security assessments

"Many questions remain unanswered with the adoption of cloud-based development environments: What happens if a cloud IDE workspace is infected with malware? What happens when access controls are insufficient and allow cross-user or even cross-organization access to workspaces? What happens when a rogue developer exfiltrates company intellectual property from a cloud-hosted machine outside the visibility of the organization's data loss prevention or endpoint security software?," the Snyk researchers said in their report, which is part of a larger project to investigate the security of CDEs. ... In fact, CDEs are in many ways a big improvement over traditional IDEs: They can eliminate the configuration drift that happens over time with developer workstations/laptops, they can eliminate the dependency collisions that occur when developers work on different projects, and can limit the window for attacks because CDE workspaces run as containers and can be short-lived.


Responsible AI: The research collaboration behind new open-source tools offered by Microsoft

Through its Responsible AI Toolbox, a collection of tools and functionalities designed to help practitioners maximize the benefits of AI systems while mitigating harms, and other efforts for responsible AI, Microsoft offers an alternative: a principled approach to AI development centered around targeted model improvement. Improving models through targeting methods aims to identify solutions tailored to the causes of specific failures. This is a critical part of a model improvement life cycle that not only includes the identification, diagnosis, and mitigation of failures but also the tracking, comparison, and validation of mitigation options. The approach supports practitioners in better addressing failures without introducing new ones or eroding other aspects of model performance. “With targeted model improvement, we’re trying to encourage a more systematic process for improving machine learning in research and practice,” says Besmira Nushi, a Microsoft Principal Researcher involved with the development of tools for supporting responsible AI.


Now Microsoft has a new AI model - Kosmos-1

The researchers also tested how Kosmos-1 performed in the zero-shot Raven IQ test. The results found a "large performance gap between the current model and the average level of adults", but also found that its accuracy showed potential for MLLMs to "perceive abstract conceptual patterns in a nonverbal context" by aligning perception with language models. The research into "web page question answering" is interesting given Microsoft's plan to use Transformer-based language models to make Bing a better rival to Google search. "Web page question answering aims at finding answers to questions from web pages. It requires the model to comprehend both the semantics and the structure of texts. The structure of the web page (such as tables, lists, and HTML layout) plays a key role in how the information is arranged and displayed. The task can help us evaluate our model's ability to understand the semantics and the structure of web pages," the researchers explain.


How AI can improve quality assurance: seven tips

One of the areas where AI is proving its worth for quality assurance is in the software development sector. AI seems particularly well-suited to regression testing. That approach requires checking to ensure previously tested versions of software keep working as expected following code modifications. Or, AI could help create new test cases. Some AI models can recognise or come up with scenarios without prior exposure to them. If you’re thinking about using AI for testing help, identify which processes typically take humans the longest or where errors happen most often. Then, assess whether AI might avoid some of those issues and speed up the steps testers typically go through when verifying all is well with new software. Also, keep in mind that using AI for software testing works best when you have a large data set. That’s why training your AI models thoroughly is so necessary, and not a step to take hastily. 
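The regression-testing idea described above is simply pinning previously verified behavior so that later code modifications that change it fail fast. A minimal sketch (the function and golden cases are invented for illustration; AI-assisted tools generate cases like these automatically):

```python
def apply_discount(price: float, percent: float) -> float:
    """The function under test: apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

# Golden cases captured from the last known-good release. If a future code
# change alters any of these outputs, the regression suite fails.
REGRESSION_CASES = [
    ((100.0, 10), 90.0),
    ((19.99, 0), 19.99),
    ((50.0, 100), 0.0),
]

for args, expected in REGRESSION_CASES:
    assert apply_discount(*args) == expected
```

Where AI helps is in generating and prioritising such cases, especially for inputs human testers would not think to try, rather than in replacing the assertion mechanism itself.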


Edge Computing Eats the Cloud?

Additionally, Sedoshkin says that smartphones are “more compact” than a set of GPUs and peripheral components, which make more sense in an R&D lab environment. He predicts this trend will continue to intensify. “Many real-world applications require the usage of a smartphone anyway, and these devices are capable of running pre-trained neural networks on edge. Smartphone manufacturers will continue increasing computational power and memory capacity on edge devices. However, R&D labs will use specialized hardware for training and testing AI/ML algorithms, and DIY enthusiasts will use specialized lightweight chipsets," Sedoshkin says. In short, there is little to stop the encroach of edge computing on the cloud’s lofty turf. There isn’t much friction to slow it down, either. “The future of edge computing is an evolving landscape; however, ‘ubiquitous’ is the best word that describes it because it will evolve to be all around us,” Tiwari says. And by ubiquitous, industry watchers say they literally mean everywhere.


4 tips to freshen up your IT resume in 2023

Every IT hiring manager looks for professionals who are passionate about their work. And what better way to show this than by discussing your passion projects? In your resume’s contact information section, add a link to any outside projects you’ve worked on over the years, casual or professional. Remember that these don’t need to be overly complex or high-tech – the point is to show you’re passionate about technology outside of work. Even if your contributions involve small edits or suggestions to other people’s code, include them on your resume. That said, your profiles must be up to date. If you haven’t updated a profile in years, don’t include it. Keeping your IT resume updated and relevant in 2023 is crucial for job seekers in the competitive technology industry. And while many IT professionals get job offers without an optimized resume, an exceptional resume might just be what stands between you and your top-choice company.


The role of human insight in AI-based cybersecurity

Traditional cybersecurity solutions, like secure email gateways (SEGs), rely on pre-defined rules and patterns to identify potential threats. However, these rules and patterns can become outdated quickly, leading to a high rate of false positives and false negatives. Sophisticated phishing attacks can also evade SEG systems by impersonating known trusted senders or taking over accounts. By using RLHF, the model can learn from human feedback and continuously adapt to new threats as they emerge. Enterprise security teams spend as much as 33% of their time dealing with phishing scams. Traditional cybersecurity solutions often rely on manual processes, which leads to delays in detecting and responding to potential threats. By combining AI and RLHF, teams can better identify potential threats, resulting in up to a 90% reduction in the amount of time needed to identify and react to phishing scams, while also significantly reducing the organization’s risk.
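The core idea of incorporating human feedback can be illustrated with a deliberately minimal sketch: a keyword-weighted phishing score whose weights are nudged up or down when an analyst confirms or rejects a flagged email. This is a toy approximation (all names and weights below are hypothetical), not actual RLHF with a learned reward model, but it shows the feedback loop the article describes: human judgments continuously reshape what the detector considers suspicious.

```python
# Hypothetical sketch: a keyword-weighted phishing score with an
# analyst-feedback loop. Terms and weights are illustrative only.

SUSPICIOUS_TERMS = {"urgent": 0.4, "verify": 0.3, "password": 0.5, "invoice": 0.2}

def phishing_score(subject: str, weights: dict) -> float:
    """Sum the weights of suspicious terms appearing in the subject, capped at 1.0."""
    text = subject.lower()
    return min(1.0, sum(w for term, w in weights.items() if term in text))

def apply_feedback(weights: dict, subject: str, analyst_says_phish: bool, lr: float = 0.1) -> dict:
    """Analyst confirms (raise weights) or rejects (lower weights) a flagged email."""
    direction = 1.0 if analyst_says_phish else -1.0
    text = subject.lower()
    for term in list(weights):
        if term in text:
            # Nudge each matched term's weight, keeping it within [0, 1].
            weights[term] = max(0.0, min(1.0, weights[term] + direction * lr))
    return weights

weights = dict(SUSPICIOUS_TERMS)
subject = "Please verify your invoice"
before = phishing_score(subject, weights)            # 0.3 + 0.2 = 0.5
weights = apply_feedback(weights, subject, analyst_says_phish=True)
after = phishing_score(subject, weights)             # 0.4 + 0.3 = 0.7
print(before, after)
```

A production system would replace the keyword table with a trained classifier and the weight nudge with a reward-model update, but the shape of the loop, score, human label, and model adjustment is the same.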


Biden's Cybersecurity Strategy Calls for Software Liability, Tighter Critical Infrastructure Security

The requirements will be performance based, adaptable to changing requirements, and focus on driving adoption of secure-by-design principles. "While voluntary approaches to critical infrastructure security have produced meaningful improvements, the lack of mandatory requirements has resulted in inadequate and inconsistent outcomes," the strategy document said. Regulation can also level the playing field in sectors where operators compete to underspend on security because there is no market incentive to invest in it. The strategy offers critical infrastructure operators that might lack the financial and technical resources to meet the new requirements potentially new avenues for securing those resources. Joshua Corman, former CISA chief strategist and current vice president of cyber safety at Claroty, says the Biden administration's choice to make critical infrastructure security a priority is an important one.


Interactive Microservices as an Alternative to Micro Front-Ends for Modularizing the UI Layer

Interactive microservices are based on a new type of web API that Qworum defines, the multi-phase web API. What differentiates these APIs from conventional REST or JSON-RPC web APIs is that endpoint calls may involve more than one request-response pair, also called a phase. ... Unbounded composability — Interactive microservices can call other endpoints and even themselves during their execution. The maximum depth of allowed nested calls is unbounded, and each call has the full-page UI at its disposal regardless of nesting depth. This is unlike micro front-ends, which typically cannot be nested beyond 1 or 2 levels at most, because the UI surface area allocated to each micro front-end becomes vanishingly small with increasing nesting depth. General applicability — Qworum services are more generally applicable for distributed applications than micro front-ends, as the latter are generally tied to a particular web application (ad hoc micro front-ends), front-end framework (React micro front-ends, Angular micro front-ends, etc.) or organisation.
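The multi-phase idea — one logical endpoint call spanning several request-response pairs, with the service holding per-call state until a final result is produced — can be sketched with a small in-memory example. This is a hypothetical illustration of the concept only; the class and method names below (`MultiPhaseCall`, `start_checkout`, `resume`) are invented for this sketch and do not reflect Qworum's actual protocol or API.

```python
# Hypothetical sketch of a multi-phase endpoint call: a checkout endpoint
# that needs two phases (shipping address, then payment method) before
# returning its final result. Per-call state lives on the service side.

import uuid

class MultiPhaseCall:
    """Server-side state for one in-flight endpoint call."""
    def __init__(self):
        self.call_id = str(uuid.uuid4())
        self.answers = {}

class ShoppingCartService:
    PHASES = ["shipping_address", "payment_method"]

    def __init__(self):
        self.calls = {}

    def start_checkout(self):
        """Phase 1: open a call and prompt the caller's UI for the first input."""
        call = MultiPhaseCall()
        self.calls[call.call_id] = call
        return {"call_id": call.call_id, "prompt": self.PHASES[0]}

    def resume(self, call_id, answer):
        """Record the caller's answer; either prompt for the next phase or finish."""
        call = self.calls[call_id]
        pending = [p for p in self.PHASES if p not in call.answers]
        call.answers[pending[0]] = answer
        remaining = [p for p in self.PHASES if p not in call.answers]
        if remaining:
            return {"call_id": call_id, "prompt": remaining[0]}
        return {"call_id": call_id, "result": call.answers}

svc = ShoppingCartService()
r1 = svc.start_checkout()                    # phase 1: service asks for an address
r2 = svc.resume(r1["call_id"], "1 Main St")  # phase 2: answer recorded, next prompt
r3 = svc.resume(r1["call_id"], "visa-1234")  # final phase: call returns its result
print(r3["result"])
```

Each `prompt` response is where, in an interactive microservice, the service would render a full-page UI to collect the answer; nesting follows naturally, since handling one prompt could itself start another multi-phase call.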



Quote for the day:

"When building a team, I always search first for people who love to win. If I can't find any of those, I look for people who hate to lose." -- H. Ross Perot