Daily Tech Digest - March 06, 2023

Computer says no. Will fairness survive in the AI age?

A number of risks fall outside of these existing laws and regulations, so while lawmakers might wrestle with the far-reaching ramifications of AI, industry bodies and other groups are driving the adoption of guidance, standards and frameworks - some of which might become standard industry practice even without the enforcement of law. One illustration is the US National Institute of Standards and Technology's AI risk management framework, which is intended "for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems". ... Bias is one particularly important element. The algorithms at the centre of AI decision making may not be human, but they can still absorb the prejudices which colour human judgement. Thankfully, policymakers in the EU appear to be alive to this risk. The bloc's draft Artificial Intelligence Act addresses a range of issues around algorithmic bias, arguing technology should be developed to avoid repeating “historical patterns of discrimination” against minority groups, particularly in contexts such as recruitment and finance.


12 programming mistakes to avoid

Some say that a good programmer is someone who looks both ways when crossing a one-way street. But, like playing it fast and loose, this tendency can backfire. Software that is overly buttoned up can slow your operations to a crawl. Checking a few null pointers may not make much difference, but some code is just a little too nervous, checking that the doors are locked again and again so that sleep never comes. ... Scaling well is a challenge and it is often a mistake to overlook the ways that scalability might affect how the system runs. Sometimes, it’s best to consider these problems during the early stages of planning, when thinking is more abstract. Some features, like comparing each data entry to another, are inherently quadratic, which means the cost grows with the square of the data size and can swamp any local optimizations. Dialing back on what you promise can make a big difference. Thinking about how much theory to apply to a problem is a bit of a meta-problem, because complexity compounds with each layer of sophistication. Sometimes the best solution is careful iteration with plenty of time for load testing.
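
To make the quadratic trap concrete, here is a minimal sketch in Python (the duplicate-detection task is an invented example): comparing each entry against every other entry does O(n²) work, while restating the problem with a set brings it back to roughly linear time.

```python
from itertools import combinations

def has_duplicates_quadratic(entries):
    # Compares every entry against every other: O(n^2) pairs.
    return any(a == b for a, b in combinations(entries, 2))

def has_duplicates_linear(entries):
    # A set membership test does the same job in roughly O(n).
    seen = set()
    for entry in entries:
        if entry in seen:
            return True
        seen.add(entry)
    return False
```

At 1,000 entries the first version makes about 500,000 comparisons; at 1,000,000 entries, about 500 billion, which is why this kind of cost is best caught at the planning stage.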


EV Charging Infrastructure Offers an Electric Cyberattack Opportunity

The risks are not just theoretical: A year ago, after Russia invaded Ukraine, hacktivists compromised charging stations near Moscow to disable them and display their support for Ukraine and their contempt for Russian President Vladimir Putin. ... In many ways, EV charging infrastructure represents a perfect storm of technologies. The devices are connected via mobile applications and carry the same risks as other IoT devices, but they're also set to become a critical part of the transportation network in the United States, like other operational technology (OT). And because EV charging stations must be connected to public networks, ensuring that their communications are encrypted will be critical to maintaining the security of the devices, says Dragos' Tonkin. "Hacktivists will always be looking for poorly secured devices on public networks; it's important that the owners of EV chargers put in place controls to ensure they are not easy targets," he says. "The crown jewels of the operators of EV chargers have to be their central platforms; the chargers themselves intrinsically trust the instructions pushed down from the center."


Can WebAssembly Solve Serverless’s Problems?

Wasm’s computing structure is designed in such a way that it has “shifted” the potential of the serverless landscape, Butcher said. This is due, he said, to WebAssembly’s nearly instant startup times, small binary sizes, and platform and architectural neutrality, as Wasm binaries can be executed with a fraction of the resources required to run today’s serverless infrastructure. “Contrasted with heavyweight [virtual machines] and middleweight containers, I like to think of Wasm as the lightweight cloud compute platform,” he noted. “Developers package up only the bare essentials: a Wasm binary and perhaps a few supporting files. And the Wasm runtime takes care of the rest.” An immediate benefit of relying on Wasm’s runtime for serverless is lower latency, especially when extending Wasm’s reach not only beyond the browser but away from the cloud. This is because it can be distributed directly to and on edge devices with relatively low data-transfer and computing overhead.
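
For a sense of how little a Wasm host needs, here is a hedged sketch that loads and runs a tiny module from Python using the wasmtime bindings (the module and its exported function are invented, and API details vary between wasmtime versions):

```python
from wasmtime import Store, Module, Instance

# A minimal module in WebAssembly text format: one exported function.
WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

store = Store()
module = Module(store.engine, WAT)       # compile
instance = Instance(store, module, [])   # instantiate; no imports needed
add = instance.exports(store)["add"]
print(add(store, 2, 3))  # -> 5
```

The whole deployable artifact here is a few hundred bytes plus the runtime, which is the contrast with VM images and container layers that Butcher is drawing.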


Tracking device technology: A double-edged sword for CISOs

Clearly, the logistics side of the equation means vehicles and things can be tagged and tracked with relative ease. Not only will it help with locating and counting inventory, but the technology can also be used to ensure an alert occurs when those things which are supposed to stay within a specific geographic footprint leave that footprint. Then there is the negative side of the equation, where employees might use the corporate tracking capability for nefarious purposes or bring their own tracking devices into the corporate environment. But don’t stop with the employee. What of the vendor or the competition? How might they wish to use these tracking devices to garner a bit of competitive intelligence? Tracking the movements of gear or people might be prudent in a specific circumstance — visitors to a corporate building, for example. A badge outfitted with the technology can be monitored to ensure visitors stay within the areas to which they are granted access and, if escorts are required, an escort tag can be issued to provide confirmation that their corporate escort is within proximity.


US Official Reproaches Industry for Bad Cybersecurity

Easterly specifically called out Google's August 2022 debut of Android 13, which was the first Android release in which a majority of the new code added to the release was in a memory-safe language. Easterly said there wasn't a single memory safety vulnerability discovered in the Rust code added to Android 13. Mozilla stewarded the creation of Rust, releasing version 1.0 in 2015, and has been integrating Rust into its Firefox web browser for years. Amazon Web Services has begun to build critical services in Rust, which Easterly said has resulted in both security benefits and time and cost savings for the public cloud behemoth. Making memory-safe languages ubiquitous within universities will serve as a building block to companies migrating their key libraries to memory-safe languages, Easterly said. This effort hinges on the technology industry containing, and eventually rolling back, the prevalence of C and C++ in key systems. C and C++ are still written and taught due to the belief that migrating away from them would harm performance.


A key post-quantum algorithm may be vulnerable to side-channel attacks

Quantum computers have the potential to crack the cryptographic algorithms in use today, which is why “post-quantum” cryptographic algorithms are designed to be so strong that they can survive huge leaps in computing power. A team in Sweden, however, says it’s possible to attack some of the new algorithms with other methods. Researchers at the KTH Royal Institute of Technology say they found a vulnerability in a specific implementation of CRYSTALS-Kyber — a “quantum safe” algorithm that the U.S. National Institute of Standards and Technology has selected as part of its potential standards for future cryptographic systems. According to the Swedish team, CRYSTALS-Kyber is vulnerable to side-channel attacks, which use information leaked by a computer system to gain unauthorized access or extract sensitive information. Instead of trying to guess a secret key, a side-channel technique analyzes data such as small variations in power consumption or electromagnetic radiation to reconstruct what the machine is doing and find clues that would enable access.
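
The Kyber finding concerns one specific implementation, but the general idea of a side channel is easy to demonstrate with a deliberately naive toy (this is a classroom illustration, not the researchers' method): a secret comparison that returns early leaks, through timing alone, how many leading bytes of a guess are correct.

```python
import time

SECRET = b"k3y!"  # invented secret

def naive_check(guess: bytes) -> bool:
    # Returns as soon as a byte mismatches, so running time
    # depends on how many leading bytes of the guess are correct.
    for g, s in zip(guess, SECRET):
        if g != s:
            return False
        time.sleep(0.001)  # stand-in for per-byte work a real implementation does
    return len(guess) == len(SECRET)

def recover() -> bytes:
    found = b""
    while len(found) < len(SECRET):
        timings = {}
        for b in range(256):
            pad = b"\x00" * (len(SECRET) - len(found) - 1)
            t0 = time.perf_counter()
            naive_check(found + bytes([b]) + pad)
            timings[b] = time.perf_counter() - t0
        # The slowest candidate got one byte further into the secret.
        found += bytes([max(timings, key=timings.get)])
    return found

print(recover())  # b'k3y!'
```

Real attacks trade timing for power traces or electromagnetic emissions and add statistics to cope with noise, but the structure, recovering a secret one small piece at a time from a physical signal, is the same.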


How to achieve and shore up cyber resilience in a recession

With cybercriminals waiting in the wings, whether it’s a false economy to make cuts in cybersecurity investments is a growing concern. However, investing in expensive security tools will be ineffective if organizations neglect putting the right foundational security practices in place. When it comes to elevating organizational resilience, CIOs don’t need to choose between savings and safety. By reviewing processes, revisiting the basics, making the most of existing resources, and focusing on internal training, organizations can increase their security and digital resilience. Selectively deploying cybersecurity tools and product kits can then complement these good practices in a highly cost-effective way. In a downturn, it pays to reset cybersecurity priorities and review how and where finite resources can best be deployed. Unfortunately, all too often organizations conflate good security practices with good security purchases, in the misbegotten belief that, somehow, it’s possible to “buy security”.


Companies can’t stop using open source

Freely downloadable code has never been truly free (as in cost). The bits might be free, but there’s a cost to manage those bits. Developers always cost more than the code they write or manage. This may be one reason that when enterprises were asked what they most value in “open source leadership,” they responded with “makes it easy to deploy my preferred open source software in the cloud.” Companies increasingly want the benefits of open source without the expense of managing it themselves. ... Despite these problems and despite open source costs, even those who think open source is more expensive than proprietary alternatives say its benefits outweigh those costs. Chesbrough, when conducting the survey for the Linux Foundation, asked about this seemingly counterintuitive finding. “If you think [open source is] more expensive, why are you still using it?” he asked one respondent. Their response? “The code is available.” Meaning, “If we were to construct the code ourselves, that would take some amount of time. ...”


Do you have the courage of your convictions?

A courageous leader also has a healthy appreciation for the fact that sticking your neck out carries the risk of being wrong or failing. Many CEOs and senior leaders are looking to promote managers who have failed and can show they have learned from the experience. They want leaders who take big swings and, if they stumble, figure out what went wrong. But still, we’re all too prone to put up facades of invincibility and perfection, polishing resumes that show a smooth trajectory and consistent record of success. In job interviews, candidates are unwilling to acknowledge any failures or weaknesses beyond the predictable non-answers of “I work too hard” or “I care too much.” “People who don’t make bad decisions are indecisive and risk-averse,” said David Kenny, who was CEO of the Weather Company when I interviewed him years ago (he now runs market research firm Nielsen). “I love hiring people who’ve failed. We’ve got some great people here with some real flameouts.”



Quote for the day:

"When you accept a leadership role, you take on extra responsibility for your actions toward others." -- Kelley Armstrong

Daily Tech Digest - March 05, 2023

Transforming transformation

Transformation has been a way of extracting value rather than re-invention. Financial services companies are particularly guilty of this. For example, in banking, digital has been a way of reducing costs by moving the “business of banking” into the hands of the end customer – which is why we all do things ourselves that the bank used to do for us. This focus on cost reduction has meant that processes have been optimised for the digital age at the expense of true innovation. The days of extracting value are almost over for the financial services industry. There are not many places left to reduce costs. So, they must become value creators, which means taking a leaf out of the digital giants’ book and finding ways of identifying and solving problems. ... But, according to Paul Staples, who was, until recently, head of embedded banking for HSBC, success will not be determined by technology but by the proposition, approach, and processes that the banks wrap around it. Pain points and value must be identified up front, forming the basis of what gets delivered.


Five Megatrends Impacting Banking Forever

The first megatrend impacting banking is the democratization of data and insights. More than ever, data is being collected everywhere, and it is the lifeblood of any financial institution. The democratization of data and insights refers to the process of making data and insights accessible to a wider audience, including both employees and customers. ... The explosion of hyper-personalization is driven by the use of significantly larger amounts of data, such as browsing and purchase history, interests and preferences, demographics and even survey information. With advanced technologies that include facial recognition, augmented reality and conversational AI, it is now possible to also offer customers highly personalized experiences that cater to their unique delivery preferences – in near real-time. ... Traditionally, banks and credit unions have viewed their relationship with consumers as a series of transactions. However, in recent years, there has been an increasing focus on providing a seamless and integrated engagement opportunity that can result in a more stable and long-term relationship. 


Understanding the Role of DLT in Healthcare

Finding actual healthcare circumstances where this DLT technology could be useful and relevant is crucial. Instead of implementing a solution without first identifying an issue to answer, organizations must take into account any current requirements or challenges that the technology may help address. Organizations employing this technology must be aware of and receptive to the new organizational paradigms that go along with these solutions. Recognizing the paradigm shift to decentralized, distributed solutions is essential to evaluating this technology. ... In shared ledgers, whose validity and consistency are maintained by nodes using a variety of processes, including consensus mechanisms, protecting the secrecy of data entails ensuring that only authorized access is granted. Institutions are employing a multi-layered strategy for blockchain in healthcare, using private blockchains where all of the linked healthcare organizations are well-known and trusted.


Control the Future of Data with AI and Information Governance

“The average company manages hundreds of terabytes of data. For that data to prove an asset rather than a liability, it must be located, classified, cleansed, and monitored. With so much data entering the organization so quickly from so many disparate sources, conducting those data tasks manually is not feasible.” “For organizations to make accurate data-driven decisions, decision makers need clean, reliable data. By the same token, AI-powered analysis will only prove useful if based on complete and accurate data sets. That requires visibility into all relevant data. And it requires exhaustive checks for errors, duplicates, and outdated information.” “An important aspect of information governance includes data security. Privacy regulations, for example, require that organizations take all reasonable measures to keep confidential data safe from unauthorized access. This includes ensuring against inappropriate sharing and applying encryption to sensitive information.”


BI solution architecture in the Center of Excellence

Designing a robust BI platform is somewhat like building a bridge; a bridge that connects transformed and enriched source data to data consumers. The design of such a complex structure requires an engineering mindset, though it can be one of the most creative and rewarding IT architectures you could design. In a large organization, a BI solution architecture can consist of: Data sources; Data ingestion; Big data / data preparation; Data warehouse; BI semantic models; and Reports. At Microsoft, from the outset we adopted a systems-like approach by investing in framework development. Technical and business process frameworks increase the reuse of design and logic and provide a consistent outcome. They also offer flexibility in architecture leveraging many technologies, and they streamline and reduce engineering overhead via repeatable processes. We learned that well-designed frameworks increase visibility into data lineage, impact analysis, business logic maintenance, managing taxonomy, and streamlining governance. 


When finops costs you more in the end

Don’t overspend on finops governance. The same can be said for finops governance, which controls who can allocate what resources and for what purposes. In many instances, the cost of the finops governance tools exceeds any savings from nagging cloud users into using fewer cloud services. You saved 10%, but the governance systems, including human time, cost way more than that. Also, your users are more annoyed as they are denied access to services they feel they need, so you have a morale hit as well. Be careful with reserved instances. Another thing to watch out for is mismanaging reserved instances. Reserved instances are a way to save money by committing to using a certain number of resources for a set period. But if you’re not optimizing your use of them, you may end up spending more than you need to. Again, the cure is worse than the disease. You’ve decided that using reserved instances, say purchasing cloud storage services ahead of time at a discount, will save you 20% each year. However, you have little control over demand, and if you end up underusing the reserved instances, you still must pay for resources that you didn’t need.
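
The reserved-instance trap is easy to see with a back-of-the-envelope calculation (prices here are invented): a 20% discount only pays off while utilization of the reserved capacity stays above 80%.

```python
ON_DEMAND_RATE = 1.00  # $ per hour, invented
RESERVED_RATE = 0.80   # 20% discount, but billed whether used or not
HOURS = 8760           # one year

def yearly_cost(utilization: float) -> tuple[float, float]:
    # Reserved capacity is billed for every hour; on-demand only for hours used.
    return RESERVED_RATE * HOURS, ON_DEMAND_RATE * HOURS * utilization

for u in (1.0, 0.9, 0.8, 0.7):
    reserved, on_demand = yearly_cost(u)
    print(f"utilization {u:.0%}: reserved ${reserved:,.0f} vs on-demand ${on_demand:,.0f}")

# With a 20% discount the break-even sits at exactly 80% utilization;
# below that, the "savings" cost more than paying as you go.
```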


Core Wars Shows the Battle WebAssembly Needs to Win

So the basics are that you have two or more competing programs, running in a virtual space and trying to corrupt each other with code. In summary: the assembler-like language is called Redcode. Redcode is run by a program called MARS. The competitor programs are called “warriors” and are written in Redcode, managed by MARS. The basic unit is not a byte, but an instruction line. MARS executes one instruction at a time, alternately for each “warrior” program. The core (the memory of the simulated computer), or perhaps “battlefield”, is a continuous wrapping loop of instruction lines, initially empty except for the competing programs, which are set apart. Code is run and data stored directly on these lines. Each Redcode instruction contains three parts: the operation itself (OpCode), the source address and the destination address. ... While in modern chips code moves through parallel threads in mysterious ways, the Core War setup is still pretty much the basics of how a computer works. However code is written, we know it ends up as a set of machine code instructions.
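
The mechanics are simple enough to sketch. The toy interpreter below is not real Redcode or MARS (which add addressing modes and a much richer instruction set), but it shows the shape of the game: a circular core of instruction lines and two warriors executed one instruction at a time, in alternation. Both warriors here are the classic one-line “Imp”, which survives by endlessly copying itself one cell ahead.

```python
from copy import copy
from dataclasses import dataclass

@dataclass
class Ins:
    op: str  # OpCode
    a: int   # A-field (source, relative address)
    b: int   # B-field (destination, relative address)

SIZE = 16
core = [Ins("DAT", 0, 0) for _ in range(SIZE)]  # empty core; executing DAT is fatal

core[0] = Ins("MOV", 0, 1)   # warrior 0: the Imp
core[8] = Ins("MOV", 0, 1)   # warrior 1: another Imp, placed apart
pcs = [0, 8]                 # one program counter per warrior

def step(w: int) -> bool:
    """Execute one instruction for warrior w; False means it died."""
    pc = pcs[w]
    ins = core[pc]
    if ins.op == "DAT":
        return False                                  # executed data: dead
    if ins.op == "MOV":                               # copy line pc+a to pc+b
        core[(pc + ins.b) % SIZE] = copy(core[(pc + ins.a) % SIZE])
    elif ins.op == "JMP":                             # relative jump
        pcs[w] = (pc + ins.a) % SIZE
        return True
    pcs[w] = (pc + 1) % SIZE                          # fall through to next line
    return True

alive = [True, True]
for _ in range(100):          # MARS alternates: one instruction each per round
    for w in (0, 1):
        if alive[w]:
            alive[w] = step(w)

print("survivors:", [w for w in (0, 1) if alive[w]])  # two Imps chase to a draw
```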


Data Fear Looms As India Embraces ChatGPT

Considering the vast amounts of data that OpenAI has amassed without permission—enough that there is a chance that ChatGPT will be trained on blog posts, product reviews, articles and more—its privacy policy raises legitimate concerns. The IP address of visitors, their browser’s type and settings, and the information about how visitors interact with the websites—such as the kind of content they engage with, the features they use, and the actions they take—are all collected by OpenAI in accordance with its privacy policy. Additionally, it compiles information on the user’s website and time-based browsing patterns. OpenAI also states that it may share users’ personal information with unspecified third parties without informing them to meet its business objectives. The lack of clear definitions for terms such as ‘business operation needs’ and ‘certain services and functions’ in the company’s policies creates ambiguity regarding the extent and reasoning for data sharing. To add to the concerns, OpenAI’s privacy policy also states that the user’s personal information may be used for internal or third-party research and could potentially be published or made publicly available.


Booking.com's OAuth Implementation Allows Full Account Takeover

While researchers only divulged how they used OAuth to compromise Booking.com in the report, they discovered other sites at risk from improperly applying the authentication protocol, Balmas tells Dark Reading. "We have observed several other instances of OAuth flaws on popular websites and Web services," he says. "The implications of each issue vary and depend on the bug itself. In our cases, we are talking about full account takeovers across them all. And there are surely many more that are yet to be discovered." OAuth gives site owners an easy way to streamline the user login process, which is otherwise a "long and frustrating" experience, Balmas says. However, though it seems simple, implementing the technology correctly and securely is actually very complicated, and a single small wrong move can have a huge security impact, he says. "To put it in other words — it is very easy to put a working social login functionality on a website, but it is very hard to do it correctly," Balmas tells Dark Reading.
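
To see why "easy to add, hard to get right" rings true, consider the authorization-code flow. Even with a well-supported library, correctness hangs on details such as an exactly registered redirect URI and a validated state parameter. A hedged sketch with requests-oauthlib (all endpoints, credentials, and the callback handler are placeholders):

```python
from requests_oauthlib import OAuth2Session

CLIENT_ID = "..."        # placeholder
CLIENT_SECRET = "..."    # placeholder
AUTH_URL = "https://provider.example/oauth/authorize"
TOKEN_URL = "https://provider.example/oauth/token"
REDIRECT_URI = "https://myapp.example/callback"  # must match registration exactly

# Step 1: send the user to the provider, keeping the anti-CSRF state server-side.
oauth = OAuth2Session(CLIENT_ID, redirect_uri=REDIRECT_URI)
authorization_url, state = oauth.authorization_url(AUTH_URL)
# ... store `state` in the user's session, then redirect the browser ...

# Step 2: on the callback, verify state and exchange the code server-side.
def handle_callback(callback_url: str, saved_state: str) -> dict:
    session = OAuth2Session(CLIENT_ID, redirect_uri=REDIRECT_URI, state=saved_state)
    # requests-oauthlib raises if the state in callback_url doesn't match,
    # which is exactly the check attackers hope you forgot.
    return session.fetch_token(
        TOKEN_URL,
        client_secret=CLIENT_SECRET,
        authorization_response=callback_url,
    )
```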


More automation, not just additional tech talent, is what is needed to stay ahead of cybersecurity risks

Just over three-quarters of CISOs believe that their limited bandwidth and lack of resources have led to important security initiatives falling by the wayside, and nearly 80% claimed they have received complaints from board members, colleagues or employees that security tasks are not being handled effectively. ... Stress is also having an impact on hiring. 83% of the CISOs surveyed admitted they have had to compromise on the staff they hire to fill gaps left by employees who have quit their job. “I’ve never tried harder in my career to keep people than I have in the past few years,” said Rader. “It’s so key to hang onto good talent because without those people you’re always going to be stuck focusing on operations instead of strategy.” But there are solutions — and it’s not just finding more talent, says George Tubin, director of product marketing at Cynet. He said CISOs want more automated tools to manage repetitive tasks, better training, and the ability to outsource some of their work.



Quote for the day:

"No great manager or leader ever fell from heaven, its learned not inherited." -- Tom Northup

Daily Tech Digest - March 04, 2023

How security leaders can effectively manage Gen Z staff

Gen Z will look for jobs in organizations that share their values. Gen Z is likely to remind their superiors of such values if they find themselves being asked to do something that goes against them. Be ready for situations like this and make sure the company’s values aren’t just a marketing creation. Another way to look at this is to proactively go after individuals whose values resonate with the company’s. All working generations have experienced pros and cons of work from home, the office or a mix of both. This is unlikely to be a Gen Z-only preference, but younger generations may be more prone to think, “Why do I need to go to a specific location to do a job I can perform from anywhere?” ... The two aspects here are peer training and paid training. Gen Z is eager to learn but also to move forward. Even though this may not apply to all roles, it can be a positive in cybersecurity, where attackers and attacks are always evolving fast.


LastPass Hack Highlights Importance of Applicable Acceptable Use Policies

While LastPass has made it clear that several course corrective activities have taken place post-incident to prevent similar hacks, the argument that this type of exploitation was preventable persists. Specifically, one control that should be scrutinized is the LastPass Acceptable Use Policy (AUP). These important documents provide employees with a set of rules applied by the company that explain the methods through which employees may access or use corporate networks, devices or data. Many of these policies require that corporate data may only be accessed and managed on corporate systems. This specific provision allows the organization to control both physical and logical access to important information, such as business operations and client data. As the business world has morphed with a more distributed and remote configuration, corporate AUPs require additional scrutiny as well. Specifically, companies should take a hard look at the applicability of the Bring Your Own Device (BYOD) mentality and consider the security implications that could emerge through mismanagement.


3 Steps to Unlock the Power of Behavioral Data

In practice, a strong data culture is a “decision culture”, according to McKinsey research: one in which an organization can accelerate the application of advanced analytics, powering improved business performance and decision-making. Furthermore, Forrester found that organizations that use data to derive insights for decision-making are almost three times more likely to achieve double-digit growth. So why is it such a challenge to create this type of culture? ... Data creation is the process of creating high-quality, contextual behavioral data to power AI and other advanced data applications. Instead of working with the data exhaust which happens as a result of SaaS applications and black box analytics tools, data creation allows a choice of metrics that would best reflect the organization’s needs. The great thing about this is that it saves data teams quite a lot of time as it continuously delivers a highly trusted real-time stream of data that evolves with the business.


5 steps for building a digital transformation-ready enterprise architecture

In a hyper-competitive and increasingly cloud-based business environment, it's clear that digital-first is the only way forward. Of course, the transformation could have been smoother. For most businesses, it's happened in fits and starts—a program written here, a piece of software implemented there. The end result, in many cases, has been a patchwork: out-of-date applications, redundant or overly complicated programs, and generally clogged internal processes. Think of a big, tangled pile of extension cords—it's unclear what goes where, what can be safely removed, what needs replacing, and so forth. These clogged processes present a serious problem for businesses engaged in digital transformation. They can slow down a company's inner workings and, over time, lead to lost productivity and revenue. That's why it's imperative for companies to clear away the cobwebs and redesign their internal processes for maximal productivity—to, in other words, embark on an organization-wide program of enterprise architecture.


Crucial role of data protection in the battle against ransomware

Central to any cybersecurity strategy being developed is the role of the IT infrastructure teams and storage administrators in the secure storage and protection of data. However, formulating and implementing a strategy alone will not be enough; organisations must rigorously test their resiliency plans. It is essential to identify the cracks in the defences as a proactive strategy, even as learnings are applied reactively. A key reason behind the rise of ransomware attacks is that the attack surface, the systems that are accessible and could be compromised, is massive and constantly growing. The larger the enterprise, the larger the attack surface, as the vulnerable endpoints and pieces of software being used are many. Any breach that occurs must thus be quickly contained, and its impact minimised as much as possible. Merely adding more storage to a data centre is not the solution. Enterprises will need to incorporate immutable storage and encryption technology and optimize the recovery process.
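
As one concrete flavour of immutable storage, here is a hedged sketch using S3 Object Lock via boto3 (the bucket, key, and retention period are placeholders, and the bucket must have been created with Object Lock enabled): an object written in compliance mode cannot be overwritten or deleted by anyone until the retention date passes, which is exactly what a ransomware-resistant backup copy needs.

```python
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

with open("db-2023-03-04.dump", "rb") as body:  # placeholder backup artifact
    s3.put_object(
        Bucket="backup-vault-example",          # placeholder; Object Lock enabled
        Key="backups/db-2023-03-04.dump",
        Body=body,
        ObjectLockMode="COMPLIANCE",            # nobody can shorten the retention
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
        ServerSideEncryption="aws:kms",         # encrypt at rest as well
    )
```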


US Cybersecurity Strategy Shifts Liability Issues to Vendors

The administration envisions that it will roll out more stringent software development practices, work with vendors to implement them in the software development process and then work with industry and Congress to establish a liability shield for companies that adopt those practices. That process will take well over a year, the senior administration official predicts. Veracode founder and Chief Technology Officer Chris Wysopal says drawing from the NIST Secure Software Development Framework for the safe harbor law is more aspirational than realistic since the liability shield must consider a company's maturity and security posture. Kalember says no current institutions are well positioned to assess compliance with NIST or assign blame after a security incident. "We need a few different levels of what building safe software means," Wysopal tells ISMG. "The SSDF is a good starting point, but I think it does need to be more practical and more basic."


The government cannot win at cyber warfare without the private sector

The Council on Foreign Relations (CFR) recommends “a program of deepening public-private collaboration between the Defense Department (DOD) and the defense industry” to stop these hacks. It suggests this because it recognizes that the private sector owns and operates the networks and systems that the problem countries target, while the public sector “lacks the same picture of the threat environment.” The CFR is right. Private-sector actors regularly face hackings and understand that their survival in the marketplace hinges upon addressing them swiftly and efficiently. The government, by contrast, doesn’t recognize many of these threats until they occur. The government has the ability to contract with anyone, so why wouldn’t it choose to work more closely with private companies? Consider the case of the Office of Personnel Management, which faced that headline-making 2015 hacking from China.


Five Factors For Planning A Data Governance Strategy

Effective data governance begins with having a comprehensive record of the data within the organization; however, according to one survey, for two-thirds of organizations, at least half of their data is dark. This dark data represents untapped insights that are not being leveraged by the organization. Also concerning is the fact that this same absence of quality data and availability can result in an estimated 29% of an employee’s time being spent on non-value-added tasks. ... Data democratization can be shaped by AI-enabled governance policies that control access to the cataloged data. This self-service access to data affords a degree of autonomy for users to work with the data—and the insights it can provide—independently, regardless of their position within the organization. The impact of data democratization can be felt across an entire organization. Users are able to access data securely and work with data on their own without being occupied by tasks that produce no benefit to the organization. As a result, the IT department can be available to handle other important tasks.


The Move to Unsupervised Learning: Where We Are Today

In addition to the need for explainability, another significant challenge to the widespread adoption of deep learning is the increasing reliance on labeled data, that is, adding labels to raw data such as text files and images to identify them and provide context that machine learning models can recognize and learn from. Supervised learning has made significant and impressive advances in recent years, demonstrating the ability to learn from massive amounts of labeled data. There is, however, a limit to how much AI can advance using supervised learning alone. In many real-world scenarios, the availability of large amounts of labeled data is a challenge — either due to a lack of resources or the inherent nature of the problem itself. Ensuring class balance in the labeled data presents another challenge, in that it’s often the case that some classes make up a large proportion of the data, while other classes might not be adequately represented. Furthermore, ensuring the trustworthiness of labeled data can present another challenge.
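
The class-balance problem is easy to surface, and partly compensate for, in practice. A minimal sketch with scikit-learn, using invented labels skewed 95/5:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Invented labels: 95% class 0, 5% class 1, a typical skew.
y = np.array([0] * 950 + [1] * 50)

classes = np.unique(y)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=y)
print(dict(zip(classes, weights)))
# {0: ~0.53, 1: ~10.0}: the rare class is weighted up during training,
# e.g. via the class_weight parameter on most sklearn classifiers.
```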


The Biggest Enterprise Architecture Trends in 2023

Most Enterprise Architects endlessly tweak their systems to improve change delivery. As with all things in life, the changes aren't perfect the first time around, and adapting is essential. Each round of change, however small, ultimately improves the system. Many trends overlap, and adapting ways of working ties in with using the social aspects of the architecture described above. Organizations can track the history of change initiatives to see the applications, processes, and information impacted over time. Understanding how the change works gives leaders vital information to make decisions. By tracking people, teams, and departments, organizational and communication pathways become clear. Over time, the tracking shows patterns of where change occurs. When it’s clear where change is happening and failing, the patterns can guide the reorganization of teams. It can also help teams work as independently as possible, improve cross-team coordination, and aid prioritization.



Quote for the day:

"Leaders think and talk about the solutions. Followers think and talk about the problems." -- Brian Tracy

Daily Tech Digest - March 03, 2023

Irish Authorities Levy GDPR Fine in Centric Health Breach

DPC says that while Centric stated in its initial breach notification that 70,000 data subjects were affected by the breach, it only issued notifications to the 2,500 individuals whose data was irretrievably lost in the incident. Besides the inadequate breach communication to affected individuals, the fine levied against Centric also reflects a variety of other GDPR infringements, including "failure to implement technical and organizational measures appropriate to the level of risk" posed to personal and special category data on Centric's server. "The failure to implement the necessary safeguards in an effective manner at the appropriate time led to the possibility of patients' personal data being erroneously disclosed to unauthorized people," the report says. Centric, in a statement provided to Information Security Media Group, says that at the time of the cyberattack, it immediately informed the DPC and cooperated fully with the investigation. "We want to assure our patients that we take our responsibility to protect their data and ensure the security of our IT systems very seriously," Centric says. 


Gitpod flaw shows cloud-based development environments need security assessments

"Many questions remain unanswered with the adoption of cloud-based development environments: What happens if a cloud IDE workspace is infected with malware? What happens when access controls are insufficient and allow cross-user or even cross-organization access to workspaces? What happens when a rogue developer exfiltrates company intellectual property from a cloud-hosted machine outside the visibility of the organization's data loss prevention or endpoint security software?," the Snyk researchers said in their report, which is part of a larger project to investigate the security of CDEs. ... In fact, CDEs are in many ways a big improvement over traditional IDEs: They can eliminate the configuration drift that happens over time with developer workstations/laptops, they can eliminate the dependency collisions that occur when developers work on different projects, and can limit the window for attacks because CDE workspaces run as containers and can be short-lived.


Responsible AI: The research collaboration behind new open-source tools offered by Microsoft

Through its Responsible AI Toolbox, a collection of tools and functionalities designed to help practitioners maximize the benefits of AI systems while mitigating harms, and other efforts for responsible AI, Microsoft offers an alternative: a principled approach to AI development centered around targeted model improvement. Improving models through targeting methods aims to identify solutions tailored to the causes of specific failures. This is a critical part of a model improvement life cycle that not only includes the identification, diagnosis, and mitigation of failures but also the tracking, comparison, and validation of mitigation options. The approach supports practitioners in better addressing failures without introducing new ones or eroding other aspects of model performance. “With targeted model improvement, we’re trying to encourage a more systematic process for improving machine learning in research and practice,” says Besmira Nushi, a Microsoft Principal Researcher involved with the development of tools for supporting responsible AI.


Now Microsoft has a new AI model - Kosmos-1

The researchers also tested how Kosmos-1 performed in the zero-shot Raven IQ test. The results found a "large performance gap between the current model and the average level of adults", but also found that its accuracy showed potential for MLLMs to "perceive abstract conceptual patterns in a nonverbal context" by aligning perception with language models. The research into "web page question answering" is interesting given Microsoft's plan to use Transformer-based language models to make Bing a better rival to Google search. "Web page question answering aims at finding answers to questions from web pages. It requires the model to comprehend both the semantics and the structure of texts. The structure of the web page (such as tables, lists, and HTML layout) plays a key role in how the information is arranged and displayed. The task can help us evaluate our model's ability to understand the semantics and the structure of web pages," the researchers explain.


How AI can improve quality assurance: seven tips

One of the areas where AI is proving its worth for quality assurance is in the software development sector. AI seems particularly well-suited to regression testing. That approach requires checking to ensure previously tested versions of software keep working as expected following code modifications. Or, AI could help create new test cases. Some AI models can recognise or come up with scenarios without prior exposure to them. If you’re thinking about using AI for testing help, identify which processes typically take humans the longest to do, or where the errors happen most often. Then, assess whether AI might avoid some of those issues and speed up the steps testers typically go through when verifying all is well with new software. Also, keep in mind that using AI for software testing works best when you have a large data set. That’s why training your AI models thoroughly is so necessary, and not a step to take hastily.


Edge Computing Eats the Cloud?

Additionally, Sedoshkin says that smartphones are “more compact” than a set of GPUs and peripheral components, which make more sense in an R&D lab environment. He predicts this trend will continue to intensify. “Many real-world applications require the usage of a smartphone anyway, and these devices are capable of running pre-trained neural networks on edge. Smartphone manufacturers will continue increasing computational power and memory capacity on edge devices. However, R&D labs will use specialized hardware for training and testing AI/ML algorithms, and DIY enthusiasts will use specialized lightweight chipsets," Sedoshkin says. In short, there is little to stop the encroachment of edge computing on the cloud’s lofty turf. There isn’t much friction to slow it down, either. “The future of edge computing is an evolving landscape; however, ‘ubiquitous’ is the best word that describes it because it will evolve to be all around us,” Tiwari says. And by ubiquitous, industry watchers say they literally mean everywhere.


4 tips to freshen up your IT resume in 2023

Every IT hiring manager looks for professionals who are passionate about their work. And what better way to show this than by discussing your passion projects? In your resume’s contact information section, add a link to any outside projects you’ve worked on over the years, casual or professional. Remember that these don’t need to be overly complex or high-tech – the point is to show you’re passionate about technology outside of work. Even if your contributions involve small edits or suggestions to other people’s code, include them on your resume. That said, your profiles must be up to date. If you haven’t updated a profile in years, don’t include it. Keeping your IT resume updated and relevant in 2023 is crucial for job seekers in the competitive technology industry. And while many IT professionals get job offers without an optimized resume, an exceptional resume might just be what stands between you and your top-choice company.


The role of human insight in AI-based cybersecurity

Traditional cybersecurity solutions, like secure email gateways (SEGs), rely on pre-defined rules and patterns to identify potential threats. However, these rules and patterns can become outdated quickly, leading to a high rate of false positives and false negatives. Sophisticated phishing attacks can also evade SEG systems as they impersonate known trusted senders or takeover accounts. By using RLHF, the model can learn from human feedback and continuously adapt to new threats as they emerge. Enterprise security teams spend as much as 33% of their time dealing with phishing scams. Since traditional cybersecurity solutions often rely on manual processes, this leads to delays in detecting and responding to potential threats. By combining AI and RLHF, teams can better identify potential threats, resulting in up to a 90% reduction in the amount of time needed to identify and react to phishing scams, while also significantly reducing the organization’s risk posture.
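
RLHF in the full sense involves training a separate reward model, but the underlying loop, human verdicts flowing back into the model, can be sketched far more simply with incremental learning. A hedged illustration in scikit-learn (the messages and labels are invented, and a production phishing model would be considerably richer):

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vec = HashingVectorizer(n_features=2**16)
clf = SGDClassifier(loss="log_loss")  # "log" in older scikit-learn versions

# Initial training batch (invented examples; 1 = phishing, 0 = legitimate).
X = vec.transform(["verify your account now", "minutes from today's standup"])
clf.partial_fit(X, [1, 0], classes=[0, 1])

def analyst_feedback(message: str, verdict: int) -> None:
    """Fold a human analyst's verdict back into the model incrementally."""
    clf.partial_fit(vec.transform([message]), [verdict])

# A missed impersonation attempt gets reported and reinforced:
analyst_feedback("urgent wire transfer request from the CEO", 1)
```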


Biden's Cybersecurity Strategy Calls for Software Liability, Tighter Critical Infrastructure Security

The requirements will be performance based, adaptable to changing requirements, and focus on driving adoption of secure-by-design principles. "While voluntary approaches to critical infrastructure security have produced meaningful improvements, the lack of mandatory requirements has resulted in inadequate and inconsistent outcomes," the strategy document said. Regulation can also level the playing field in sectors where operators are in a competition with others to underspend on security because there really is no incentive to implement better security. The strategy provides critical infrastructure operators that might not have the financial and technical resources to meet the new requirements, with potentially new avenues for securing those resources. Joshua Corman, former CISA chief strategist and current vice president of cyber safety at Claroty, says the Biden administration's choice to make critical infrastructure security a priority is an important one.


Interactive Microservices as an Alternative to Micro Front-Ends for Modularizing the UI Layer

Interactive microservices are based on a new type of web API that Qworum defines, the multi-phase web API. What differentiates these APIs from conventional REST or JSON-RPC web APIs is that endpoint calls may involve more than one request-response pair, also called a phase. ... Unbounded composability — Interactive microservices can call other endpoints and even themselves during their execution. The maximum depth of allowed nested calls is unbounded, and each call disposes of a full-page UI regardless of nesting depth. This is unlike micro front-ends, which typically cannot be nested beyond 1 or 2 levels at most, because the UI surface area that is allocated to each micro front-end becomes vanishingly smaller with increasing nesting depth. General applicability — Qworum services are more generally applicable for distributed applications than micro front-ends, as the latter are generally tied to a particular web application (ad hoc micro front-ends), front-end framework (React micro front-ends, Angular micro front-ends, etc.) or organisation.



Quote for the day:

"When building a team, I always search first for people who love to win. If I can't find any of those, I look for people who hate to lose." -- H. Ross Perot

Daily Tech Digest - March 02, 2023

Cyberattackers Double Down on Bypassing MFA

MFA flooding, where an attacker will repeatedly attempt to log in using stolen credentials to create a deluge of push notifications, aims at taking advantage of users' fatigue for security warnings. "Push notifications are a step up from SMS, but are susceptible to MFA flooding and MFA fatigue attacks, bombarding the victim with notifications in the hope they will click 'Allow' on one of them," Caulfield says. Another popular tactic — the account reset attack — aims to fool tech support into giving attackers control of a targeted account, an approach that led to the successful compromise of the developer Slack channel for Take-Two Interactive's Rockstar Games, the maker of the Grand Theft Auto franchise. "An attacker will compromise a user’s credentials, and then pose as a vendor or IT employee and ask the user for a verification code or to approve an MFA prompt on their phone," says Jordan LaRose, practice director for infrastructure security at NCC Group. "Attackers will often use the information they’ve already compromised as part of the social engineering attack to lull users into a false sense of security."
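
A pragmatic first defence against flooding is simply to notice it: count push prompts per user over a sliding window and fail over to a phishing-resistant factor when the rate is abnormal. A minimal sketch (the threshold and window are invented):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300
MAX_PROMPTS = 5  # invented threshold

prompts = defaultdict(deque)  # user -> timestamps of recent push prompts

def allow_push(user: str) -> bool:
    now = time.time()
    recent = prompts[user]
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()                 # drop prompts outside the window
    if len(recent) >= MAX_PROMPTS:
        # Likely MFA flooding: stop sending pushes, require a stronger
        # factor (e.g. number matching or a FIDO2 key) and alert the SOC.
        return False
    recent.append(now)
    return True
```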


Three Trends That Could Impact Data Management In 2023

Like cybercrime, the digitization of the customer experience is almost as old as the computer itself, but it really came into its own in the mobile age. What some call digitization 1.0 was all about “mobile, simplified design and new kinds of applications.” Digitization 2.0 homed in on customer demand—“apps anywhere, anytime, on any interface, and with any method of interaction: voice, social media, chat, texting, wearables, and even when you are sitting in your car.” What I’m calling digitization 3.0 here is a doubling down on both 1.0 and 2.0 to make data even more usable and provide unprecedented access to it. I began this article by highlighting the value of data. I think the companies that can extract even more value from their data while maintaining its security, resiliency and privacy will be the ones that not only survive today’s economic uncertainty but thrive during and after it. The key is contextualizing your data to make it more useful. This starts with the steps I’ve listed in the preceding two sections as a foundation.


Best and worst data breach responses highlight the do's and don'ts of IR

When it comes to data breaches, is there a sliding scale? In other words, if a tiny school district gets hit with a ransomware attack, do we give the IT team a partial pass because they probably lack the resources and skill level of a more tech-savvy company? On the other hand, if a company whose entire business model is based on protecting user passwords gets hacked, do we judge them more harshly? Which brings us to LastPass, which experienced an embarrassing breach that was first announced in August 2022 as simply a minor incident confined to the application development environment. By December that breach had spread to customer data including company names, end-user names, billing addresses, email addresses, telephone numbers, and IP addresses. LastPass gets high marks for transparency. The company continued to issue public updates following the initial August announcement. 


‘Digital twin’ tech is twice as great as the metaverse

A “digital twin” is not an inert model. It’s a personalized, individualized, dynamically evolving digital or virtual model of a physical system. It’s dynamic in the sense that everything that happens to the physical system also happens to the digital twin — repairs, upgrades, damage, aging, etc. Companies are already using “digital twins” for integration, testing, monitoring, simulation, predictive maintenance on bridges, buildings, wind farms, aircraft and factories. But these are still very early days in the “digital twin” realm. ... A digital twin system has three parts: The physical system, the virtual digital copy of that physical system and a communications channel linking the two. Increasingly, this communication is the relaying of sensor data from the physical system. It’s made from three major technology categories. If you imagine a Venn diagram of “metaverse” technologies in one circle, “IoT” in a second circle and “AI” in the third, “digital twin” technology occupies the overlapping center. Digital twins are different from models or simulations in that they are far more complex and extensive and change with incoming data from the physical twin.
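
Reduced to those three parts, the pattern fits in a few lines. A hedged sketch in which the communications channel is just an in-process queue standing in for MQTT or similar, and the pump fields and maintenance rule are invented:

```python
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class PumpTwin:
    """Virtual copy of one physical pump, updated from sensor readings."""
    rpm: float = 0.0
    temperature_c: float = 0.0
    history: list = field(default_factory=list)

    def apply(self, reading: dict) -> None:
        self.rpm = reading["rpm"]
        self.temperature_c = reading["temperature_c"]
        self.history.append(reading)      # the twin evolves with the asset

    def needs_maintenance(self) -> bool:
        return self.temperature_c > 80.0  # invented predictive rule

channel: Queue = Queue()                  # stand-in for the real sensor channel
channel.put({"rpm": 1450, "temperature_c": 83.2})

twin = PumpTwin()
while not channel.empty():
    twin.apply(channel.get())
print(twin.needs_maintenance())  # True -> schedule an inspection
```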


Backup testing: The why, what, when and how

The aim of all testing is to ensure you can recover data. Those recoveries might be of individual files, volumes, particular datasets – associated with an application, for example, or even an entire site, or several. So, testing has to happen at differing levels of granularity to be effective. That means the differing levels of file, volume, site, and so on, as above. But it also means by workload and system type, such as archive, database, application, virtual machine or discrete systems. At the same time, the backup landscape in an organisation is subject to constant change, as new applications are brought online, and as the location of data changes. This is more the case than ever with the use of the cloud, as applications are developed in increasingly rapid cycles, and by novel methods of deployment such as containers. ... So, it’s likely that testing will take place at different levels of the organisation on a schedule that balances practicality with necessity and importance. Meanwhile, that testing must consider the constantly changing backup landscape.
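
Whatever the schedule, the test itself can often be automated: restore into a scratch location and verify the content, not just the backup job's exit code. A minimal sketch that compares checksums between source and restored trees (the paths are placeholders):

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: str, restored_dir: str) -> list:
    """Return the files whose restored contents differ from the source."""
    src, dst = Path(source_dir), Path(restored_dir)
    mismatches = []
    for f in src.rglob("*"):
        if f.is_file():
            restored = dst / f.relative_to(src)
            if not restored.exists() or sha256(f) != sha256(restored):
                mismatches.append(str(f))
    return mismatches

# e.g. after an automated scratch restore (placeholder paths):
print(verify_restore("/data/finance", "/restore-test/finance"))
```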


Considerations for Developing Cybersecurity Awareness Training

Regular, ongoing cybersecurity awareness training is important, and the best time to start is during the new employee onboarding process. This sets the correct expectations in terms of what to do and what not to do before a new employee has access to the enterprise’s information assets or data. ... Enterprises may consider using classroom-based training (physical, virtual or a mixture of both) or a learning management system (LMS) to automate the delivery and tracking of cybersecurity awareness training. There are many online LMS providers, such as Absorb LMS and SAP Litmos, and they provide useful tools for creating online courses, quizzes and surveys. After online courses are created, an enterprise can use the LMS to organize and distribute online courses to its employees as needed. The LMS can also be used to monitor training progress, view analytics and allow employees to provide feedback in order for the enterprise to recalibrate its learning program for maximum impact.


AI and data privacy: protecting information in a new era

First of all, business technology leaders should consider whether they need AI at all, and whether their problems could be solved by more conventional methods. There is nothing worse than the "I want ML/AI solutions in my business, but I don't know what for yet" approach. To introduce AI you need to consider the entire architecture that will build, train and deploy models and consider how to collect and process large amounts of data. This requires assembling a good team, consisting of people such as data engineers, ML engineers and data scientists. It’s necessary to process large amounts of data and master many tools, so it is not as simple as writing a web application in a standard framework. Tech leaders should also be aware that AI comes with risks. They will need more and more computing resources to build increasingly sophisticated AI platforms. They will need to stay constantly abreast of news from the world of AI, where everything changes rapidly, and it may turn out that in six months a much better solution or model for a particular problem has already been created.


Think carefully before considering cloud repatriation

It’s particularly difficult for smaller companies to repatriate, simply because, at their scale, the savings aren’t worth the effort. Why buy real estate and hardware and pay extra salaries only to save a small amount? By contrast, very large companies have the scale to repatriate. But do they want to? “Do Visa, American Express or Goldman Sachs want to be in the IT hardware business?” asks Sample, rhetorically. “Do they want to try to take a modest gain by moving far outside their competency?” Switching can also be complicated when the cost of change isn’t considered part of the calculation. A marginal run rate savings gained from pulling an application back on-prem may be offset by the cost of change, which includes disrupting the business and missing out on opportunities to do other things, such as upgrading the systems that help generate revenue. A major transition may also cause down time—sometimes planned and other times unplanned. “A seamless cutover is rarely possible when you’re moving back to private infrastructure,” says Sample.


The High Costs of Going Cloud-Native

When it comes to reasons to move to the public cloud, “saving money” has long since been replaced with “increased agility.” Like the systems vendors they are in the process of displacing, AWS, Microsoft Azure, and Google Cloud have figured out how to maximize their customers’ infrastructure spend, with a little (actually a lot of) help from the unalterable economics of data gravity. ... Titled “Cloud-Native Development Report: The High Cost of Ownership,” the white paper tracks the journey of a hypothetical company as it embarks on a transition to cloud-native computing. ... While many of the technologies at the core of the cloud-native approach, such as Kubernetes, are free and open source, there are substantial costs incurred, predominantly around the infrastructure and the personnel. In particular, the costs of finding IT practitioners with experience in using and deploying cloud-native technologies was among the biggest hurdles identified by the report, which you can read here.


Security automation: 3 priorities for CIOs

To begin with, the CIO has choices to make about the testing approaches that will be deployed. Automation in AppSec can refer to tools and processes, ranging from automated vulnerability scanning (dynamic analysis) and static code analysis to software composition analysis and other types of security testing. The most advanced approaches can take things a step further by combining multiple forms of testing – perhaps augmenting DAST with interactive application security testing (IAST) and software composition analysis (SCA) – into a single scan for a comprehensive analysis of the organization’s security risk posture in a single frame. ... Meanwhile, in workflow terms, IT leaders should use customizable solutions to trigger scans at certain points in the development pipeline or based on a predefined schedule. This will allow CIOs and their teams to coordinate scans at specific times or in response to certain events like deploying new code or detecting a security incident.



Quote for the day:

"If you don't demonstrate leadership character, your skills and your results will be discounted, if not dismissed." -- Mark Miller

Daily Tech Digest - March 01, 2023

Intel Releases Quantum Software Development Kit Version 1.0 to Grow Developer Ecosystem

The SDK is a customizable and expandable platform providing greater flexibility when developing quantum applications. It also allows users to compare compiler files, a standard feature in classical computing development, to discern how well an algorithm is optimized by the compiler. It allows users to see the source code and obtain lower levels of abstraction, gaining insight into how a system stores data. Additional features include: Code in familiar patterns - Intel has extended the industry-standard LLVM with quantum extensions and developed a runtime environment modified for quantum computing, and the IQS provides a state-vector simulation of a universal quantum computer. Efficient execution of hybrid classical-quantum workflows - The compiler extensions allow developers to integrate results from quantum algorithms into their C++ project, opening the door to the feedback loops needed for hybrid quantum-classical algorithms like the quantum approximate optimization algorithm (QAOA) and the variational quantum eigensolver (VQE).
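
The SDK itself is C++-based, but what a state-vector simulator like IQS does can be shown conceptually in a few lines of NumPy (this is an illustration of the idea, not the Intel API):

```python
import numpy as np

# One qubit starts in |0>; the state vector holds its two amplitudes.
state = np.array([1.0, 0.0], dtype=complex)

# A Hadamard gate puts it into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ state

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5]
```

Each additional qubit doubles the length of the state vector, which is why simulating a universal quantum computer is so memory-hungry at scale.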


Day in the Life of a Chief Developer Experience Officer (CDXO)

According to Cauduro, the overarching goal is to put himself in the developer’s shoes—he constantly thinks about common developer workflows and considers how to create a seamless experience throughout the entire development life cycle. Next is spreading awareness throughout the company about DX principles and how to increase DX within their offerings. A CDXO will likely answer directly to executive leadership but might interface with many departments. A CDXO may direct teams to construct developer-specific tools, like libraries, documentation, SDKs and self-service environments. “DX is a mindset,” said Cauduro. “The whole company needs to be engaged in it.” “As with any C-level position, your job is on the one hand to make your team’s life easier in any way you can,” said Burazin. “And on the other to convey the developers’ issues or ideas to the company in hopes of nudging the company in the correct direction.”


ChatGPT vs GDPR – what AI chatbots mean for data privacy

As an open tool, ChatGPT makes the billions of data points it was trained on accessible to malicious actors, who could use this information to carry out any number of targeted attacks. One of the most concerning capabilities of ChatGPT is its potential to create realistic-sounding conversations for use in social engineering and phishing attacks, such as urging victims to click on malicious links, install malware, or give away sensitive information. The tool also opens up opportunities for more sophisticated impersonation attempts, in which the AI is instructed to imitate a victim’s colleague or family member in order to gain trust. Another attack vector might be to use machine learning to generate large volumes of automated, legitimate-looking messages to spam victims and steal personal and financial information. These kinds of attacks can be highly detrimental to businesses. For example, a payroll diversion Business Email Compromise (BEC) attack, which combines impersonation and social engineering tactics, can have huge financial, operational, and reputational consequences for an organisation.


‘Most web API flaws are missed by standard security tests’

APIs can become less of a liability by including security-focused team members during design, encouraging secure coding, conducting regular security tests, and monitoring programming calls for attacks and misuse. Securing web APIs requires a different approach from classic web application security, according to Ball. “Standard web application tests will result in false-negative findings for web APIs,” he explains. “Tools and techniques that are not calibrated specifically to web APIs will miss nearly all of the common vulnerabilities.” A notable example is a vulnerability in the USPS Informed Visibility API, first reported by security journalist Brian Krebs. The web application was thoroughly tested one month before Krebs reported the data exposure, but tools like Nessus and HP WebInspect were applied generically to the testing targets, and a significant web API vulnerability went undetected. That flaw allowed any authenticated user to access the email addresses, usernames, package updates, mailing addresses, and phone numbers associated with 60 million customers.
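
To illustrate the kind of API-specific probe that generically applied tools never attempt, here is a minimal sketch of a broken-object-level-authorization (IDOR) check, the class of flaw behind the USPS exposure. The endpoint, IDs, and token are hypothetical.

```python
# Sketch of an IDOR/BOLA probe: can one authenticated user read records
# belonging to other users? Endpoint, IDs, and token are hypothetical.
import requests

BASE = "https://api.example.com/v1"
MY_TOKEN = "token-for-user-1001"   # an ordinary authenticated session
MY_ID = 1001

def readable(record_id: int) -> bool:
    """Return True if the record is readable with our (wrong) credentials."""
    try:
        resp = requests.get(
            f"{BASE}/accounts/{record_id}",
            headers={"Authorization": f"Bearer {MY_TOKEN}"},
            timeout=10,
        )
    except requests.RequestException:
        return False
    # A 200 with someone else's data means object-level checks are missing.
    return resp.status_code == 200

leaked = [i for i in range(1000, 1010) if i != MY_ID and readable(i)]
print(f"neighbouring record IDs readable without authorization: {leaked}")
```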


Exploring biometrics within payments

It’s an obvious question, but despite all the potential benefits of adopting biometric security, the technology still has several vulnerabilities and weak points. First, a fingerprint scanner or smartphone camera cannot be relied upon to be available at every transaction. While consumers can use biometric authorization on most mobile devices, desktops still account for a large portion of eCommerce sales. Additionally, companies will need to adopt hardware capable of reading and interpreting this data in order to accept biometric payments. The price of this hardware could be prohibitive, depending on what is needed and how far a company wants to take contactless payments. Finally, we cannot forget the consumer factor: consumers are more anxious than ever about their privacy and where their personal data goes. Even if biometric scans do not actually save or store biometric information, many consumers may still refuse to provide these identifiers.


Building resilience in a polycrisis world

Seeing and responding to risk differently first requires leaders to clearly pinpoint where plausible risks could materialize and do the most damage to key operations and services. This can be tricky if companies have traditionally approached risk in a siloed way. Company leaders should spend time with one another working through “what if?” scenarios, with an eye toward highlighting where exactly in the business a problem or failure would be most catastrophic for customers. ... Once the executives had their focus (the outcome of getting cash), they could begin looking at all the ways customers do so, including ATMs and the workers who service them, brick-and-mortar banks, and the tech and third parties that help with electronic transfer payments, and could build resilience across all of those functions rather than focusing on individual mechanisms. Prioritization exercises also help leaders tease out false assumptions. Leaders at a UK housing management company had believed that collecting rents via the company’s app was the key to business continuity.


Field-Programmable Qubit Arrays: The Quantum Analog of FPGAs

FPQAs make quantum algorithms more resource-efficient by reducing qubit and gate overhead. The ability to quickly update the qubit layout and connectivity enables rapid testing, benchmarking and optimization of algorithms, in effect delivering a customized computer for each calculation. One example of how FPQAs can be used to achieve better quantum computing performance is optimization. Many optimization problems can be described mathematically in terms of graphs: the nodes describe the variables in the problem, and the edges represent relationships between them. For instance, the nodes can describe potential locations of 5G towers, and the edges pairs of towers that cannot be operated simultaneously without generating interference. In another, more abstract representation, each node can be a stock, and an edge between two nodes denotes that those stocks are correlated. These graphs can be mapped onto analog FPQAs by assigning each node to a qubit and setting the connectivity so that two qubits interact if their corresponding nodes share an edge.
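
As a minimal sketch of that mapping, the snippet below reads an interaction list straight off a graph's edge set: each node gets a qubit, and two qubits are made to interact exactly when their nodes share an edge. The tower graph is invented for illustration.

```python
# Graph-to-FPQA mapping (sketch): nodes -> qubits, edges -> interacting pairs.
# Nodes: candidate 5G tower sites; edges: pairs that would interfere.
nodes = ["A", "B", "C", "D"]
edges = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "D")}

# Assign each node a qubit index (on hardware: an atom position).
qubit_of = {node: i for i, node in enumerate(nodes)}

# Pairs of qubits that must sit within each other's interaction radius,
# so that they interact exactly when their nodes share an edge.
interactions = sorted((qubit_of[u], qubit_of[v]) for u, v in edges)
print(interactions)  # [(0, 1), (0, 3), (1, 2), (2, 3)]
```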


CISA director urges tech industry to take responsibility for secure products

Accepting the continued use of unsafe technology products presents a greater risk to the nation than the Chinese spy balloon that was shot down off the coast of South Carolina, and it cannot be allowed to continue, Easterly said. “By design, we’ve normalized the fact that technology products are released to market with dozens, hundreds or thousands of defects — such poor construction would be unacceptable in any critical field,” she said during the address. The burden of cybersecurity has been placed disproportionately on consumers and small organizations, who are least aware of the threats and least able to protect themselves. Easterly said no one would be expected to buy a car that lacked seat belts and air bags as standard features, and nobody should be expected to pay extra for secure technology products. Government can advance legislation to prevent technology companies from disclaiming liability by establishing higher standards of care, Easterly said.


Cybersecurity in wartime: how Ukraine's infosec community is coping

Defending organizations during an ongoing war has put Cossack Labs’ cybersecurity experts on an accelerated learning path, says Pilyankevich’s colleague Anastasiia Voitova, head of customer solutions. "What I learned is that the priorities are very different from peacetime," she says. "The risks are different; the threats are very different. We have this real enemy. It's not textbook security. No. These are real issues, and we need to build real mitigation to these real issues." One could easily fall into the trap of creating systems that use the highest possible level of security, but Voitova believes this can be a mistake because a system that's too paranoid won't be usable. "This trade-off drama of how to balance security and usability, right now, can cost you even more because if you create a super secure system, but no one will use it, it will lead people to adopt insecure methods," she says. "And if insecure messages are intercepted, people might be injured." Such mistakes are more likely to occur as the war continues and users face prolonged stress and tiredness.


The CIO’s new C-suite mandate

Executives who used to stay in their own lane now find themselves needing closer alignment with one another to manage economic uncertainty, explosive growth, and digital and business transformations, and CIOs have become central figures as business strategists and changemakers. This new C-suite dynamic requires three big shifts to be successful, according to Dan Roberts, CEO of Ouellette & Associates Consulting: CIOs must change the narrative of their relationship with their counterparts, they must prepare their IT teams to deliver on the new narrative, and they must convince the C-suite to share the technology load. It’s a tall order, for sure. “I would say just 10% to 15% [of C-suite relationships] are healthy and thriving and are in the trenches together with shared ownership and accountability,” Roberts says. But CIOs who can look across the enterprise and find new ways to drive revenue or better orchestrate the customer experience, and who can then communicate and sell that vision to their C-suite counterparts, are at the high end of the maturity curve, he adds.



Quote for the day:

"We get our power from the people we lead, not from our stars and our bars." -- J. Stanford