Daily Tech Digest - June 13, 2023

AI and tech innovation, economic pressures increase identity attack surface

In the new attack observed by Microsoft, the attackers, which the company tracks under the temporary moniker Storm-1167, used a custom-built phishing toolkit that relies on an indirect proxy method. This means the phishing page set up by the attackers does not serve any content from the real log-in page but instead mimics it as a stand-alone page fully under the attackers' control. When the victim interacts with the phishing page, the attackers initiate a login session with the real website using the victim-provided credentials and then ask the victim for the MFA code via a fake prompt. If the code is provided, the attackers use it in their own login session and are issued the session cookie directly. The victim is then redirected to a fake page, more in line with traditional phishing attacks. "In this AitM attack with indirect proxy method, since the phishing website is set up by the attackers, they have more control to modify the displayed content according to the scenario," the Microsoft researchers said.


Revolutionizing DevOps With Low-Code/No-Code Platforms

With non-IT professionals developing applications, there is a higher risk of introducing vulnerabilities that could compromise the security of the application and the organization. Additionally, the lack of oversight and governance could lead to poor coding practices and technical debt. For instance, the use of new-generation iPaaS platforms by citizen integrators has made it difficult for security leaders to have full visibility into the organization’s valuable assets. Attackers are aware of this and have already taken advantage of improperly secured app-to-app connections in recent supply chain attacks, such as those experienced by Microsoft and GitHub. ... As organizations try to integrate low-code and no-code applications with legacy systems or other third-party applications, technical challenges can arise. For example, if an organization wants to integrate a low-code application with an existing ERP system, it may face challenges in terms of data mapping and synchronization. Some low-code and no-code applications are built to export data and share it well, but when it comes to integrating event triggers, business logic, or workflows, these software solutions hit limits. 


Rethinking AI benchmarks: A new paper challenges the status quo of evaluating AI

One of the key problems that Burnell and his co-authors point out is the use of aggregate metrics that summarize an AI system’s overall performance on a category of tasks such as math, reasoning or image classification. Aggregate metrics are convenient because of their simplicity, but that convenience comes at the cost of transparency, hiding the nuances of the AI system’s performance on critical tasks. “If you have data from dozens of tasks and maybe thousands of individual instances of each task, it’s not always easy to interpret and communicate those data. Aggregate metrics allow you to communicate the results in a simple, intuitive way that readers, reviewers, or — as we’re seeing now — customers can quickly understand,” Burnell said. “The problem is that this simplification can hide really important patterns in the data that could indicate potential biases, safety concerns, or just help us learn more about how the system works, because we can’t tell where a system is failing.”
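The pattern Burnell describes is easy to demonstrate with toy numbers (the data here is invented, not from the paper): a respectable aggregate score can mask total failure on one class of instances.

```python
# Illustrative sketch (hypothetical data): an aggregate accuracy score
# can hide a systematic failure on one subgroup of task instances.

def accuracy(results):
    return sum(results) / len(results)

# 1 = correct, 0 = incorrect, for two hypothetical task subgroups
easy_instances = [1] * 90   # the model solves every "easy" case...
hard_instances = [0] * 10   # ...but fails every "hard" case

aggregate = accuracy(easy_instances + hard_instances)
per_group = {
    "easy": accuracy(easy_instances),
    "hard": accuracy(hard_instances),
}

print(f"aggregate accuracy: {aggregate:.0%}")   # 90% looks fine
print(f"per-group accuracy: {per_group}")       # 0% on 'hard' is invisible above
```

Reporting only the 90% aggregate is exactly the simplification the paper warns about: the complete failure on the "hard" subgroup disappears from view.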


A Practical Guide for Container Security

Developers and DevOps teams have embraced the use of containers for application deployment. In a report, Gartner stated, "By 2025, over 85% of organizations worldwide will be running containerized applications in production, a significant increase from less than 35% in 2019." On the flip side, various statistics indicate that the popularity of containers has also made them a target for cybercriminals who have been successful in exploiting them. According to a survey in Red Hat's 2023 State of Kubernetes Security report, 67% of respondents stated that security was their primary concern when adopting containerization. Additionally, 37% reported that they had suffered revenue or customer loss due to a container or Kubernetes security incident. These data points emphasize the significance of container security, making it a critical and pressing topic for discussion among organizations that are currently using or planning to adopt containerized applications.


6 finops best practices to reduce cloud costs

Centralizing cloud costs from public clouds and data center infrastructure is a key finops concern. The first thing finops does is to create a single-pane view of consumption, which enables cost forecasting. Finops platforms can also centralize operations like shutting down underutilized resources or predicting when to shift off higher-priced reserved cloud instances. Platforms like Apptio, CloudZero, HCMX FinOps Express, and others can help with shift-left cloud cost optimizations. They also provide tools to catalog and select approved cloud-native stacks for new projects. ... “Today’s developers now have a choice between monolithic cloud infrastructure that locks them in and choosing to assemble cloud infrastructure from modern, modular IaaS and PaaS service providers,” says Kevin Cochrane, chief marketing officer of Vultr. “By choosing the latter, they can speed time to production, streamline operations, and manage cloud costs by only paying for the capacity they need.” As an example, a low-usage application may be less expensive to set up, run, and manage on AWS Lambda with a database on AWS RDS, rather than running it on AWS EC2 reserved instances.
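The Lambda-versus-EC2 comparison at the end comes down to simple arithmetic. The sketch below uses illustrative unit prices (placeholders, not current AWS rates) to show why an always-on instance loses for a low-usage workload.

```python
# Back-of-the-envelope cost comparison for a low-usage application:
# serverless (pay per invocation) vs. an always-on instance.
# All prices below are assumed placeholders -- check your provider's
# current price list before drawing real conclusions.

requests_per_month = 50_000
avg_duration_s = 0.2
memory_gb = 0.5

price_per_gb_second = 0.0000167        # assumed serverless compute rate
price_per_million_requests = 0.20      # assumed per-request charge
instance_hourly_rate = 0.05            # assumed small reserved instance

serverless_compute = requests_per_month * avg_duration_s * memory_gb * price_per_gb_second
serverless_requests = requests_per_month / 1_000_000 * price_per_million_requests
serverless_total = serverless_compute + serverless_requests

# The instance is billed for every hour of the month, idle or not.
instance_total = instance_hourly_rate * 24 * 30

print(f"serverless: ${serverless_total:.2f}/month")
print(f"always-on:  ${instance_total:.2f}/month")
```

With these assumed numbers the serverless bill is pennies while the reserved instance costs tens of dollars; the break-even point shifts as request volume grows, which is the trade-off finops teams are meant to track.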


Artificial Intelligence: A Board of Directors Challenge – Part II

It is essential for organizations to dedicate time and effort to considering the potential unintended consequences, or “unknown unknowns,” of AI deployments. Doing so helps prevent adverse outcomes that may arise if AI is deployed without proper consideration. To achieve this, it is necessary to understand the Rumsfeld Knowledge Matrix, a conceptual framework introduced by Donald Rumsfeld, the former United States Secretary of Defense, to categorize and analyze knowledge and information based on different levels of certainty and awareness. The matrix consists of four quadrants. Known knowns: these are things that we know and are aware of. They represent information that is well understood and can be easily articulated. I call these “Facts.” Known unknowns: these are things that we know we don’t know. In other words, there are gaps in our knowledge which we are aware of and recognize as areas where further research or investigation is needed. We need to ask these “Questions.”


How to achieve cyber resilience?

Instead of relegating security development to a forgettable annual calendar reminder, a continuous approach must keep security at the forefront of mind throughout the year. Security threats also need to be brought to life with realistic simulation exercises. This approach will provide a much more engaging experience for participants and a far more accurate indication of their abilities. Real-life exercises give far more insight into an individual’s mindset and potential than a certification’s often rote, static nature. Security teams must be ready to respond rapidly and confidently to the latest emerging threats, aligned with industry best practices. They must have the right skills, from closing off newly discovered zero days, to mitigating serious incoming threats like attacks exploiting Log4Shell. But they must also be able to apply them calmly and in control even if they face a looming crisis. This capability can only be developed through continuous exercise.


The IT talent flight risk is real: Are return-to-office mandates the right solution?

Most workers require location flexibility when considering a job change. In addition, most workers in an IT function would only consider a new job or position that allows them to work from a location of their choosing. Requiring employees to return fully on-site is also a risk to DEI. Underrepresented groups of talent have seen improvements in how they work since being allowed more flexibility. For example, most women who were fully on-site prior to the pandemic, but have been remote since, report their expectations for working flexibly have increased since the beginning of the pandemic. Employees with a disability have also found a vast improvement to the quality of their work experience. Since the pandemic, Gartner research shows that knowledge workers with a disability have found the extent to which their working environment helps them be productive has improved. In a hybrid environment for this population, perceptions of equity have also improved, as they have experienced higher levels of respect and greater access to managers.


Common Cybersecurity Risks to ICS/OT Systems

Protecting ICS/OT systems from cyberthreats is crucial for ensuring the resilience of critical infrastructure. Recent cyberattacks on ICS/OT systems have highlighted the potential impact of these attacks on critical infrastructure and the need for organizations to prioritize cybersecurity for their ICS/OT systems. By being aware of common cybersecurity risks and taking proactive steps to mitigate them, organizations can protect their ICS/OT systems and maintain operational resilience. The above-mentioned incidents demonstrate that cyberattacks on ICS/OT systems can cause physical harm, financial losses and public safety risks. Organizations must protect their ICS/OT systems from cyberthreats by conducting regular vulnerability assessments, implementing network segmentation and providing employee training on cybersecurity best practices. Compliance with relevant regulations and standards and collaboration between IT and OT teams can also help mitigate cybersecurity risks to ICS/OT systems.


10 emerging innovations that could redefine IT

The most common paradigm for computation has been digital hardware built of transistors that have two states: on and off. Now some AI architects are eyeing the long-forgotten model of analog computation, where values are expressed as voltages or currents. Instead of just two states, these can take on an almost infinite number of values, or at least as many as the precision of the system can measure accurately. The fascination with the idea comes from the observation that AI models don’t need the same kind of precision as, say, bank ledgers. If some of the billions of parameters in a model drift by 1%, 10% or even more, the others will compensate and the model will often still be just as accurate overall. ... The IT department has a big role in this debate as it tests and deploys the second and third generations of collaboration tools. Basic video chatting is being replaced by more purpose-built tools for enabling standup meetings, casual discussions, and full-blown multi-day conferences. The debate is not just technical. Some of the decisions are being swayed by the investment the company has made in commercial office space.
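The parameter-drift tolerance mentioned above can be sanity-checked with a toy linear scorer: perturb every weight by up to 10% and count how many predictions flip. The model and data here are synthetic, purely for illustration.

```python
# Toy illustration of noise tolerance: multiply each weight of a random
# linear scorer by (1 +/- 10%) and see how many of its yes/no outputs change.
# A synthetic model, not a real benchmark.
import random

random.seed(42)  # deterministic for reproducibility

dim, n_samples = 32, 200
weights = [random.uniform(-1, 1) for _ in range(dim)]
samples = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_samples)]

def predict(w, x):
    # Binary decision: sign of the dot product
    return sum(wi * xi for wi, xi in zip(w, x)) >= 0.0

# Perturb every weight independently by up to 10%
noisy_weights = [w * (1 + random.uniform(-0.10, 0.10)) for w in weights]

flips = sum(predict(weights, x) != predict(noisy_weights, x) for x in samples)
print(f"predictions changed by 10% weight noise: {flips}/{n_samples}")
```

Only inputs that were already near the decision boundary flip; the vast majority of outputs survive the noise, which is the property analog hardware hopes to exploit.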



Quote for the day:

"When you accept a leadership role, you take on extra responsibility for your actions toward others." -- Kelley Armstrong

Daily Tech Digest - June 12, 2023

Cloud-Focused Attacks Growing More Frequent, More Brazen

One key finding is that hackers are becoming more adept — and more motivated — in targeting enterprise cloud environments through a growing range of tactics, techniques and procedures. These include deploying command-and-control channels on top of existing cloud services, achieving privilege escalation, and moving laterally within an environment after gaining initial access. ... While attack vectors and methods are increasingly varied, they often rely on some common denominators, including the oldest one around: human error. For example, 38% of observed cloud environments were running with insecure default settings from the cloud service provider. Indeed, cloud misconfigurations are one of the major sources of breaches. Similarly, identity access management (IAM) is another huge area of risk rife with human error. In two out of three cloud security incidents observed by CrowdStrike, IAM credentials were found to be over-permissioned, meaning the user had higher levels of privileges than necessary.
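The over-permissioning finding suggests a simple check: diff the permissions an identity has been granted against those it has actually used. A minimal sketch, with made-up action names:

```python
# Hedged sketch of detecting an over-permissioned identity: compare
# granted actions with actions observed in use. The action names and
# the 90-day usage window are illustrative, not from any real audit.

granted = {
    "s3:GetObject", "s3:PutObject", "s3:DeleteBucket",
    "iam:PassRole", "ec2:TerminateInstances",
}
used_last_90_days = {"s3:GetObject", "s3:PutObject"}

unused = granted - used_last_90_days  # candidates for removal

if unused:
    print(f"over-permissioned: {len(unused)} granted-but-unused actions")
    for action in sorted(unused):
        print(" -", action)
```

Real IAM tooling works from access logs rather than hand-written sets, but the least-privilege principle it enforces is this same set difference.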


Enterprise Architecture Maturity Model – a Roadmap for a Successful Enterprise

Assessment is the evaluation of the EA practice against the reference model. It determines the level at which the organization currently stands, indicating the organization’s maturity in the area concerned and the practices on which the organization needs to focus to see the greatest improvement and the highest return on investment. ... Development of the EA is an ongoing process and cannot be delivered overnight. An organization must patiently work to nurture and improve its EA program until architectural processes and standards become second nature and the architecture framework and blueprint become self-renewing. Maturity assessment is a standard business tool for understanding the maturity level of an organization. An EAM Assessment Framework comprises a maturity model with distinct maturity levels, a set of elements to be assessed, a methodology, and a toolkit for assessment (questionnaires, tools, etc.). The outcome is a detailed assessment report describing the maturity of the organization overall, as well as against each of the architectural elements.


European Commission Wants Labels on AI-Generated Content -- Now

The regulatory push might lead to deeper scrutiny of where AI-generated content comes from, down to its data sources. Jan Ulrych, vice president of research and education at Manta, favors the efforts the EU is taking to regulate this space. Manta is a provider of a data lineage platform that offers visibility to data flows, and the company sees data lineage as a way to fact-check AI content. Ulrych says when it comes to news content, there does not seem to be an effective method in place yet to validate or make sources transparent enough for fact-checking in real-time, especially with the AI’s ability to spawn content. “AI sped up this process by making it possible for anyone to generate news,” he says. It is almost a given that generative AI will not disappear because of regulations or public outcry, but Ulrych sees the possibility of self-regulation among vendors along with government guardrails as healthy steps. “I would hope, to a large degree, the vendors themselves would invest into making the data they’re providing more transparent,” he says.


Finding The Right Size of a Microservice

Determining the right level of granularity — the size of the service — is one of the many hard parts of a microservices architecture that we as developers struggle with. Granularity is not defined by the number of classes or lines of code in a service, but rather by what the service does — hence the conundrum of getting service granularity right. ... Since we are living in the era of microservices and nanoservices, many development teams make the mistake of breaking services apart arbitrarily and ignoring the consequences that come with it. To find the right size, one should carry out a trade-off analysis across different parameters and make a calculated decision about the context and boundary of a microservice. ... The scope and function mainly depend on two attributes. The first is cohesion: the degree and manner to which the operations of a particular service interrelate. The second is the overall size of a component, usually measured in terms of the number of responsibilities, the number of entry points into the service, or both.
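The size attribute described above (responsibilities and entry points) lends itself to a crude heuristic. The thresholds and service names below are invented for illustration; a real team would calibrate its own bands.

```python
# Illustrative granularity check: flag services whose responsibility or
# entry-point counts fall outside a team-chosen band. Counts, thresholds
# and service names are all hypothetical.

services = {
    "orders":   {"responsibilities": 4,  "entry_points": 6},
    "mega-app": {"responsibilities": 23, "entry_points": 41},
    "is-even":  {"responsibilities": 1,  "entry_points": 1},
}

MAX_RESPONSIBILITIES, MAX_ENTRY_POINTS = 10, 15

def granularity_flags(svc):
    flags = []
    # Too large: many responsibilities or entry points suggest low cohesion
    if svc["responsibilities"] > MAX_RESPONSIBILITIES or svc["entry_points"] > MAX_ENTRY_POINTS:
        flags.append("consider splitting")
    # Too small: a single trivial responsibility may not justify a service
    if svc["responsibilities"] <= 1 and svc["entry_points"] <= 1:
        flags.append("consider merging")
    return flags

for name, svc in services.items():
    print(name, granularity_flags(svc))
```

A heuristic like this only surfaces candidates; the actual split-or-merge decision still rests on the trade-off analysis the article calls for.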


What is Web3 decentralized cloud storage?

Web3 storage is, as the name suggests, decentralised, meaning the data is held across multiple repositories. If a government agency or hacker wanted to obtain confidential data, there’s no single location to raid. Unless granted the user’s keys, there’s no way to unlock data held on Web3 storage, protecting security and privacy. ‘For a company looking for resilient, low cost, and predictable storage … Web3 storage is now undeniably a viable – if still unusual – proposition.’ Web3 cloud storage scales well. Local storage can run out, but with Web3 there is always room for more (even if you may have to pay to access the extra space). “It can also scale horizontally, accommodating the increasing demand for data storage without centralised bottlenecks,” says Servadei. Access speeds are acceptable. “It’s going to be slower than you’d have a normal hard-drive or CD. But it stores data the same way Amazon S3 stores data.” Decentralised storage is also a more permanent way to store files. Hosting sites don’t last forever. Anyone wanting to access historic websites on Geocities or 4sites or Xanga will know the annoyance of web hosts going bust. Link rot is a curse of the internet.


To solve the cybersecurity worker gap, forget the job title and search for the skills you need

Steven Sim, CISO for a global logistics company and a member of the Emerging Trends Working Group with the IT governance association ISACA, has adopted this thinking. ... “They may not have the relevant [security] certification, but they have the domain knowledge,” he says, pointing out that OT security has some requirements that differ from IT security which makes that OT background particularly valuable on his team. Sim says he looks for “a passion and keenness to learn” in such candidates. He also looks for candidates who demonstrate ownership of their work, a high degree of integrity, a willingness to collaborate, and a “risk-based mindset.” Sim then upskills such hires by having them receive on-the-job training and earn security certifications. Moreover, he says drawing workers from OT helps create more collaboration with the function and ultimately more secure OT operations. He says that result has helped get OT leaders onboard with his recruiting efforts, adding that they see it as a “symbiotic win-win relationship.”


Innovation without disruption: virtual agents for hyper-personalized customer experience (CX)

VAs help “hold the fort” on routine calls so live agents can focus more on complicated interactions, but they’re smart enough to handle certain complexities on their own. They can effortlessly navigate topics, handle a wide range of questions, and seamlessly operate across multiple channels. The technology also grows in intelligence with use, allowing VAs to act with greater – comparably humanlike – awareness. For example, you might present a customer with a choice of channels for engagement such as chat, phone, and social media. After communicating with the customer, your VA can default to that person’s preferred channel for future conversations. ... VAs can hyper-personalize even routine interactions. Let’s say a customer initiates a chat session with a VA for resetting a forgotten password. The VA can ask the customer if they would like to switch to text messaging for a more effective multimedia experience. If the customer accepts, the chat session will end and the VA will seamlessly switch to SMS.
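The channel-preference behavior described above amounts to remembering a customer's last choice and falling back to a default. A minimal sketch (the in-memory store and customer IDs are hypothetical):

```python
# Tiny sketch of channel-preference memory for a virtual agent.
# A real deployment would persist this in a customer profile store.

preferred_channel = {}  # customer_id -> channel they last chose

def record_choice(customer_id, channel):
    preferred_channel[customer_id] = channel

def channel_for(customer_id, default="chat"):
    # Returning customers get their last choice; new ones get the default
    return preferred_channel.get(customer_id, default)

record_choice("cust-42", "sms")
print(channel_for("cust-42"))   # prints "sms": their remembered preference
print(channel_for("cust-99"))   # prints "chat": no history yet
```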


Building a secure coding philosophy

Discussing secure coding, Læarsson says: “From criteria’s definition through coding and release – our quality assurance processes include both automated and manual testing, which helps us ensure that we push and maintain high standards with every application and update we do. The software we develop is tested for both functional and structural quality standards – from how effectively applications adhere to the core design specifications, to whether it meets all security, accessibility, scalability and reliability standards.” Peer review is used to run an in-depth technical and logical line-by-line review of code to ensure its quality. Within the National Digitalisation Programme, Læarsson says: “Our low-code development projects are divided into scrum teams, where each team creates stories and tasks for each sprint and defines specific criteria for these.” These stories enable people to understand the role of a particular piece of software functionality. “When stories are done, they are tested by the same analysts who have specified the stories. 


UK Takes the First Step to Stop Authorized Payment Scams

The U.K.'s Payment Systems Regulator said fighting APP scams requires taking an ecosystem-level approach. Fraudsters are specifically targeting faster payment services because of the speed of transactions, so financial institutions need to be confident that they can authorize payments between each other, no matter what the channel. Consumers and businesses have always trusted banks to provide expertise and capabilities they do not possess themselves. They want to know that their bank is doing everything it can to protect them from scammers. Ken Palla, retired director of MUFG Bank, said the regulator has put together a very detailed and complete document. "It is clear what is included in the policy statement and what is excluded. The PSR wants payment firms to take responsibility for protecting their customers at the point a payment is made. In doing so, it expects the new reimbursement requirement to lead firms to innovate and develop effective, data-driven interventions to change customer behavior."


Building a culture of security awareness in healthcare begins with leadership

A well-tailored security program must be just that: tailored. Many legal security frameworks are moving from specificity in controls towards a discretionary-based approach, a standard applied by governing bodies that interpret leading-edge developments in the industry. An organization must trace what data is stored or processed and ensure security controls are mapped internally within the organization and externally across vendors. Healthcare organizations must dedicate time to ensuring appropriate administrative, technical, and physical controls are in place at the organization and its vendors to protect the data stored and processed. The saying “one size fits all” is never true for how a security program is administered and applied in the healthcare technology industry, or any other industry. However, the fundamental principles are the same: understanding what data is processed by an organization, identifying true risks (internal and external) to the data, evaluating the impacts of those risks, and determining whether existing controls are adequate to reduce those risks to an acceptable standard.



Quote for the day:

"The key to being a good manager is keeping the people who hate me away from those who are still undecided." -- Casey Stengel

Daily Tech Digest - June 11, 2023

Tips Every CFO Should Consider For Implementing Tech Solutions

Conduct a cost assessment to pinpoint areas where tech upgrades may be needed and determine if these upgrades will add value to your financial operations. Remember, newer doesn’t necessarily mean better. Therefore, you must invest in tech solutions and upgrades that improve efficiency across the board. By taking the initiative and identifying areas where tech solutions can solve specific pain points, CFOs can help ensure a seamless transition when implementing new technology. ... While many organizations today jump at the opportunity to implement updated solutions to replace legacy systems, an overhaul doesn’t have to be made just because new technologies become available. ... The key is fully understanding why you’re switching to and implementing new technology. Just because certain tasks and processes can be done using advanced tech tools doesn’t necessarily mean your company needs new software.


The power of data management in driving business growth

Effective data management means business leaders can stay abreast of the ever-surging tide of data, as well as deploying new services quickly, and scaling faster. It can deliver insights which lead to new business streams or even the reinvention of the entire company. Data management comes in multiple forms, encompassing both hardware and software. Solutions include unified storage, which enables organisations to run and manage files and applications from a single device, and storage-area networks (SANs), offering network access to storage devices. ... As well as data management, the Data Leaders thrive in two other key areas: data analytics and data security. These three elements are interdependent. Data management naturally works hand-in-hand with data analytics, and data security is increasingly important as business leaders hope to share data with partners securely. It’s impossible for leaders to thrive when it comes to data management if they haven’t harnessed data security, or to adopt data analytics without mastering data management. 


Zero Trust: Beyond the Smoke and Mirrors

Despite misleading marketing, a lack of transparency into the available technologies, the limited scope of the technologies themselves, mounting privacy concerns, as well as a complete question mark when it comes to price and deployment, trust in zero trust remains. Organizations know they need to embrace it – and preferably yesterday. ... Despite this enhanced savviness and market maturity around zero trust, major barriers to implementation remain. These include: Damn you, marketers. Some vendors may use misleading marketing tactics to promote their zero-trust solutions, overstating their capabilities or making false claims about their performance. See through the noise as best you can; most tools let you test things out first, so take vendors up on that. What the hell does this cost? Implementing zero trust security solutions can be expensive, especially for organizations with large IT infrastructures. Chances are, the more devices, networking gear, locations, and compliance standards you need to adhere to, the more this will cost. Complexity is almost always guaranteed. Zero trust can also be complex to deploy, especially across distributed, multi-vendor networks.


Technical Debt is Inevitable. Here’s How to Manage It

Technical debt is a threat to innovation, so how can we mitigate it? Well, if you don’t already do so, it’s a good idea to build technical debt into your budgeting, planning and ongoing operations, said Orlandini. “You have to manage it, expect it and be responsible with your technical stacks in the same way you are responsible with your financial stacks,” he said. Here are a few other ways to manage the debt you have and avoid accumulating more. Consider using AI to refactor legacy code. Generative AI could be leveraged to refactor legacy code into more modern programming languages, helping, for instance, to automatically convert Perl code into JavaScript. Today’s large language models (LLMs) could help solve many of these problems. However, since they are built on a pre-existing body of work, they will use less trendy languages and might cause some technical debt in the process, cautioned Orlandini. Don’t over-rely on new DevOps processes as a cure-all. DevOps can accelerate the time to release features, but it does not, by its nature, eliminate technology changes, said Orlandini.


Cloud repatriation and the death of cloud-only

IT analyst firm IDC told us that its surveys show repatriation as a steady trend ‘essentially as soon as the public cloud became mainstream,’ with around 70 to 80 percent of companies repatriating at least some data back from public cloud each year. “The cloud-first, cloud-only approach is still a thing, but I think it's becoming a less prevalent approach,” says Natalya Yezhkova, research vice president within IDC's Enterprise Infrastructure Practice. “Some organizations have this cloud-only approach, which is okay if you're a small company. If you're a startup and you don't have any IT professionals on your team it can be a great solution.” While it may be common to move some workloads back, it’s important to note a wholesale withdrawal from the cloud is incredibly rare. ... “They think about public cloud as an essential element of the IT strategy, but they don’t need to put all the eggs into one basket and then suffer when something happens. Instead, they have a more balanced approach; see the pros and cons of having workloads in the public cloud vs having workloads running in dedicated environments.”


5 Ways to Implement AI During Information Risk Assessments

The problem is that there is no such thing as a perfectly secure system; there will always be vulnerabilities that an IT team is unaware of. This is why IT teams perform regular penetration tests – simulated attacks to test a system’s security. ... By turning this task over to AI, companies can run automated penetration tests at any time. These AI models can work in the background and provide immediate alerts the moment a vulnerability is found. Better still, the AI can classify vulnerabilities based on the threat level, meaning if there’s a vulnerability that could allow for a system-wide infiltration, then that vulnerability will be prioritized above lesser threats. ... AI-powered predictive analytics can be an incredibly powerful tool that allows an organization to estimate the results of a marketing campaign, a customer’s lifetime value, or the impact of a looming recession. But predictive analytics can also be used to predict the likelihood of a future data breach.
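Prioritizing vulnerabilities by threat level is, at its core, a sort. A hedged sketch with invented findings and CVSS-like scores:

```python
# Illustrative triage: order findings so the highest-impact vulnerability
# is handled first. IDs, descriptions and scores are all made up.

findings = [
    {"id": "VULN-101", "desc": "outdated TLS config",       "severity": 4.3},
    {"id": "VULN-102", "desc": "unauthenticated admin API", "severity": 9.8},
    {"id": "VULN-103", "desc": "verbose error messages",    "severity": 2.1},
]

for f in sorted(findings, key=lambda f: f["severity"], reverse=True):
    # Bucket scores into coarse threat levels for alerting
    level = "critical" if f["severity"] >= 9 else "high" if f["severity"] >= 7 else "lower"
    print(f'{f["id"]} [{level}] {f["desc"]}')
```

The AI's contribution in the scenario above is assigning those severity scores continuously and automatically; once scored, surfacing the system-wide threat first is straightforward.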


13 Cloud Computing Risks & Challenges Businesses Are Facing These Days

Starting with one of the major findings of this report, we can see that both enterprises and small businesses cite the ability to manage cloud spend as their biggest challenge, overtaking security concerns after a decade in first place. This may be a consequence of economic volatility, where organizations keep spending and innovating with multiple cloud services to keep up with the digital world in an unstable environment. ... Proper IT governance should ensure IT assets are implemented and used according to agreed-upon policies and procedures, that these assets are properly controlled and maintained, and that they support the organization’s strategy and goals. In today’s cloud-based world, IT does not always have full control over the provisioning, de-provisioning, and operations of infrastructure. This has made it more difficult for IT to provide the required governance, compliance, risk, and data quality management. To mitigate the various risks and uncertainties in transitioning to the cloud, IT must adapt its traditional control processes to include the cloud.


When are containers or serverless a red flag?

Limited use cases mean that containers and serverless technologies are well-suited for certain types of applications, such as microservices or event-driven functions. But they do not apply to everything new. Legacy applications or other traditional systems may require significant modifications or restructuring to run effectively in containers or serverless environments. Of course, you can force-fit any technology to solve any problem, and with enough time and money, it will work. However, those “solutions” will be low-value and underoptimized, driving more spending and less business value. Complexity is a common downside of most new technology trends. Container and serverless platforms introduce additional complexity that the teams building and operating these cloud-based systems must deal with. Complexity usually means increased development and maintenance costs, less value, and perhaps unexpected security and performance problems. This is on top of the fact that they just cost more to build, deploy, and operate.


Vector Databases: What Devs Need to Know about How They Work

Unsurprisingly, a vector database deals with vector embeddings. We can already perceive that dealing with vectors is not going to be the same as just dealing with scalar quantities. The queries we deal with in traditional relational tables normally match values in a given row exactly. A vector database interrogates the same space as the model which generated the embeddings. The aim is usually to find similar vectors. So initially, we add the generated vector embeddings into the database. As the results are not exact matches, there is a natural trade-off between accuracy and speed. And this is where the individual vendors make their pitch. Like traditional databases, there is also some work to be done on indexing vectors for efficiency, and post-processing to impose an order on results. Indexing is a way to improve efficiency as well as to focus on properties that are relevant in the search, paring down large vectors. Trying to accurately represent something big with a much smaller key is a common strategy in computing; we saw this when looking at hashing.
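The brute-force version of this similarity search fits in a few lines of plain Python. This is a minimal sketch, not how a production vector database works internally: the toy embeddings, query vector, and `top_k` helper are made up for illustration, and real systems replace the exhaustive scan with approximate indexes to make the accuracy/speed trade-off described above.

```python
import math

def cosine_similarity(a, b):
    # Similarity of two vectors: cosine of the angle between them.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, embeddings, k=3):
    # Exhaustive scan: score every stored embedding, best first.
    scores = [(i, cosine_similarity(query, e)) for i, e in enumerate(embeddings)]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:k]

# Toy "database" of four 3-dimensional embeddings.
db = [[1, 0, 0], [0.9, 0.1, 0], [0, 1, 0], [0, 0, 1]]
query = [1.0, 0.05, 0.0]

print([i for i, _ in top_k(query, db, k=2)])  # the two nearest: [0, 1]
```

Note that the results are ranked by similarity rather than matched exactly, which is why ordering and post-processing matter in a way they don't for a traditional equality-based query.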


Understanding Data Mesh Principles

When an organization embraces a data mesh architecture, it shifts its data usage and outcomes from bureaucracy to business activities. According to Dehghani, four data mesh principles explain this evolution: domain-driven data ownership, data as a product, self-service infrastructure, and federated computational governance. ... The self-service infrastructure as a platform supports the three data mesh principles above: domain-driven data ownership, data as a product, and federated computational governance. Consider this interface an operating system where consumers can access each domain’s APIs. Its infrastructure “codifies and automates governance concerns” across all the domains. According to Dehghani, such a system forms a multiplane data platform, a collection of related cross-functional capabilities, including data policy engines, storage, and computing. Dehghani thinks of the self-service infrastructure as a platform that enables autonomy for multiple domains and is supported by DataOps.



Quote for the day:

"The level of morale is a good barometer of how each of your people is experiencing your leadership." -- Danny Cox

Daily Tech Digest - June 10, 2023

Vetting an Open Source Database? 5 Green Flags to Look for

There’s an important difference between offerings that are legitimate open source versus open source-compatible. “Captive” open source solutions pose as the original open source solution from which they originated, but in reality, they are merely branches of the original code. This can result in compromised functionality or the inability to access features introduced in newer versions of the true open source solution, as the branching occurred prior to the introduction of those features. “Fake” open source can feature restrictive licensing, a lack of source code availability and a non-transparent development process. Despite this, these solutions are sometimes still marketed as open source because, technically, the code is open to inspection and contributions are possible. But when it comes down to it, the license is held by a single company, so the degree of freedom is minute compared to that of actual open source. The key is to minimize the gap between the core database and its open source origins.


Zero trust and cloud capabilities essential for data management in enterprises

The challenge, however, lies in implementing a complete solution guided by the seven pillars of Zero Trust. No company can do this alone. To help private and public sector organizations simplify adoption, Dell is building a Zero Trust ecosystem. It brings together more than thirty leading technology and security companies to create a unified solution across infrastructure platforms, applications, clouds, and services. PowerStore has always had a strong “security DNA,” safeguarding data with advanced capabilities like hardware root of trust, data-at-rest encryption and AIOps security analytics. As with everything about the platform, the focus is simplicity and automation – delivering “always on” protection without increasing management complexity or relying on human vigilance to be effective. In 2023, the newest PowerStoreOS release adds even more cybersecurity features to meet the stringent requirements, while also enabling an authentic Zero Trust experience for business solutions.


Expecting Too Much From CISOs Can Drive Them Out The Door

“The CISO is there to raise the risk, to shine light on it, to offer solutions, to differentiate and prioritize what needs to be fixed,” he explained. “You can’t ask the CISO to do anything and everything; you need to give them the support — and give them a team that can really make sure the cybersecurity and risk management program is well-functioning.” Expecting too much from CISOs — as so many company boards still do — continues to drive attrition from the security function at a brisk pace, with burnout and the desire for greener pastures pushing 24 percent of Fortune 500 CISOs to switch roles within a year of starting. ... The increasing complexity of the modern cybersecurity defense has dovetailed with the rapid expansion of managed service providers like eSentire, whose ability to offer the full breadth of security capabilities — and to do so confidently enough to offer guarantees like four-hour response times for remote threat suppression — puts them well ahead of anything the average corporate information security department can provide.


SRE Brings Modern Enterprise Architectures into Focus

If the business commitment is that users will reliably have enough light to see what they are doing (service level), an SLO could be that one brightly lit lamp (availability) is maintained for every 10 square feet of space. ... In application delivery systems these could look like CPU utilization, API call and database query time, etc. It’s up to the site reliability engineers to define the SLI measures that impact the business SLOs and what responses will be taken when they fall below specific thresholds by adjusting operating policies and configuration. ... The measures, thresholds, and responses are the intersection of SRE with the other domains of a modern enterprise architecture designed for the application delivery of a digital business. Operational data—telemetry—feeds the observability of the defined measures and thresholds set by SRE. Automation is the combined application of tools, technologies, and practices to enable site reliability engineers to scale defined responses with less toil, thus enabling the efficient satisfaction of the SLOs of a digital service. 
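In code, that measure-threshold-response loop can be sketched as below; the 99.9% availability SLO, the telemetry window, and the response strings are illustrative assumptions, not any particular platform's policy:

```python
def availability_sli(successful, total):
    # SLI: the measured fraction of requests served successfully.
    return successful / total if total else 1.0

def respond(sli, slo=0.999):
    # Compare the measured SLI against the SLO target and select
    # the defined response (hypothetical policy strings).
    if sli >= slo:
        return "within SLO"
    return "below SLO: page on-call, freeze risky deploys"

# Hypothetical telemetry for one evaluation window.
window = {"successful": 998_700, "total": 1_000_000}
sli = availability_sli(**window)
print(round(sli, 4), "->", respond(sli))  # 0.9987 is below the 0.999 target
```

In practice the telemetry feed, thresholds, and automated responses live in monitoring and policy tooling, but the logic the site reliability engineer defines reduces to this comparison.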


What LOB leaders really think about IT: IDC study

For many IT leaders, turning that tide may require a new approach. CIOs can demonstrate their value to the business and earn that seat at the table by tying what they do to business goals, Thomson suggested. “One of the biggest challenges that IT people have is being able to communicate their business value in a language that the business understands,” she said. “Talking in business outcomes is the currency that enables IT to gain trust and show the value that they’re delivering.” In addition to mastering business concepts and taking steps to prove the value of IT, CIOs who are succeeding at this are putting in place seamless teams where there’s no wall between IT and the business, she said. “It’s just seen as one cross-functional team where everybody understands the common goal that is driving all the business decisions.” Such strategic maneuvers are essential to becoming a digital business, one where value creation is based on and dependent on the use of digital technologies, from how processes are run to the products, services, and experiences it provides, Thomson said.


Microsoft commits to supporting customers on their responsible AI journeys

The commitments include sharing Microsoft's expertise while teaching others to develop AI safely, establishing a program to ensure AI applications are created to follow legal regulations, and pledging to support the company's customers in implementing Microsoft's AI systems responsibly within its partner ecosystem. "Ultimately, we know that these commitments are only the start, and we will have to build on them as both the technology and regulatory conditions evolve," Cook wrote in the statement shared by Microsoft. Though the company only recently developed its Bing Chat generative AI tool, Microsoft will start by sharing key documents and methods that detail the company's expertise and knowledge gained since beginning its journey into AI years ago. The company will also share training curriculums and invest in resources to teach others how to create a culture of responsible AI use within organizations working with the technology. Microsoft will establish an "AI Assurance Program" to leverage its own experiences and apply the financial services concept called "Know your customer" to AI development.


Data Privacy Standard Contractual Clauses Called Into Question After Meta Ireland Fine

Although this decision deals a particularly large blow to Meta, all entities relying upon SCCs to complete data transfers from the EU to the U.S. are now affected. Due to the continued and wide-reaching effects of the U.S.’s strategy on surveillance, we’ve now entered yet another period of uncertainty, and the ability to lawfully transfer personal data into the U.S. from the EU and United Kingdom is again in question. ... As a remedy, the DPC has given Meta five months to suspend all transfers of personal data to the U.S., bring its processing activities into compliance with EU law, and delete any EU personal data that has been transferred unlawfully under this decision. The EU has long struggled with how to regulate EU personal data transfers to the U.S. After the invalidation of the U.S.-EU Safe Harbor Agreement and the U.S.-EU Privacy Shield in the Schrems I & Schrems II decisions, entities including Meta have mostly relied on SCCs to lawfully transfer EU personal data into the U.S., where U.S. laws are considered to provide substantially less protection.


5 Critical Data Governance Truths Every Data Leader Should Be Aware Of

Implementing a comprehensive data governance program comes with a significant price tag; firms can easily spend over US$1 million annually just on resources to maintain data integrity. The risks associated with poor data governance, however, are many: reputational damage, lost revenue, and decisions made on inaccurate data, which lead to poor business outcomes. ... Data governance is often misunderstood to be solely about data. In reality, it comprises many components, each playing a crucial role in ensuring data is managed effectively and efficiently. ... A good data governance program is one with KPIs. The KPIs should be specific, measurable, and understandable by everyone in the organization. By measuring these KPIs regularly and providing timely feedback, managers can determine whether their efforts are paying off. They can also communicate value metrics to key executives.
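As a toy example of a specific, measurable data governance KPI, the sketch below scores field completeness across a handful of records; the records, required fields, and `completeness_kpi` helper are all hypothetical:

```python
def completeness_kpi(records, required_fields):
    # KPI: share of records with every required field populated.
    passing = sum(
        all(r.get(f) not in (None, "") for f in required_fields)
        for r in records
    )
    return passing / len(records) if records else 1.0

records = [
    {"id": 1, "email": "a@example.com", "country": "DE"},
    {"id": 2, "email": "", "country": "US"},               # fails: empty email
    {"id": 3, "email": "c@example.com", "country": None},  # fails: missing country
    {"id": 4, "email": "d@example.com", "country": "FR"},
]
kpi = completeness_kpi(records, ["email", "country"])
print(f"completeness: {kpi:.0%}")  # 2 of 4 records pass -> 50%
```

A number like this, tracked over time, is the kind of metric a governance team can report to executives and act on.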


CDEI publishes portfolio of AI assurance techniques

The "portfolio of AI assurance techniques" was created to help anyone involved in designing, developing, deploying or otherwise procuring AI systems do so in a trustworthy way, by giving examples of real-world auditing and assurance techniques. “AI assurance is about building confidence in AI systems by measuring, evaluating and communicating whether an AI system meets relevant criteria,” said the CDEI, adding these criteria could include regulations, industry standards or ethical guidelines. “Assurance can also play an important role in identifying and managing the potential risks associated with AI. To assure AI systems effectively we need a range of assurance techniques for assessing different types of AI systems, across a wide variety of contexts, against a range of relevant criteria.” The portfolio specifically contains case studies from multiple sectors and a range of technical, procedural and educational approaches, to show how different techniques can combine to promote responsible AI.


Consolidating your cyber security strategy

From a security perspective, consolidating threat defence into one system means that all devices and endpoints can be set to one standard, minimising the opportunity for weak spots and gaps to appear. In the event of a breach, such as a member of staff clicking a malicious link, an XDR system can isolate the threat to stop it spreading and roll-back the endpoint to a safe state. Although changing cyber security tactics should not be viewed as a cost cutting solution, vendor consolidation can certainly save money. By replacing multiple products that may overlap, reducing the man hours spent monitoring different systems and avoiding the consequences of a successful breach, businesses can get a better return on their investment. Not all XDR systems are the same, and it is important to choose one that best suits the needs of a business. XDR has traditionally only been available for large enterprises. However, finding the right partnership can allow small and medium sized companies to customise the solution to fit their requirements without unnecessary extras.



Quote for the day:

"Leadership does not always wear the harness of compromise." -- Woodrow Wilson

Daily Tech Digest - June 09, 2023

Why Protecting Data Centers Requires A Personalized Security Approach

Since each industry has its own unique security and privacy needs, businesses should work with security providers to vet their services and ensure they’re a vertical fit. Beyond HIPAA and PCI, these can also include standards in government like FISMA and FEDRAMP as well as FERPA in education. For businesses in these industries, partnering with a security provider with background in their respective vertical is a must. Security needs vary from data center to data center, so security providers must do a thorough analysis of all potential risks and threats. These solutions providers should ask hard questions of their customers to truly understand the security level needed. Businesses need to be prepared for a worst-case scenario and determine how they can secure customer data in the event of a disruption. If there’s a power outage, how long can they be down for? If they’re a retail business, what’s the impact on the bottom line if an outage happens on Black Friday? How much damage to a business’ reputation will happen if customer information is leaked in a breach? 


ChatGPT’s ‘Perfect Storm’: Managing Risk and Eyeing Transformational Change

At the eye of this storm lies the rapid evolution of ChatGPT’s capabilities, marking the advent of what we refer to as the “Age of AI” or the “Fourth Industrial Revolution.” I shed light on ChatGPT’s transformational capabilities, especially its potential to reshape business operations. In my personal experience, ChatGPT has proven itself valuable in tasks such as drafting initial document versions and creating LinkedIn posts, even suggesting suitable emojis! However, accompanying this storm is a limited understanding of the associated risks, further compounded by the absence of a regulatory framework tailored to such advanced AI models and varying levels of organizational preparedness for an AI-driven future. ... It calls for interdisciplinary collaboration involving technological expertise, regulatory compliance, risk management and operational understanding. By ensuring this balanced and holistic approach, organizations can fully exploit the advantages of AI technologies like ChatGPT while mitigating potential risks and pitfalls.


The Six Disruptive Forces That Will Shape Your Business’s Future

Technological advances and digital innovations – the primary driver of growth in the US economy during the past 25 years – will continue to drive new business models and ecosystem relationships. The past few decades witnessed a massive explosion of computing and communications capability, along with the scaling of new business models, and the ability to connect every person on the planet through the internet. The next decade promises even more of this, perhaps exponentially so, driven by technologies such as artificial intelligence, blockchain, 5G networks, and edge computing. ... A proliferation of new communication technologies enabled the widespread adoption of hybrid work models in the wake of the pandemic. Some see this shift in working patterns as an evolutionary step in how work occurs – an incremental change. We see it differently: in our view, remote work represents a step change in how labor markets are organized, raises big questions about productivity, and creates important collateral effects in other areas of the economy.


How to use the new AI writing tool in Google Docs and Gmail

The AI tools in Slides and Sheets are not yet available, but Help Me Write is in limited preview; you can try it out in Google Docs or Gmail on the web by signing up for access to Workspace Labs with your Google account. (You’ll be put on a waitlist before being granted access.) Like the well-known ChatGPT, Help Me Write is a chatbot tool that generates written text based on prompts (instructions) that you give it. Whether you’re a professional writer or someone who dreads having to write for your job, the potential of AI assistance for your writing tasks is appealing. Help Me Write can indeed write long passages of text that are reasonably readable. But its results come with caveats including factual errors, redundancy, and too-generic prose. This guide covers how to use Help Me Write in both Google Docs and Gmail to generate and rewrite text, and how to overcome some of the tool’s shortcomings. Because it’s in preview status, keep in mind that there may be changes to its features, and the results it generates, when it’s finally rolled out to the public.


Winning the Mind Game: The Role of the Ransomware Negotiator

Professional negotiation is the act of taking advantage of the professional communication with the hacker in various extortion situations. The role comprises four key elements:
1. Identifying the scope of the event - Takes place within the first 24-48 hours. Includes understanding what was compromised, how deep the attackers are in the system, whether the act is a single, double or triple ransomware, if the attack was financially motivated or if it was a political or personal attack, etc. In 90% of cases, the attack is financially motivated. If it is politically motivated, the information may not be recovered, even after paying the ransom.
2. Profiling the threat actor - Includes understanding whether the group is known or unknown, their behavioral patterns and their organizational structure. Understanding who the attacker is influences communication. ... This can be used for improving negotiation terms, like leveraging public holidays to ask for a discount.
3. Assessing the "cost-of-no-deal" - Reflecting to the decision makers and the crisis managers what will happen if they don't pay the ransom.


RFI vs. RFP vs. RFQ: What are the differences?

Each document -- a request for information (RFI), a request for proposal (RFP) and a request for quote (RFQ) -- has a distinct purpose when undertaking a significant project, even if some overlap exists. While it's possible to issue all three types of requests for a single project, buying teams will typically only issue one or two of them, given the overlap. ... Software buying teams use a request for information when they want additional information from vendors before finalizing the RFP or RFQ. The buying team may lack clarity on requirements, want more information on available options in the market or need details validated, which the vendors' industry experts can do. ... The RFP will list the requirements in detail, provide a recommended timeline and request pricing from the vendors. The buying team might ask specific questions about the vendor, such as the length of time they've been in business, completion proportion of similar projects, annual sales and number of staff. The RFP response may have mandatory terms to follow, such as a submission due date and other critical information.


Contextual Computing and the Internet of Things: A Perfect Match

The convergence of contextual computing and the IoT is a natural progression, as both technologies rely on data to function effectively. By combining the contextual awareness of AI-powered systems with the vast amounts of data generated by IoT devices, we can create intelligent systems that are capable of making real-time decisions and providing personalized experiences. One of the most significant benefits of this convergence is the ability to create more efficient and sustainable systems. For example, in the realm of energy management, IoT devices can collect data on energy consumption patterns, while contextual computing can analyze this data to identify inefficiencies and suggest improvements. This could lead to the development of smart grids that optimize energy distribution and reduce waste, ultimately contributing to a more sustainable future. Another area where the combination of contextual computing and the IoT can have a significant impact is in healthcare. 
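The energy-management example above can be sketched as a toy anomaly check: flag meter readings that deviate sharply from the average. The hourly readings and the two-sigma threshold below are made-up assumptions, and a real contextual system would use far richer models (weather, occupancy, time of day):

```python
from statistics import mean, stdev

def flag_inefficiencies(readings, z_threshold=2.0):
    # Flag readings whose consumption deviates strongly from the mean,
    # a crude stand-in for the contextual analysis described above.
    mu, sigma = mean(readings), stdev(readings)
    return [i for i, kwh in enumerate(readings)
            if sigma and abs(kwh - mu) / sigma > z_threshold]

# Hypothetical hourly kWh readings from a smart meter.
hourly_kwh = [4.1, 4.0, 4.2, 4.1, 12.5, 4.0, 4.1, 4.2]
print(flag_inefficiencies(hourly_kwh))  # hour 4 stands out: [4]
```

The IoT devices supply the readings; the contextual layer supplies the judgment about which deviations actually represent waste.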


Beyond Requirements: Tapping the Business Potential of Data Governance and Security

The teams responsible for data protection and security have often been pitted against the teams that want to leverage data for business insight. This conflict is unsustainable when the business needs maximum agility to respond to volatile market conditions and unexpected competitive pressures. In fact, the alignment of internal objectives and incentives is an opportunity to accelerate outcomes for the business. ... Functions of data governance, data security and data privacy are becoming increasingly interdependent within the enterprise. Stakeholder communication and collaboration are critical. But in many cases, there is a counterproductive feedback loop inhibiting this critical cultural alignment. Siloed technology often obstructs meaningful interdisciplinary collaboration, which prevents the adoption of more unified supporting technologies. In this sense, both automation and integration should be key areas of technological focus for today’s businesses. Now is the time for change, as many organizations risk falling behind in their data governance and security efforts.


Cybersecurity Pioneer Calls for Regulations to Restrain AI

“We know that you can use deep fakes to do scams or business email compromise attacks or what have you.” Current tools give criminals and other bad actors the ability to generate unlimited personas, which can be used for multiple types of scams. More broadly, the march of AI also means that whatever can be done purely online can be done through automation and large-scale language models like ChatGPT, he said, which has obvious implications for developers. However, he said, humans are harder to replace where there’s an interface between the real world and online technology. Rather than studying to build software frameworks for the cloud, he said, “You should be studying to build software frameworks for, let’s say, medical interfaces for human health because we still need the physical world. For humans to work with humans to fix their diseases.” Looking slightly further ahead, he said that people who worried about the likes of ChatGPT becoming too good, or achieving AGI, “haven’t paid attention”, as that is precisely OpenAI’s declared goal.


The steep cost of a poor data management strategy

For many organizations, the real challenge is quantifying the ROI benefits of data management in terms of dollars and cents. Unlike other business investments, the returns may not be immediately apparent because the benefits accrue over time. This places a major focus on the initial investment instead of the potential outcomes and ROI, often disguising data management’s incredible value. Let’s look at how we can resolve this—while there is still time to do so. Regardless of your industry, data is central to almost every business today. Leveraging that data, in AI models, for example, depends entirely on the accessibility, quality, granularity, and latency of your organization’s data. Without it, organizations incur a significant opportunity cost. A few years ago, Gartner found that “organizations estimate the average cost of poor data quality at $12.8 million per year.” Beyond lost revenue, data quality issues can also result in wasted resources and a damaged reputation.



Quote for the day:

"Even the demons are encouraged when their chief is 'not lost in loss itself.'" -- John Milton

Daily Tech Digest - June 08, 2023

5 Reasons Why IT Security Tools Don't Work For OT

While IT and OT both seek to ensure confidentiality (the protection of sensitive data and assets), integrity (the fidelity of data over its lifecycle), and availability (the accessibility and responsiveness of resources and infrastructure), they prioritize different pieces of this CIA triad. IT's highest priority is confidentiality. IT deals in data, and the stakeholders of IT concern themselves with protecting that data, from trade secrets to the personal information of users and customers. OT's highest priority is availability. OT processes operate heavy-duty equipment in the physical realm, and for them, availability means safety. Downtime, such as shutting off a blast furnace or industrial boiler tank, is simply untenable. For the sake of availability and responsiveness, most OT components weren't built to accommodate security implementations at all. ... Almost all IT-based tools require downtime for installation, updates, and patching. These activities are generally a non-starter for industrial environments, no matter how significant a vulnerability may be. Again, downtime for OT systems means putting safety at risk.


Oshkosh CIO Anu Khare on IT’s pursuit of value

VSP stands for value, strategic fit, and passionate sponsor. The framework ties to my fundamental philosophy of letting cost, value, and the customer decide what is and is not valuable for our customers. We didn’t start with VSP; it evolved as a guiding framework as we looked at our portfolio enablement process and asked ourselves, what’s the simplest way to approach project portfolio management? First, we decided to focus on the value. We started working with the business sponsors to articulate where and what impact the technology will have on the business. We then validate with finance, and if it has hard savings, it gets No. 1 priority in terms of investment. The relentless focus on value also leads to the second point, which is strategic fit. A project may be valuable, but in any organization, the list of things the organization could do is always bigger than what it can or should afford. This is a capital allocation discussion, so we focus on the strategic fit.


Cisco spotlights generative AI in security, collaboration

Security and IT administrators will be able to describe granular security policies and the assistant will evaluate how to best implement them across different aspects of their security infrastructure, Patel said. At the Live! event, Cisco demoed how a generative Cisco Policy Assistant can reason with the existing set of firewall policy rules to implement and simplify them within the Cisco Secure Firewall Management Center. Cisco says it is the first of many examples of how generative AI can reimagine policy management across the Cisco Security Cloud. ... In addition, he said the security assistant will let customers describe and contextualize events across email, the web, endpoints, and the network to tell security operation center (SOC) analysts exactly what happened, the impact, and the best next steps to take to remediate problems and set new policies. The SOC Assistant will provide a comprehensive situation analysis for analysts, correlating intel across the Cisco Security Cloud, relaying potential impacts, and providing recommended actions with the goal of reducing the time needed for SOC teams to respond to potential threats, he said.


How WASM (and Rust) Unlocks the Mysteries of Quantum Computing

Rather than picking from fixed specs, quantum programming can require you to define the setup of your quantum hardware, describing the quantum circuit that will be formed by the qubits and as well as the algorithm that will run on it — and error-correcting the qubits while the job is running — with a language like OpenQASM; that’s rather like controlling an FPGA with a hardware description language like Verilog. You can’t measure a qubit to check for errors directly while it’s working or you’d end the computation too soon, but you can measure an extra qubit and extrapolate the state of the working qubit from that. What you get is a pattern of measurements called a syndrome. In medicine, a syndrome is a pattern of symptoms used to diagnose a complicated medical condition like fibromyalgia. In quantum computing, you have to “diagnose” or decode qubit errors from the pattern of measurements, using an algorithm that can also decide what needs to be done to reverse the errors and stop the quantum information in the qubits from decohering before the quantum computer finishes running the program.
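The "diagnose from a syndrome" idea has a simple classical analogue in the three-qubit bit-flip repetition code. The sketch below is that classical analogue only: ordinary bits and parity checks stand in for the ancilla-qubit measurements, with none of the actual quantum mechanics.

```python
def syndrome(bits):
    # Two parity checks: s1 compares bits 0 and 1, s2 compares bits 1 and 2.
    # In the quantum version these values come from measuring extra (ancilla)
    # qubits, never the working data qubits themselves.
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Each syndrome pattern "diagnoses" at most one flipped bit.
DECODE = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(bits):
    # Decode the syndrome and reverse the error it points to.
    flipped = DECODE[syndrome(bits)]
    if flipped is None:
        return tuple(bits)
    repaired = list(bits)
    repaired[flipped] ^= 1
    return tuple(repaired)

print(correct((0, 1, 0)))  # middle bit flipped, restored to (0, 0, 0)
```

Real decoders must process streams of syndromes fast enough to keep up with the hardware while the job is running, which is where the performance of the implementation language starts to matter.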


Energy security needs a secure IoT

The IoT has a central role to play as governments and industries work to reduce dependence on fossil fuels, establish new forms of energy generation and implement sufficient means of storing, managing and distributing energy. ... IoT-connected devices and systems can contribute to carbon tracking and smart-meter energy monitoring; they can enable data exchange for microgrids and support mechanisms for selling energy directly back into the network. These solutions will transmit data so that energy companies can monitor devices and conditions, control devices in remote locations, track performance to predict maintenance cycles and act on alerts. They will be able to monitor energy consumption for smart metering through connected meters and sensors for load balancing on the grid. In this way, connectivity is part of the intelligent, efficient, renewable energy model; however, it must be cybersecure. As new and additional devices are deployed, they could present more pathways for potential cyberattacks. That is a significant risk, and safeguards are therefore needed to protect against unauthorised access to devices, networks, management platforms and cloud infrastructure.


How to Get Unstuck From Stress and Find Solutions Inside Yourself

The balance of sympathetic and parasympathetic states is critical both for our well-being and for the cultivation of presence. Neither state is superior to the other. They are opposite and equal in their importance. Both are needed to dynamically maintain the homeostasis of the body. (Remember, a state of polarity is the ability to go from one state to the other in alternation, as needed.) As with any ecosystem, complementary forces are necessary to preserve harmony. The trouble is that our regular thinking and doing in the world of business are sympathetically activating. It is not possible to use only the mind to become relaxed and restore balance to the nervous system. We need to counterbalance our SNS (sympathetic nervous system) activation through feeling and being. This is a whole new mode that many high-powered leaders are less familiar with and may not entirely trust. The good news, however, is that when we are in a relaxed, parasympathetic state, we can access the capabilities of our higher intelligence that we need for presence and collaboration, such as visualization and spontaneous generative creativity.


Daily Standups May Not Improve Your Team’s Agility

To make sure every team member gets the support they need, I highly recommend holding a longer team meeting at least once per week, something we call "team time". This meeting should run 30–45 minutes, which leaves enough time to really get to the bottom of a problem and find a solution. Every team member can propose a topic and the team discusses it together. If there are no challenges to discuss, this is also a great forum for other forms of knowledge sharing. When you sum up these costs, you end up in a similar or even higher range than daily standups, but these meetings are actually helpful since they allow the team to solve problems and share knowledge and, with that, replace other meetings and make work more efficient. The social aspect is something that is rarely stated as a need for daily standups. But, for me, this is a misconception. A healthy, social team will always be an efficient team. Developing a proper team atmosphere and spirit should be key and in the interest of everyone.


Everything Is Connected: Five IoT Trends Moving Forward

In what sounds like old news at this point, cybersecurity will continue to be at the forefront of business decision making. What is different this year is the rise of artificial intelligence (AI) and ML. AI and ML are making malicious actors more efficient and potentially more effective when carrying out attacks. Large language models such as ChatGPT have opened new directions of attack and lowered the overall threshold for creating effective malicious code. Additionally, the changing legislative landscape around privacy will spur companies to take a hard look at the way that they collect, use, and retain sensitive personal data. This may require a complete redesign of products, procedures, or in fact, entire business models. ... Finally, it is no secret that the tech labor market is in a state of upheaval. Many companies are reducing or restricting their workforces as they seek efficiency or profits. This exodus of talented tech professionals has created severe knowledge gaps that must be addressed.


API Management Is a Commodity: What’s Next?

As API management software unbundles the gateway and adapts to the multi-gateway world, new and emerging software vendors are looking to fill the resulting requirement gaps for API design and development, security, analytics, portals, and marketplaces. Alex Walling, field CTO for Rapid, sees that developers need a layer of abstraction on top of their existing API gateways, such as those from WSO2, Kong, and Apigee, so that they can find APIs easily and check whether someone has already developed an API for what they need. Moreover, Derric Gilling, CEO of Moesif, said he believes that API gateways will become just one of the specialized pieces of the API stack that developers and organizations will need to assemble to meet the growing adoption of APIs. He sees business models for APIs evolving beyond simply charging for API invocation counts, and a need for specialized analytics solutions to keep pace. Along with the continued explosion of interest in APIs, especially as organizations use more third-party APIs, the development and testing process becomes more complex and time-consuming.
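The shift beyond flat per-invocation billing that Gilling describes can be illustrated with a minimal sketch. Everything here is hypothetical — the free allowance, the per-call rate, and the API key names are invented for the example and do not reflect any particular vendor's pricing model.

```python
from collections import defaultdict

# Hypothetical usage-based pricing: each API key gets a free allowance,
# then pays a per-call rate. Even this simple tiering already requires
# per-key analytics rather than a single invocation counter.
FREE_CALLS = 1000
RATE_PER_CALL = 0.002  # dollars per billable call (assumed)

call_counts: dict[str, int] = defaultdict(int)

def record_call(api_key: str) -> None:
    """Count one API invocation against the given key."""
    call_counts[api_key] += 1

def monthly_charge(api_key: str) -> float:
    """Charge only for calls beyond the free allowance."""
    billable = max(0, call_counts[api_key] - FREE_CALLS)
    return billable * RATE_PER_CALL

for _ in range(1500):
    record_call("team-a")
print(monthly_charge("team-a"))  # 500 billable calls -> 1.0
```

Richer models — charging by data volume, by endpoint, or by business outcome — follow the same pattern but need correspondingly richer per-call metadata, which is where dedicated API analytics products position themselves.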


AI: Interpreting regulation and implementing good practice

Emerging standards, guidance and regulation for AI are being created worldwide, and it will be important to align these efforts and create a common understanding for producers and consumers. Organizations such as ETSI, ENISA, ISO and NIST are creating helpful cross-referenced frameworks for us to follow, and regional regulators, such as those in the EU, are considering how to penalize bad practices. In addition to being consistent, however, the principles of regulation should be flexible, both to cater for the speed of technological development and to enable businesses to apply appropriate requirements to their capabilities and risk profile. An experimental mindset, as demonstrated by the Singapore Land Transport Authority's testing of autonomous vehicles, can allow academia, industry and regulators to develop appropriate measures. These fields need to come together now to explore AI systems' safe use and development. Cooperation, rather than competition, will enable safer use of this technology more quickly.



Quote for the day:

"Men who are in earnest are not afraid of consequences." -- Marcus Garvey