
Daily Tech Digest - December 05, 2022

Is SASE right for your organization? 5 key questions to ask

Many analysts say that SASE is particularly beneficial for mid-market companies because it replaces multiple, and often on-premises, tools with a unified cloud service. Many large enterprises, on the other hand, will not only have legacy constraints to consider, but they may also prefer to take a layered security approach with best-of-breed security tools. Another factor to consider is that the SASE offering might be presented as a consolidated solution, but if you dig a little deeper it might actually be a collection of different tools from various partnering vendors, or features obtained through acquisition that have not been fully integrated. Depending on the service provider, SASE offers a unified suite of security services, including but not limited to encryption, multifactor authentication, threat protection, Data Leak Prevention (DLP), DNS, and traditional firewall services. ... With incumbents such as Cisco, VMware, and HPE all rolling out SASE services, enterprises with existing vendor relationships may be able to adopt SASE without needing to worry much about protecting previous investments.


How gamifying cyber training can improve your defences

Gamification is an attempt to enhance systems and activities by creating similar experiences to those in games, in order to motivate and engage users, while building their confidence. This is typically done through the application of game-design elements and game principles (dynamics and mechanics) in non-game contexts. Research into gamification has proved that it has positive effects. ... Gamification has been dismissed by some as a fad, but the application of elements found within game playing, such as competing or collaborating with others and scoring points, can effectively translate into staff training and improve engagement and interest. “The way that cyber security training sessions are happening is changing and it’s for the better,” says Helen McCullagh, a cyber risk specialist for an end-user organisation. “If you look at the engagement of sitting people down and them doing a one-hour course every year, then it is merely a box-ticking exercise. Organisations are trying to get 100% compliance, but what you have are people sitting there doing their shopping list.”


The 3 Phases Of The Metaverse

There are several misconceptions about the metaverse today. In simple terms, the metaverse is the convergence of physical and digital on a digital plane. In its ideal phase, you can access the metaverse from anywhere, just like the internet. Early metaverse apps were focused on creating games with tokenized incentives (play-to-earn) and hadn’t initially been thought of as contributing to the next phase of the internet. One of the most prominent examples is the online game Second Life, which is regarded as the earliest web2-based metaverse platform. Users have an identity projected through an avatar and participate in activities—very much a limited “second” life. ... Unlike the previous phase, Phase 2 is all about creating utilities. Brands, IP holders and companies investing in innovation have been collaborating with gaming metaverse dApps to understand consumer behaviors and economic dynamics. No-coding tools, as well as software development kits, in this phase, are empowering the end user to co-create alongside developers, designers, brands and retail investors. Still, interoperability—the import and export of digital assets—is only possible on a single chain, and the user experience is still seen as gaming in 2-D or 3-D environments.


Why the Agile approach might not be working for your projects

Although Scrum is a well-described methodology, when applied in practice it is often tailored to the specific circumstances of the organisation. These adaptations are often called ScrumBut (“we use Scrum, but …”). Some deviations from the fundamental principles of Scrum, however, may be problematic. These undesirable deviations are called anti-patterns — bad habits formed and influenced by the human factor. What exactly can we consider an anti-pattern? It can be a disagreement on whether or not the task is completed, a disruption caused by the customer, unclear items in the backlog, the indecisiveness of stakeholders (customers, management, etc.), and lack of authority or poor technical knowledge on the part of the Scrum master. We collected detailed information in three Scrum teams using a variety of data collection procedures over a sustained period of time — including observation, surveys, secondary data, and semi-structured interviews – to get a detailed understanding of anti-patterns, and their causes and consequences.


Rise of Data and Asynchronization Hyped Up at AWS re:Invent

Because it was believed that asynchronous programming was difficult, he said, operating systems tended to have restrained interfaces. “If you wanted to write to the disk, you got blocked until the block was written,” Vogels said. Change began to emerge in the 1990s, he said, with operating systems designed from the ground up to expose asynchrony to the world. “Windows NT was probably the first one to have asynchronous communication or interaction with devices as a first principle in the kernel.” Linux, Vogels said, did not pick up asynchrony until the early 2000s. The benefit of asynchrony, he said, is that it is natural compared with the illusion of synchrony. When compute systems are tightly coupled together, it could lead to widespread failure if something goes wrong, Vogels said. With asynchronous systems, everything is decoupled. “The most important thing is that this is an architecture that can evolve very easily without having to change any of the other components,” he said. “It is a natural way of isolating failures. If any of the components fails, the whole system continues to work.”


Entity Framework Fundamentals

EF has two ways of managing your database. In this tutorial, I will explain only one of them: code first. The other is database first. There is a big difference between them, but code first is the most used. But before we dive in, I want to explain both approaches. Database first is used when there is already a database present and the database will not be managed by code. Code first is used when there is no current database, but you want to create one. I like code first much more because I can write entities (these are basically classes with properties) and let EF update the database accordingly. It's just C# and I don't have to worry about the database much. I can create a class, tell EF it's an entity, update the database, and all is done! Database first is the other way around. You let the database 'decide' what kind of entities you get. You create the database first and write your code accordingly. ... With Entity Framework, it all starts with a context. It associates entities and relationships with an actual database. Entity Framework comes with DbContext, which is the context that we will be using in our code.
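The same code-first idea, declaring entities as plain classes and letting the tooling create and evolve the schema, exists outside .NET as well. Since this digest's examples use Python, here is a minimal sketch of the pattern with SQLAlchemy standing in for Entity Framework's DbContext; the class, table, and file names are illustrative, not from the article.

```python
# Minimal "code first" sketch in Python with SQLAlchemy (not Entity Framework);
# entity, table and file names are hypothetical.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Customer(Base):               # an "entity": a class with properties
    __tablename__ = "customers"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)

engine = create_engine("sqlite:///app.db")
Base.metadata.create_all(engine)    # the schema is generated from the classes

with Session(engine) as session:    # roughly the role DbContext plays in EF
    session.add(Customer(name="Ada"))
    session.commit()
```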


How Executive Coaching Can Help You Level Up Your Organization

As we all know, the desire for personal growth is extremely valuable. However, as employee demands from the workplace have shifted, leadership skills have not. As employees climb the ranks, they find their way into leadership without necessarily learning the skills and techniques required to lead. Many new leaders turn to a trusted mentor who can only provide information based on lived experience. On the other hand, executive coaches are tasked with improving performance and capabilities as their day job. But there is a misconception that executive coaches are for leaders who have done something wrong. While it's true that an executive coach could help a difficult employee become a better teammate, they can also be guides for leaders to pursue their desired career paths. Leadership coaching holds that the main drivers of innovation in an organization are the people and the corporate culture, and it can provide leaders with the tools to master these levers. An executive coaching professional can guide leaders through the steps that allow them to set the foundations of an innovative and competitive company.


Ransomware: Is there hope beyond the overhyped?

The old way of thinking about cyber security was imagining it like a castle. You’ve got the vast perimeter – the castle walls – and inside was the keep, where employees and data would live. But now organisations are operating in various locations. They’ve got their cloud estate in one or more providers, source code residing in another location, and vast amounts of work devices that are now no longer behind the castle walls, but at employees’ homes – the list could go on for ever. These are all areas that could potentially be breached and used to gain intelligence on the business. The attack surface is growing, and the castle wall can no longer circle around all these places to protect them. Attack surface management will play a big part in tackling this issue. It allows security and IT teams to almost visualise the external parts of the business, identify targets and assess risks based on the opportunities they present to a malicious attacker. In the face of a constantly growing attack surface, this can enable businesses to establish a proactive security approach and adopt principles such as assume breach and cyber resilience.


How data analysts can help CIOs bridge the tech talent shortfall

Business analytics are only as good as the data they’re using. Given the wealth and complexity of data, it’s easy to understand why leaders are often overwhelmed in their attempts to access better analytics and insights. This is where data professionals can help. Data scientists and analysts are statistics, math, databases, and systems experts. They are especially adept at looking at historical metrics, recognizing patterns, pulling in market insights, and identifying outlier data to ensure the best points are utilized. They’re also able to organize vast amounts of unstructured data, which is often very valuable but difficult to analyze, by leveraging conventional databases and other tools to make the data more actionable. ... It’s also important to look at the attributes of the data scientists and analysts themselves. In addition to having technical skills, data professionals with a background in programming, data visualization, and machine learning are also highly valuable. On the non-technical side, they should have strong interpersonal and communication skills to relay their findings to the tech team and those without a tech or math background.


What Does Technical Debt Tell You?

Making most architectural decisions at the beginning of a project, often before the QARs are precisely defined, results in an upfront architecture that may not be easy to evolve and will probably need to be significantly refactored when the QARs are better defined. Contrastingly, having a continuous flow of architectural decisions as part of each Sprint results in an agile architecture that can better respond to QAR changes. Almost every architectural decision is a trade-off between at least two QARs. For example, consider security vs. usability. Regardless of the decision being made, it is likely to increase technical debt, either by making the system more vulnerable by giving priority to usability or making it less usable by giving priority to security. Either way, this will need to be addressed at some point in the future, as the user population increases, and the initial decision to prioritize one QAR over the other may need to be reversed to keep the technical debt manageable. Other examples include scalability vs. modifiability, and scalability vs. time to market. These decisions are often characterized as "satisficing", i.e., "good enough". 



Quote for the day:

"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr

Daily Tech Digest - October 25, 2022

Digital-first businesses more willing to accept some fraud

“For new companies, it’s about growth – fraud in that regard can be something like you have a promotion on and people are signing up for multiple accounts. “Digital transformers are also trying to compete, so accessibility, speed, low friction and completion rates rank above compliance. Ten years ago, compliance was higher, but for digital-first companies, user experience concerns are at the top of the agenda.” However, Li stressed that this was not to say that businesses are neglecting their legal compliance obligations, but more that they simply would not survive if they provided an archaic experience that caused friction for the potential customer, such as demanding they send notarised documents through the mail. Indeed, nearly half – 46% – of respondents did feel that their customer onboarding process was still too complex, rising to 55% in the UK. Frequent complaints were that it took too long to review and verify customers when onboarding them, leading to user drop-off, increased costs, and lost revenues.


Enhance Data Analytics with oneDAL

Intel® oneAPI Data Analytics Library (oneDAL) is a library with all the building blocks required to create distributed-data pipelines to transform, process, and model data, complete with all the architectural flexibility of oneAPI. This can be achieved using Intel® Distribution for Python*, C++, or Java APIs that can connect to familiar data sources such as Spark* and Hadoop*. ... oneDAL has tools for transferring out-of-memory data sources, such as databases and text files, into memory for use in analysis, training, or prediction stages. And if the data source cannot fit into memory, the algorithms in oneDAL also support streaming data into memory. Data scientists often spend large amounts of time preparing the data for analysis or machine learning (ML). This includes converting data to numeric representation, adding or removing data, normalizing it, or computing statistics. oneDAL offers algorithms that accelerate these preparation tasks, speeding the turnaround of steps that are often performed interactively.
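As a rough illustration of how this looks from Python, the sketch below assumes the Intel Extension for Scikit-learn (package scikit-learn-intelex), which routes supported scikit-learn algorithms through oneDAL; the dataset and parameters are arbitrary.

```python
# Hedged sketch: accelerate a plain scikit-learn workflow via oneDAL,
# assuming scikit-learn-intelex is installed.
from sklearnex import patch_sklearn
patch_sklearn()                      # patch before importing sklearn estimators

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100_000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=200).fit(X, y)   # runs on the oneDAL backend
print(model.score(X, y))
```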


Google Unveils Its Latest Voice Innovations

Since filing its first speech patent in 2001, Google has led the way in voice innovation. From interacting with Google Assistant to live captioning in Google Meet, it now boasts an extensive suite of voice tools. Within this are two core innovations: its Speech-to-Text and Text-to-Speech APIs. The Speech-to-Text API supports short and long form speech in over 75 languages and 120+ locales – out-of-the-box – without the need for training and customization. Of course, for some use cases, businesses may demand customization. As such, the API is flexible, allowing users to harness it across various audio channels. It also detects multiple speakers in the same channel, with the solution recognizing their unique voices. ... Moreover, companies can create captions and subtitles for media content or build a virtual agent. Yet, it is also possible to use the technology for speech analysis, summarization, and extraction – each of which has significant potential for contact centers. In tandem, many businesses harness Google’s Text-to-Speech API to communicate with their users. It allows them to take text and synthesize it into audio in a single step.
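For a sense of what "text to audio in a single step" looks like in practice, here is a hedged Python sketch using the google-cloud-texttospeech client library; it assumes credentials are already configured, and the text, voice, and output file are placeholders.

```python
# Hedged sketch of Google's Text-to-Speech API from Python
# (package: google-cloud-texttospeech); inputs are placeholders.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()
response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Your order has shipped."),
    voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)

with open("reply.mp3", "wb") as out:
    out.write(response.audio_content)   # synthesized speech, in one call
```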


Why Sensors Are Key to IoT Cybersecurity

Sensors enabled by the Internet of Things are network-connected smart devices that collect and transmit real-time data about their environment. The data they provide lets people make better-informed decisions. The use of IoT sensors has grown explosively in recent years because their increasing functionality, small size, and low power consumption allow designers to deploy them in new applications to increase productivity and lower costs. The sensors are being used in new ways to maximize the capabilities of network-connected systems and infrastructure. The sensors are poised for mass-scale adoption in fields including automotive, health care, industrial automation, energy, and smart cities. But the lack of standardization in IoT sensors, coupled with interoperability challenges, has made them vulnerable to cyberattacks—which creates barriers for their ubiquitous use. Hackers are targeting IoT sensors in greater numbers, in more industries, and with increased sophistication.


Transforming Observability

Digital transformation, product, and technology leaders see value in observability because of its potential to measure digital experiences and the performance of business and digital services. Doing this requires observability to meet three significant challenges. First, observability must effectively cross the complex boundaries of microservices, containers, cloud and traditional applications, multiple cloud providers, database sources, SaaS services, infrastructure and internal and external APIs. Today’s challenge is far beyond the central aggregation of large volumes of log data and suppressing non-essential alerts. Most enterprise architectures look eerily similar to a breadboard wiring project, with applications, systems and data sources crisscrossing each other, representing the various pathways and interfaces across systems. Virtually any of these elements could contribute to the degradation of a digital experience, and observability must operate across these elements whether they live in our tightly controlled data centers or are distributed in microservices, cloud services or third-party interfaces.


Web 3.0 and the Crowdpoint Constellation

Web 3.0 is about the individual. The underlying technologies that will enable it are personal identification technologies (biometrics), the blockchain and distributed data technology. Let’s not worry about how, right now, let’s just paint the picture. Web 2.0 was all about exploiting data — a great deal of which was your data. The big web businesses mined it to their great enrichment, with the best AI tools known to man. However, it is equally possible for people to band together and mine their collective personal data to their own benefit. This has not yet happened, but the technologies mentioned above make it possible. Now if it were up to the individual to do this on their own initiative, of course, probably nothing would happen. ... If you’ve been tracking the evolution of the blockchain world you will realize that it has evolved a long way beyond the creation and marketing of cryptocurrencies. It is no longer all about speculation. It has stepped boldly into the financial sector, with the creation of services that are commonly described as Open Fi (Open Finance) or De Fi (Decentralized Finance).


Improving finance and accounting software with AI

Starting with audit analytics, auditors tend to spend too much time buried in compliance checklists and creating reports that few people read, with little time to seek anomalies in every transaction. Rather than manually sampling data points, Forrester says machine learning is being used for risk assessment of transactions. The member-based industry association American Institute of Certified Public Accountants (AICPA) is developing guidance for ML in the audit function. Mature audit support providers such as Thomson Reuters and Wolters Kluwer, as well as emerging companies like Caseworks Cloud and MindBridge, are embedding AI into their audit platforms. ...


Atlassian Vulnerabilities Highlight Criticality of Cloud Services

The combination of the two flaws could allow a significant attack, says Jake Shafer, a security consultant with Bishop Fox, who found the flaws. "Using the authorization finding would allow a low-privileged user to elevate their role to super admin which, in terms of information disclosure, would allow the attacker to gain access to everything the client of the SaaS had in their Jira deployment," he says. "From there, the attacker could then leverage the SSRF finding to go after the infrastructure of Atlassian themselves." Both vulnerabilities have been patched — the first within a week and the second within a month, according to the disclosure timeline published by Bishop Fox. However, companies should note that the increasing reliance on cloud applications has made attacks on cloud services and workloads much more common, so much so that the top class of vulnerability, according to the Open Web Application Security Project (OWASP), is broken authentication and access-control issues.


When CISOs are doomed to fail, and how to improve your chances of success

Sometimes, CISO candidates can spot a bad employer during the interview process. "You are not only trying to convince them that you are the person they should hire, but you are interviewing them," Callas says. The recruiting process is just like a zero-knowledge proof, because neither side wants to be upfront about what is going on. One of Callas's priorities is to learn how much the company cares about security, and he does that by asking direct questions. One time, an executive he talked to admitted that management did not want better protection. A typical question potential CISOs are asked is what they might do in a difficult situation such as a breach. When Callas hears this, he smiles and says: “Has this actually happened?” "Sometimes they'll say, 'Oh, no, no, no,' in a way that you know means yes," he adds, "and every so often, you get the person who looks around and says: 'Let me tell you what's really going on.'" Another priority should be understanding to whom the CISO reports: the CEO, the CFO, the CTO, or even the legal department. “[This] tells you a little bit about what they expect you to do," says Chip Gibbons, CISO at Thrive.


Why Functional Programming Should Be The Future Of Software Development

Pure functional programming solves many of our industry’s biggest problems by removing dangerous features from the language, making it harder for developers to shoot themselves in the foot. At first, these limitations may seem drastic, as I’m sure the 1960s developers felt regarding the removal of GOTO. But the fact of the matter is that it’s both liberating and empowering to work in these languages—so much so that nearly all of today’s most popular languages have incorporated functional features, although they remain fundamentally imperative languages. The biggest problem with this hybrid approach is that it still allows developers to ignore the functional aspects of the language. Had we left GOTO as an option 50 years ago, we might still be struggling with spaghetti code today. To reap the full benefits of pure functional programming languages, you can’t compromise. You need to use languages that were designed with these principles from the start. Only by adopting them will you get the many benefits that I’ve outlined here. But functional programming isn’t a bed of roses. It comes at a cost.



Quote for the day:

"Make heroes out of the employees who personify what you want to see in the organization." -- Anita Roddick

Daily Tech Digest - September 28, 2022

How to Become an IT Thought Leader

Being overly tech-centric is a common mistake aspiring thought leaders make. Such individuals start with a technology, then look for problems to solve. “Instead, it's important to remember that an IT thought leader drives digital change,” Zhao says. “Understanding the technology is only one aspect of IT thought leadership.” Ross concurs. “I’ve seen several troubling examples of large technology purchases occurring before key business requirements were fully understood,” he says. “Seek first to understand the desired business outcomes and remember that technology is a potential enabler of those outcomes, but never a cure-all.” A strong business case is essential for any proposed new technology, Bethavandu says. “If your company is not ready for, say, DevOps or containerization, be self-aware and don’t push for those projects until your organization is ready,” he states. On the other hand, excessive caution can also be dangerous. “If you want to be a thought leader, you have to be bold and you cannot be afraid of failing,” Bethavandu says.


Most Attackers Need Less Than 10 Hours to Find Weaknesses

Overall, nearly three-quarters of ethical hackers think most organizations lack the necessary detection and response capabilities to stop attacks, according to the Bishop Fox-SANS survey. The data should convince organizations to not just focus on preventing attacks, but aim to quickly detect and respond to attacks as a way to limit damage, Bishop Fox's Eston says. "Everyone eventually is going to be hacked, so it comes down to incident response and how you respond to an attack, as opposed to protecting against every attack vector," he says. "It is almost impossible to stop one person from clicking on a link." In addition, companies are struggling to secure many parts of their attack surface, the report stated. Third parties, remote work, the adoption of cloud infrastructure, and the increased pace of application development all contributed significantly to expanding organizations' attack surfaces, penetration testers said. Yet the human element continues to be the most critical vulnerability, by far. 


Discover how technology helps manage the growth in digital evidence

With limited resources, even the most skilled law-enforcement personnel are hard-pressed to comb through terabytes of data that may include hours of videos, tens of thousands of images, and hundreds of thousands of words in the form of text, email, and other sources. One possible solution is to augment skilled investigators and forensic examiners with technology. Some of the key technological capabilities that can be applied to this problem are AI and machine learning. AI and machine learning models and applications create processes that read, watch, extract, index, sort, filter, translate, and transcribe information from text, images, and video. By utilizing technology to carve through and analyze data, it’s possible to reduce the data mountain to a series of small hills of related content and add tags that make it searchable. That allows people to spend their time and energy on work that is most valuable in the investigation. The good news is that help is available. Microsoft has multiple AI and machine learning processes within our Microsoft Azure Cognitive Services. 


The modern enterprise imaging and data value chain

The costs and consequences of the current fragmented state of health care data are far-reaching: operational inefficiencies and unnecessary duplication, treatment errors, and missed opportunities for basic research. Recent medical literature is filled with examples of missed opportunities—and patients put at risk because of a lack of data sharing. More than four million Medicare patients are discharged to skilled nursing facilities (SNFs) every year. Most of them are elderly patients with complex conditions, and the transition can be hazardous. ... “Weak transitional care practices between hospitals and SNFs compromise quality and safety outcomes for this population,” researchers noted. Even within hospitals, sharing data remains a major problem. ... Data silos and incompatible data sets remain another roadblock. In a 2019 article in the journal JCO Clinical Cancer Informatics, researchers analyzed data from the Cancer Imaging Archive (TCIA), looking specifically at nine lung and brain research data sets containing 659 data fields in order to understand what would be required to harmonize data for cross-study access.


Cloud’s key role in the emerging hybrid workforce

One key to the mistakes may be the overuse of cloud computing. Public clouds provide more scalable and accessible systems on demand, but they are not always cost-effective. I fear that much like when any technology becomes what the cool kids are using, cloud is being picked for emotional reasons and not business reasons. On-premises hardware costs have fallen a great deal during the past 10 years. Using these more traditional methods of storage and compute may be way more cost-effective than the cloud in some instances and may be just as accessible, depending on the location of the workforce. My hope is that moving to the cloud, which was accelerated by the pandemic, does not make us lose sight of making business cases for the use of any technology. Another core mistake that may bring down companies is not having security plans and technology to support the new hybrid workforce. Although few numbers have emerged, I suspect that this is going to be an issue for about 50% of companies supporting a remote workforce.


Why zero trust should be the foundation of your cybersecurity ecosystem

Recently, zero trust has developed a large following due to a surge in insider attacks and an increase in remote work – both of which challenge the effectiveness of traditional perimeter-based security approaches. A 2021 global enterprise survey found that 72% of respondents had adopted zero trust or planned to in the near future. Gartner predicts that spending on zero trust solutions will more than double to $1.674 billion between now and 2025. Governments are also mandating zero trust architectures for federal organizations. These endorsements from the largest organizations have accelerated zero trust adoption across every sector. Moreover, these developments suggest that zero trust will soon become the default security approach for every organization. Zero trust enables organizations to protect their assets by reducing the chance and impact of a breach. It also reduces the average breach cost by at least $1.76 million, can prevent five cyber disasters per year, and save an average of $20.1 million in application downtime costs.


Walls between technology pros and customers are coming down at mainstream companies

Tools assisting with this engagement include "prediction, automation, smart job sites and digital twins," he says. "We have resources in each of our geographic regions where we scale new technology from project to project to ensure the 'why' is understood, provide necessary training and support, and educate teams on how that technology solution makes sense in current processes and day-to-day operations." At the same time, getting technology professionals up to speed with crucial pieces of this customer collaboration -- user experience (UX) and design thinking -- is a challenge, McFarland adds. "There is a widely recognized expectation to create seamless and positive customer experiences. That said, specific training and technological capabilities are a headwind that professionals are experiencing. While legacy employees may be fully immersed and knowledgeable about a certain program and its technical capabilities, it is more unusual to have both the technical and UX design expertise. The construction industry is working to find the right balance of technology expertise and awareness with UX and design proficiencies."


Why Is the Future of Cloud Computing Distributed Cloud?

Distributed cloud redefines cloud computing: a distributed cloud is a public cloud architecture that handles data processing and storage in a distributed manner. Simply put, a business using distributed cloud computing can store and process its data in various data centers, some of which may be physically situated in other regions. A content delivery network (CDN), a network architecture that is geographically dispersed, is an example of a distributed cloud. It is designed to deliver content (most frequently video or music) quickly and efficiently to viewers in various places, significantly reducing download times. Distributed clouds, however, offer advantages to more than just content producers and artists. They can be utilized in multiple business contexts, including transportation and sales, and even within particular geographical regions. For instance, a supplier of file transfer services can format video and store content on CDNs spread out globally using centralized cloud resources.


How to Become a Data Analyst – Complete Roadmap

First, understand this: the field of data analysis is not about computer science but about applying computational analysis and statistics. This field focuses on working with large datasets and producing useful insights that help solve real-life problems. The whole process starts with a hypothesis that needs to be answered, followed by gathering new data to test that hypothesis. There are two major categories of data analyst: tech and non-tech. Both work with different tools, and tech-domain professionals are also required to know the relevant programming languages (such as R or Python). The working professional should be fluent in statistics so that they can present any given amount of raw data in a well-aligned structure. ... Today, companies everywhere are generating data on a daily basis and using it to make crucial business decisions. It helps in deciding their future goals and setting new milestones. We’re living in a world where data is the new fuel, and to make it useful, data analysts are required in every sector.
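As a flavour of the day-to-day work described here, a minimal Python sketch with pandas: load raw records, compute summary statistics, and surface a multi-year trend. The file and column names are hypothetical.

```python
# Toy data-analyst workflow in pandas; file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("sales.csv", parse_dates=["order_date"])
df["revenue"] = df["units"] * df["unit_price"]

print(df["revenue"].describe())                           # basic statistics
yearly = df.set_index("order_date")["revenue"].resample("Y").sum()
print(yearly.tail(3))                                     # trend over the past 3 years
```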


Software developer: A day in the life

An analytics role will require you to learn new skills continuously, look at things in new ways, and embrace new perspectives. In technology and business, things happen quickly. It is important to always keep up with what is happening in the industries in which you are involved. Never forget that at its core, technology is about problem-solving. Don’t get too attached to any coding language; just be aware that you probably won’t be able to use the language you like, do the refactor you want, or perform the update you expect all the time. The end focus is always on the client, and their needs take priority over developer preferences. Be prepared to use English every day. To keep your skills sharp, read documentation, talk to others often, and watch videos. ... Any analytics professional who is interested in elevating their career should always be attentive to new technologies and updates, become an expert in some specific language/technology, and understand the low level of programming in a variety of languages. Finally, if you enjoy logic, math, and problem-solving, consider a career in software development. The world needs your skills to solve big challenges.



Quote for the day:

"Leadership Principle: As hunger increases, excuses decrease." -- Orrin Woodward

Daily Tech Digest - February 02, 2022

These hackers are hitting victims with ransomware in an attempt to cover their tracks

Once installed on a compromised machine, PowerLess allows attackers to download additional payloads, and steal information, while a keylogging tool sends all the keystrokes entered by the user direct to the attacker. Analysis of PowerLess backdoor campaigns appear to link attacks to tools, techniques and motivations associated with Phosphorus campaigns. In addition to this, analysis of the activity seems to link the Phosphorus threat group to ransomware attacks. One of the IP addresses being used in the campaigns also serves as a command and control server for the recently discovered Memento ransomware, leading researchers to suggest there could be a link between the ransomware attacks and state-backed activity. "A connection between Phosphorus and the Memento ransomware was also found through mutual TTP patterns and attack infrastructure, strengthening the connection between this previously unattributed ransomware and the Phosphorus group," said the report. Cybereason also found a link between a second Iranian hacking operation, named Moses Staff, and additional ransomware attacks, which are deployed with the aid of another newly identified trojan backdoor, dubbed StrifeWater.


Managing Technical Debt in a Microservice Architecture

Paying down technical debt while maintaining a competitive velocity delivering features can be difficult, and it only gets worse as system architectures get larger. Managing technical debt for dozens or hundreds of microservices is much more complicated than for a single service, and the risks associated with not paying it down grow faster. Every software company gets to a point where dealing with technical debt becomes inevitable. At Optum Digital, a portfolio – also known as a software product line – is a collection of products that, in combination, serve a specific need. Multiple teams get assigned to each product, typically aligned with a software client or backend service. There are also teams for more platform-oriented services that function across several portfolios. Each team most likely is responsible for various software repositories. There are more than 700 engineers developing hundreds of microservices. They take technical debt very seriously because the risks of it getting out of control are very real.


How to approach modern data management and protection

European tech offers a serious alternative to US and Chinese models when it comes to data. It’s also a necessary alternative, and there must be an evolution towards European technological autonomy, according to D’urso. “The loss of economic autonomy will impact political power. In other words, data and economic frailty will only further weaken Europe’s role at the global power table and open the door to a variety of potential flash points (military, cyber, industrial, social and so on). “Europe should be proud of its model, which re-injects tax revenues into a fair and respectful social and cultural framework. The GDPR policy is clearly at the heart of a European digital mindset.” Luc went further and suggested that data regulation, including management, protection and storage, is central to the upcoming French presidential election and the current French Presidency of the Council of the European Union. “The French Presidency of the Council of the EU will clearly place data protection into the spotlight of political debates. It is not about protectionism, but Europe must safeguard its data against foreign competition to enhance its autonomy and build a prosperous future.”


Edge computing strategy: 5 potential gaps to watch for

Edge strategies that depend on one-off “snowflake” patterns for their success will cause long-term headaches. This is another area where experience with hybrid cloud architecture will likely benefit edge thinking: If you already understand the importance of automation and repeatability to, say, running hundreds of containers in production, then you’ll see a similar value in terms of edge computing. “Follow a standardized architecture and avoid fragmentation – the nightmare of managing hundreds of different types of systems,” advises Shahed Mazumder, global director, telecom solutions at Aerospike. “Consistency and predictability will be key in edge deployments, just like they are key in cloud-based deployments.” Indeed, this is an area where the cloud-edge relationship deepens. Some of the same approaches that make hybrid cloud both beneficial and practical will carry forward to the edge, for example. In general, if you’ve already been solving some of the complexity involved in hybrid cloud or multi-cloud environments, then you’re on the right path.


Top Scam-Fighting Tactics for Financial Services Firms

At its core, a scam is a situation in which the customer has been duped into initiating a fraudulent transaction that they believe to be authentic. Applying traditional controls for verifying or authenticating the activity may therefore fail. But the underlying ability to detect the anomaly remains critical. "Instead of validating the transaction or the individual, we are going to have to place more importance on helping the customer understand that what they believe to be legitimate is actually a lie," Mitchell of Omega FinCrime says. He says fraud operations teams will need to become more customer-centric, education-focused and careful in their interactions. Mitigating a scam apocalypse will require mobilization across the market, which includes financial institutions, solution providers, payment networks, regulators, telecom carriers, social media companies and law enforcement agencies. In the short term, investment priorities must expand beyond identity controls to include orchestration controls and decision support systems that allow financial institutions to see the interaction more holistically, Fooshee says.


Better Integration Testing With Spring Cloud Contract

Imagine a simple microservice with a producer and a consumer. When writing tests in the consumer project, you have to write mocks or stubs that model the behavior of the producer project. Conversely, when you write tests in the producer project, you have to write mocks or stubs that model the behavior of the consumer project. As such, multiple sets of related, redundant code have to be maintained in parallel in disconnected projects. ... “Mock” is often used online in a generic way, meaning any fake object used for testing, and this can get confusing when differentiating “mocks” from “stubs”. Specifically, however, a “mock” is an object that tests for behavior by registering expected method calls before a test run. In contrast, a “stub” is a testable version of the object with callable methods that return pre-set values. Thus, a mock checks to see if the object being tested makes an expected sequence of calls to the object being mocked, and throws an error if the behavior deviates (that is, makes any unexpected calls). A stub does not do any testing itself, per se, but instead will return canned responses to pre-determined methods to allow tests to run.
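The mock-versus-stub distinction is language-agnostic, so, keeping to Python for this digest's examples rather than the article's Java/Spring context, here is a small illustrative sketch with unittest.mock; the function and service names are made up.

```python
# Illustrative mock vs. stub in Python's unittest.mock; names are hypothetical.
from unittest.mock import Mock

def checkout(producer, item):
    return producer.get_price(item) * 2

# Stub: returns a canned value so the test can run; it verifies nothing itself.
producer_stub = Mock()
producer_stub.get_price.return_value = 42
assert checkout(producer_stub, "book") == 84

# Mock: the test verifies that the expected call was made (behavior verification).
producer_mock = Mock()
checkout(producer_mock, "book")
producer_mock.get_price.assert_called_once_with("book")
```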


Now for the hard part: Deploying AI at scale

Fortunately, AI tools and platforms have evolved to the point in which more governable, assembly-line approaches to development are possible, most of which are being harnessed under the still-evolving MLOps model. MLOps is already helping to cut the development cycle for AI projects from months, and sometimes years, down to as little as two weeks. Using standardized components and other reusable assets, organizations are able to create consistently reliable products with all the embedded security and governance policies needed to scale them up quickly and easily. Full scalability will not happen overnight, of course. Accenture’s Michael Lyman, North American lead for strategy and consulting, says there are three phases of AI implementation. ... To accelerate this process, he recommends a series of steps, such as starting out with the best use cases and then drafting a playbook to help guide managers through the training and development process. From there, you’ll need to hone your institutional skills around key functions like data and security analysis, process automation and the like.


How to measure security efforts and have your ideas approved

In the world of cybersecurity, the most frequently asked question focuses on “who” is behind a particular attack or intrusion – and may also delve into the “why”. We want to know who the threat actor or threat agent is, whether it is a nation state, organized crime, an insider, or some organization to which we can ascribe blame for what occurred and for the damage inflicted. Those less familiar with cyberattacks may often ask, “Why did they hack me?” As someone who has been responsible for managing information risk and security in the enterprise for 20-plus years, I can assure you that I have no real influence over threat actors and threat agents – the “threat” part of the above equation. These questions are rarely helpful, providing only psychological comfort, like a blanket for an anxious child, and quite often distract us from asking the one question that can really make a difference: “HOW did this happen?” But even those who have asked HOW often answer with simple vulnerabilities: we had an unpatched system, we lacked MFA, or the user clicked on a link.

Data analysts are one of the data consumers. A data analyst answers questions about the present, such as: what is going on now? What are the causes? Can you show me XYZ? What should we do to avoid/achieve ABC? What is the trend in the past 3 years? Is our product doing well? ... Data scientists are another data consumer. Instead of answering questions about the present, they try to find patterns in the data and answer questions about the future, i.e., prediction. This technique has actually existed for a long time. You must have heard of it; it’s called statistics. Machine learning and deep learning are the two most popular ways to utilise the power of computers to find patterns in data. ... How do data analysts and scientists get the data? How does the data come from user behaviour to the database? How do we make sure the data is accountable? The answer is data engineers. Data consumers cannot perform their work without data engineers setting up the whole structure. They build data pipelines to ingest data from users’ devices to the cloud and then into the database.


The Value of Machine Unlearning for Businesses, Fairness, and Freedom

Another way machine unlearning could deliver value for both individuals and organizations is the removal of biased data points that are identified after model training. Despite laws that prohibit the use of sensitive data in decision-making algorithms, there is a multitude of ways bias can find its way in through the back door, leading to unfair outcomes for minority groups and individuals. There are also similar risks in other industries, such as healthcare. When a decision can mean the difference between life-changing and, in some cases, life-saving outcomes, algorithmic fairness becomes a social responsibility and often algorithms may be unfair due to the data they are being trained on. For this reason, financial inclusion is an area that is rightly a key focus for financial institutions, and not just for the sake of social responsibility. Challengers and fintechs continue to innovate solutions that are making financial services more accessible. From a model monitoring perspective, machine unlearning could also safeguard against model degradation.



Quote for the day:

"Good leaders make people feel that they're at the very heart of things, not at the periphery." -- Warren G. Bennis

Daily Tech Digest - October 15, 2021

You’ve migrated to the cloud, now what?

When thinking about cost governance, for example, in an on-premises infrastructure world, costs increase in increments when we purchase equipment, sign a vendor contract, or hire staff. These items are relatively easy to control because they require management approval and are usually subject to rigid oversight. In the cloud, however, an enterprise might have 500 virtual machines one minute and 5,000 a few minutes later when autoscaling functions engage to meet demand. Similar differences abound in security management and workload reliability. Technology leaders with legacy thinking are faced with harsh trade-offs between control and the benefits of cloud. These benefits can include agility, scalability, lower cost, and innovation, and they require heavy reliance on automation rather than manual legacy processes. This means that the skillsets of an existing team may not be the same skillsets needed in the new cloud order. When writing a few lines of code supplants plugging in drives and running cable, team members often feel threatened. This can mean that success requires not only a different way of thinking but also a different style of leadership.


A new edge in global stability: What does space security entail for states?

Observers recently recentred the debate on a particular aspect of space security, namely anti-satellite (ASAT) technologies. The destruction of assets placed in outer space is high on the list of issues they identify as most pressing and requiring immediate action. As a result, some researchers and experts rolled out propositions to advance a transparent and cooperative approach, promoting the cessation of destructive operations in both outer space and launched from the ground. One approach was the development of ASAT Test Guidelines, first initiated in 2013 by a Group of Governmental Experts on Outer Space Transparency and Confidence-Building Measures. Another is through general calls to ban anti-satellite tests, to not only build a more comprehensive arms control regime for outer space and prevent the production of debris, but also reduce threats to space security and regulate destabilising force. Many space community members threw their support behind a letter urging the United Nations (UN) General Assembly to take up for consideration a kinetic anti-satellite (ASAT) Test Ban Treaty for maintaining safe access to Earth orbit and decreasing concerns about collisions and the proliferation of space debris.


From data to knowledge and AI via graphs: Technology to support a knowledge-based economy

Leveraging connections in data is a prominent way of getting value out of data. Graph is the best way of leveraging connections, and graph databases excel at this. Graph databases make expressing and querying connections easy and powerful. This is why graph databases are a good match in use cases that require leveraging connections in data: Anti-fraud, Recommendations, Customer 360 or Master Data Management. From operational applications to analytics, and from data integration to machine learning, graph gives you an edge. There is a difference between graph analytics and graph databases. Graph analytics can be performed on any back end, as they only require reading graph-shaped data. Graph databases are databases with the ability to fully support both read and write, utilizing a graph data model, API and query language. Graph databases have been around for a long time, but the attention they have been getting since 2017 is off the charts. AWS and Microsoft moving into the domain, with Neptune and Cosmos DB respectively, has exposed graph databases to a wider audience.
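To make "leveraging connections" concrete, here is a toy Python sketch using networkx as a stand-in for a graph database: accounts linked through a shared device fall into one connected component, a common anti-fraud signal. A real graph database would express this in its query language instead; all node names are invented.

```python
# Toy example of leveraging connections (anti-fraud); networkx stands in for a graph DB.
import networkx as nx

G = nx.Graph()
G.add_edge("account:alice", "device:123")
G.add_edge("account:bob", "device:123")     # bob shares alice's device
G.add_edge("account:carol", "device:999")

ring = nx.node_connected_component(G, "account:alice")
print(ring)   # alice, bob and the shared device land in the same component
```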


Observability Is the New Kubernetes

So where will observability head in the next two to five years? Fong-Jones said the next step is to support developers in adding instrumentation to code, expressing a need to strike a balance between easy, out-of-the-box defaults and per-use-case annotations and customizations. Suereth said that the OpenTelemetry project is heading in the next five years toward being useful to app developers, where instrumentation can be particularly expensive. “Target devs to provide observability for operations instead of the opposite. That’s done through stability and protocols.” He said that right now observability, as with Prometheus, is much more focused on operations than on developer languages. “I think we’re going to start to see applications providing observability as part of their own profile.” Suereth continued that the OpenTelemetry open source project has an objective to have an API with all the traces, logs and metrics with a single pull, but it’s still to be determined how much data should be attached to it.
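A minimal sketch of what "applications providing observability as part of their own profile" can look like with the OpenTelemetry Python SDK (packages opentelemetry-api and opentelemetry-sdk); the console exporter and span names are illustrative, and production setups would export to a collector instead.

```python
# Minimal app-level tracing with the OpenTelemetry Python SDK; exporter is illustrative.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")
with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.items", 3)    # the application emits its own telemetry
```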


Data Exploration, Understanding, and Visualization

Many scaling methods require knowledge of critical values within the feature distribution and can cause data leakage. For example, a min-max scaler should be fit on training data only rather than the entire data set. When the minimum or maximum comes from the test set, you have introduced some data leakage into the prediction process. ... The one-dimensional frequency plot shown below each distribution provides insight into the data. At first glance, this information looks redundant, but these plots directly address problems when representing data in histograms or as distributions. For example, when data is transformed into a histogram, the number of bins is specified. It is difficult to decipher any pattern with too many bins, and with too few bins, the data distribution is lost. Moreover, representing data as a distribution assumes the data is continuous. When data is not continuous, this may indicate an error in the data or an important detail about the feature. The one-dimensional frequency plots fill in the gaps where histograms fail.
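The scaler point is easy to see in code: fit the scaler on the training split only, then apply the learned minimum and maximum to the test split. A minimal scikit-learn sketch, with synthetic data:

```python
# Avoiding leakage when scaling: fit on the training split only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

X = np.random.rand(1000, 5)
X_train, X_test = train_test_split(X, test_size=0.2, random_state=0)

scaler = MinMaxScaler().fit(X_train)        # critical values come from training data only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)    # the test set never influences the fit
```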


DevSecOps: A Complete Guide

Both DevOps and DevSecOps use some degree of automation for simple tasks, freeing up time for developers to focus on more important aspects of the software. The concept of continuous processes applies to both practices, ensuring that the main objectives of development, operation, or security are met at each stage. This prevents bottlenecks in the pipeline and allows teams and technologies to work in unison. By working together, development, operational or security experts can write new applications and software updates in a timely fashion, monitor, log, and assess the codebase and security perimeter as well as roll out new and improved codebase with a central repository. The main difference between DevOps and DevSecOps is quite clear. The latter incorporates a renewed focus on security that was previously overlooked by other methodologies and frameworks. In the past, the speed at which a new application could be created and released was emphasized, only to be stuck in a frustrating silo as cybersecurity experts reviewed the code and pointed out security vulnerabilities.


Skilling employees at scale: Changing the corporate learning paradigm

Corporate skilling programs have been founded on frameworks and models from the world of academia. Even when we have moved to digital learning platforms, the core tenets of these programs tend to remain the same. There is a standard course with finite learning material, a uniformly structured progression to navigate the learning, and the exact same assessment tool to measure progress. This uniformity and standardization have been the only approach for organizations to skill their employees at scale. As a result, organizations made a trade-off; content-heavy learning solutions which focus on knowledge dissemination but offer no way to measure the benefit and are limited to vanity metrics have become the norm for training the workforce at large. On the other hand, one-on-one coaching programs that promise results are exclusive only to the top one or two percent of the workforce, usually reserved for high-performing or high-potential employees. This is because such programs have a clear, measurable, and direct impact on behavioral change and job performance.


The Ultimate SaaS Security Posture Management (SSPM) Checklist

The capability of governance across the whole SaaS estate is both nuanced and complicated. While the native security controls of SaaS apps are often robust, it is the organization's responsibility to ensure that all configurations are properly set — from global settings, to every user role and privilege. It only takes one unknowing SaaS admin to change a setting or share the wrong report and confidential company data is exposed. The security team is burdened with knowing every app, user and configuration and ensuring they are all compliant with industry and company policy. Effective SSPM solutions address these pain points and provide full visibility into the company's SaaS security posture, checking for compliance with industry standards and company policy. Some solutions even offer the ability to remediate right from within the solution. As a result, an SSPM tool can significantly improve security-team efficiency and protect company data by automating the remediation of misconfigurations throughout the increasingly complex SaaS estate.


Why gamification is a great tool for employee engagement

Gamification is the beating heart of almost everything we touch in the digital world. With employees working remotely, this is the golden solution for employers. If applied in the right format, gaming can help create engagement in today's remote working environment, motivate personal growth, and encourage continuous improvement across an organization. ... In the connected workspace, gamification is essentially a method of providing simple goals and motivations that rely on digital rather than in-person engagement. At the same time, there is a tacit understanding among both game designer and "player" that when these goals are aligned in a way that benefits the organization, the rewards often impact more than the bottom line. Engaged employees are a valuable part of defined business goals, and studies show that non-engagement impacts the bottom line. At the same time, motivated employees are more likely to want to make the customer experience as satisfying as possible, especially if there is internal recognition of a job well done.


10 Cloud Deficiencies You Should Know

What happens if your cloud environment goes down due to challenges outside your control? If your answer is “Eek, I don’t want to think about that!” you’re not prepared enough. Disaster preparedness plans can include running your workload across multiple availability zones or regions, or even in a multicloud environment. Make sure you have stakeholders (and back-up stakeholders) assigned to any manual tasks, such as switching to backup instances or relaunching from a system restore point. Remember, don’t wait until you’re faced with a worst-case scenario to test your response. Set up drills and trial runs to make sure your ducks are quacking in a row. One thing you might not imagine the cloud being is … boring. Without cloud automation, there are a lot of manual and tedious tasks to complete, and if you have 100 VMs, they’ll require constant monitoring, configuration and management 100 times over. You’ll need to think about configuring VMs according to your business requirements, setting up virtual networks, adjusting for scale and even managing availability and performance. 
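As one small example of the toil that cloud automation removes, here is a hedged Python sketch that checks EC2 instances for a required tag, assuming AWS and the boto3 SDK; the tag name is a made-up stand-in for whatever your business requirements dictate.

```python
# Sketch: flag EC2 instances that are missing a required tag.
# Assumes AWS credentials are configured and boto3 is installed;
# the "environment" tag is a hypothetical business requirement.
import boto3

REQUIRED_TAG = "environment"

def untagged_instances() -> list[str]:
    ec2 = boto3.client("ec2")
    missing = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    for instance_id in untagged_instances():
        print(f"Missing '{REQUIRED_TAG}' tag: {instance_id}")
```

A script like this, run regularly, is exactly the class of repetitive check that otherwise grows linearly with your VM count when done by hand.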



Quote for the day:

"Leaders begin with a different question than others. Replacing who can I blame with how am I responsible?" -- Orrin Woodward

Daily Tech Digest - October 13, 2021

Stop Using Microservices. Build Monoliths Instead.

Building out a microservices architecture takes longer than rolling the same features into a monolith. While an individual service is simple, a collection of services that interact is significantly more complex than a comparable monolith. Functions in a monolith can call any other public functions. But functions in a microservice are restricted to calling functions in the same microservice. This necessitates communication between services. Building APIs or a messaging system to facilitate this is non-trivial. Additionally, code duplication across microservices can’t be avoided. Where a monolith could define a module once and import it many times, a microservice is its own app — modules and libraries need to be defined in each. ... The luxury of assigning microservices to individual teams is reserved for large engineering departments. Although it’s one of the big touted benefits of the architecture, it’s only feasible when you have the engineering headcount to dedicate several engineers to each service. Reducing code scope for developers gives them the bandwidth to understand their code better and increases development speed.
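A toy contrast, in Python, of the point about function calls versus inter-service communication; the module name, service URL, and endpoint are hypothetical.

```python
# In a monolith, pricing logic is an in-process call:
# from billing import calculate_total          # hypothetical module
# total = calculate_total(order)               # plain function call, no network

# In a microservices split, the same lookup becomes a network request
# to a separate service, with failure modes the monolith never had.
import requests  # assumes the 'requests' package is installed

BILLING_SERVICE_URL = "http://billing-service:8080"  # hypothetical service address

def calculate_total(order: dict) -> float:
    response = requests.post(
        f"{BILLING_SERVICE_URL}/totals",  # hypothetical endpoint
        json=order,
        timeout=2,
    )
    response.raise_for_status()  # network and service errors must now be handled
    return response.json()["total"]
```

Every such boundary also tends to drag in retries, serialization, and versioned contracts, which is the non-trivial work the excerpt refers to.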


DevOps at the Crossroads: The Future of Software Delivery

Even though DevOps culture is becoming mainstream, organizations are struggling with increasing tool sprawl, complexity, and costs. These teams are also dealing with a staggering (and growing) number of tools to help them get their work done. This has caused toil, with no single workflow and a lack of visibility. At Clear Ventures, the problems hit close to home, as 17 of the 21 companies we had funded had software development workflows that needed to be managed more efficiently. We found that some of the companies simply did not have the expertise to build out a DevOps workflow themselves. Other companies added expertise over time as they scaled up, but that required them to completely redo their workflows, resulting in a lot of wasted code and effort. We also noticed that engineering managers struggled with software quality and did not know how to measure productivity in the new remote/hybrid working environment. In addition, developers were getting frustrated with their inability to customize workflows without taking on a significant burden themselves.
A stateful architecture was invented to solve these problems, with the database and cache started in the same process as the application. There are several databases in the Java world that can run in embedded mode; one of them is Apache Ignite. Apache Ignite supports a full in-memory mode (providing high-performance computing) as well as native persistence. This architecture requires an intelligent load balancer: it needs to know about the partition distribution so it can redirect each request to the node where the requested data is actually located. If a request is redirected to the wrong node, the data will come over the network from other nodes. Apache Ignite supports data collocation, which guarantees that records from different tables are stored on the same node if they have the same affinity key. The affinity key is set at table creation. For example, the Users table (a cache, in Ignite terms) has the primary key userId, and the Orders table may have userId as its affinity key.
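A hedged sketch of how that collocation might be declared, using the pyignite thin client and Ignite SQL DDL; the connection address, table columns, and even the choice of client are assumptions for illustration, so check the syntax against your Ignite version.

```python
# Sketch: collocating Orders with Users on the same affinity key (userId).
# Assumes an Ignite node is reachable on localhost:10800 and pyignite is installed.
from pyignite import Client

client = Client()
client.connect("127.0.0.1", 10800)

# Users is partitioned by its primary key, userId.
client.sql(
    "CREATE TABLE IF NOT EXISTS Users ("
    "  userId INT PRIMARY KEY,"
    "  name VARCHAR"
    ') WITH "template=partitioned"'
)

# Orders declares userId as its affinity key, so each user's orders
# should land on the same node as the user record itself.
client.sql(
    "CREATE TABLE IF NOT EXISTS Orders ("
    "  orderId INT,"
    "  userId INT,"
    "  amount DECIMAL,"
    "  PRIMARY KEY (orderId, userId)"
    ') WITH "template=partitioned, affinity_key=userId"'
)
```

Note that the affinity key column has to be part of the primary key, which is why Orders uses a composite key here.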


Here’s Why You Should Consider Becoming a Data Analyst

Data analysts specialize in gathering raw data and deriving insights from it. They have the patience and curiosity to poke around large amounts of data until they find meaningful information in it, after which they clean up and present their findings to stakeholders. Data analysts use many different tools to come up with answers: SQL, Python, and sometimes even Excel, to quickly solve problems. The end goal of an analyst is to solve a business problem with the help of data. This means that they either need to have the necessary domain knowledge or work closely with someone who already has the required industry expertise. Data analysts are curious people by nature. If they see a sudden change in data trends (like a small spike in sales at the end of the month), they will go out of their way to identify whether the same pattern can be observed throughout the year. They then try to piece this together with industry knowledge and marketing efforts, and provide the company with advice on how to cater to its audience.
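A toy version of that month-end spike check, in Python with pandas; the file name, column names, and the 1.5x threshold are all invented for the sake of the example.

```python
# Sketch: does the month-end sales spike recur across the year?
# 'daily_sales.csv' with columns 'date' and 'sales' is hypothetical sample data.
import pandas as pd

df = pd.read_csv("daily_sales.csv", parse_dates=["date"])
df["month"] = df["date"].dt.to_period("M")

# Compare the last 3 days of each month against the rest of that month.
df["is_month_end"] = df["date"].dt.day >= df["date"].dt.days_in_month - 2

summary = (
    df.groupby(["month", "is_month_end"])["sales"]
      .mean()
      .unstack()
      .rename(columns={False: "rest_of_month", True: "month_end"})
)
# Flag months where month-end sales run more than 50% above the baseline.
summary["spike"] = summary["month_end"] > 1.5 * summary["rest_of_month"]
print(summary)
```

The interesting analyst work starts after the flag: tying recurring spikes back to promotions, billing cycles, or marketing pushes.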


Siloscape: The Dark Side of Kubernetes

Good IT behavior starts with the user. As someone who has witnessed the impacts of ransomware firsthand, I can attest to the importance of good password hygiene. I recommend using unique, differentiated passwords for each user account, ensuring correct password (and data) encryption at rest and in transit, and keeping vulnerable and valuable data out of plaintext whenever possible. In the case of Kubernetes, you must ensure that you understand how to secure it from top to bottom. Kubernetes offers some of the most well-written and understandable documentation out there, including an entire section on how to configure, manage, and secure your cluster properly. Kubernetes can be an awesome way to level up applications and services. Still, the importance of properly configuring each Kubernetes cluster cannot be overstated. In addition to good hygiene, having a trusted data management platform in place is essential for making protection and recovery from ransomware like Siloscape less burdensome.


An Introduction to Hybrid Microservices

Put simply, a hybrid microservices architecture comprises a mix of the two architectural approaches: some components adhere to the microservices architectural style, while others follow the monolithic architectural style. A hybrid microservices architecture usually comprises a collection of scalable, platform-agnostic components. It should take advantage of open-source tools, technologies, and resources and adopt a business-first approach with several reusable components. Hybrid microservices architectures are well suited to cloud-native, containerized applications. A hybrid microservices-based application is a conglomeration of monolithic and microservices architectures – one in which some parts of the application are built as microservices and the remaining parts remain a monolith. ... When adopting a microservices architecture, the usual approach is to refactor the application and then implement the microservices architecture within it.


The Inevitability of Multi-Cloud-Native Apps

Consistently delivering rapid software iteration across a global footprint forces DevOps organizations to grapple with an entirely new set of technical challenges: leveraging containerized applications and microservices architectures in production across multiple Kubernetes clusters running in multiple geographies. Customers want an on-demand experience. This third phase is what we call multi-cloud-native, and it was pioneered by hyperscale IaaS players like Google, AWS, Azure and Tencent. The reality is, of course, that hyperscalers aren't the only ones who have figured out how to deliver multi-cloud-native apps. Webscale innovators like Doordash, Uber, Twitter and Netflix have done it, too. To get there, they had to deliver their multi-cloud-native apps across every geography where their customers live. And, in turn, to make that happen they had to tackle a new set of challenges: developing new tools and techniques like geographically distributed, planet-scale databases and analytics engines, and application architectures that run backend apps close to the consumer in a multi-cloud-native way.


DeepMind is developing one algorithm to rule them all

The key thesis is that algorithms possess fundamentally different qualities when compared to deep learning methods — something Blundell and Veličković elaborated upon in their introduction of NAR. This suggests that if deep learning methods were better able to mimic algorithms, then generalization of the sort seen with algorithms would become possible with deep learning. Like all well-grounded research, NAR has a pedigree that goes back to the roots of the fields it touches upon, and branches out to collaborations with other researchers. Unlike much pie-in-the-sky research, NAR has some early results and applications to show. We recently sat down to discuss the first principles and foundations of NAR with Veličković and Blundell, to be joined as well by MILA researcher Andreea Deac, who expanded on specifics, applications, and future directions. Areas of interest include the processing of graph-shaped data and pathfinding.


Microservices Transformed DevOps — Why Security Is Next

Microservices break an application into tens or hundreds of small, individual pieces of software that address discrete functions and work together via separate APIs. A microservices-based approach enables teams to update those individual pieces of software separately, without having to touch every part of the application. Development teams can move much more quickly, and software updates can happen much more frequently, because releases are smaller. This shift in the way applications are built and updated has created a second movement: a change in how software teams function and work. In this modern environment, software teams are responsible for smaller pieces of code that address a function within the app. For example, say a pizza company has one team (Team 1) solely focused on the software around ordering and another (Team 2) on the tracking feature of a customer's delivery. If there is an update to the ordering function, it shouldn't affect the work that Team 2 is doing. A microservices-based architecture is not only changing how software is created ...
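A deliberately tiny sketch of that split, using Flask purely as an example framework; the endpoints, ports, and payloads are hypothetical, and in practice each team's service would live in its own repository and deploy pipeline.

```python
# ordering_service.py -- Team 1's service (hypothetical).
# Team 2's tracking service is a separate deployable, reached only through its API.
from flask import Flask, jsonify, request  # assumes Flask is installed
import requests                            # assumes requests is installed

app = Flask(__name__)
TRACKING_SERVICE_URL = "http://tracking-service:5001"  # hypothetical address

@app.route("/orders", methods=["POST"])
def create_order():
    order = request.get_json()
    # Ordering logic can change and redeploy without touching Team 2's code;
    # the only shared surface is the tracking service's HTTP contract.
    requests.post(
        f"{TRACKING_SERVICE_URL}/deliveries",  # hypothetical endpoint
        json={"order_id": order["id"]},
        timeout=2,
    )
    return jsonify({"status": "accepted", "order_id": order["id"]}), 201

if __name__ == "__main__":
    app.run(port=5000)
```

As long as the HTTP contract between the two services is stable, Team 1 can ship ordering changes as often as it likes without coordinating releases with Team 2.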


Transitioning from Monolith to Microservices

While there are many goals for a microservice architecture, the key wins are flexibility, delivery speed, and resiliency. After establishing your baseline for the delta between code commit and production deployment completion, measure the same process for a microservice. Similarly, establish a baseline for “business uptime” and compare it to that of your post-microservice implementation. “Business uptime” is the uptime required by necessary components in your architecture as it relates to your primary business goals. With a monolith, you deploy all of your components together, so a fault in one component could affect your entire monolithic application. As you transition to microservices, the pieces that remain in the monolith should be minimally affected, if at all, by the microservice components that you’re creating. ... Suppose you’ve abstracted your book ratings into a microservice. In that case, your business can still function—and would be minimally impacted if the book ratings service goes down—since what your customers primarily want to do is buy books.
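To illustrate that isolation, here is a hedged Python sketch of the storefront degrading gracefully when the hypothetical ratings microservice is unavailable; the URL and response shape are assumptions.

```python
# Sketch: the book page still renders if the ratings microservice is down.
import requests  # assumes the 'requests' package is installed

RATINGS_SERVICE_URL = "http://ratings-service:8080"  # hypothetical address

def get_rating(book_id: str) -> float | None:
    """Return the average rating, or None if the ratings service is unavailable."""
    try:
        response = requests.get(f"{RATINGS_SERVICE_URL}/ratings/{book_id}", timeout=1)
        response.raise_for_status()
        return response.json().get("average")
    except requests.RequestException:
        # Degrade gracefully: customers can still buy the book without a rating.
        return None

def render_book_page(book: dict) -> str:
    rating = get_rating(book["id"])
    rating_text = f"{rating:.1f}/5" if rating is not None else "Ratings temporarily unavailable"
    return f"{book['title']} - {rating_text} - Buy now"
```

The short timeout and the fallback value are what keep a fault in the ratings component from dragging down "business uptime" for the purchase path.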



Quote for the day:

"The essence of leadership is not giving things or even providing visions. It is offering oneself and one's spirit." -- Lee Bolman & Terence Deal