Daily Tech Digest - February 26, 2022

How To Study and Learn Complex Software Engineering Concepts

Chunking is a powerful technique for learning new concepts: break big, complex subjects down into smaller, manageable units that represent the core concepts you need to master. Let’s say you would like to start your Data Science journey. Grab a book or find a comprehensive online curriculum on the subject and begin by scanning the table of contents and skim-reading the chapters, browsing the headers, sub-headers and illustrations. This gives you a feel for the material you are about to explore, lets you note how it is organised, and helps you see the big picture so you can fill in the details later. After this first stage, you need to start learning the ins and outs of the individual chunks. It is not as intimidating as you originally thought, as you have already formed an idea of what you will be studying. So, continuing our example, you can go through the book chapters in depth, and then supplement your knowledge by consulting Wikipedia, watching video tutorials, finding online resources, and taking extensive notes along the way.


RISC-V AI Chips Will Be Everywhere

The adoption of RISC-V, a free and open-source computer instruction set architecture first introduced in 2010, is taking off like a rocket. And much of the fuel for this rocket is coming from demand for AI and machine learning. According to the research firm Semico, the number of chips that include at least some RISC-V technology will grow 73.6 percent per year to 2027, when there will be some 25 billion AI chips produced, accounting for US $291 billion in revenue. The increase from what was still an upstart idea just a few years ago to today is impressive, but for AI it also represents something of a sea change, says Dave Ditzel, whose company Esperanto Technologies has created the first high-performance RISC-V AI processor intended to compete against powerful GPUs in AI-recommendation systems. According to Ditzel, during the early mania for machine learning and AI, people assumed general-purpose computer architectures—x86 and Arm—would never keep up with GPUs and more purpose-built accelerator architectures.


Sustainable architectures in a world of Agile, DevOps, and cloud

Driving architectural decisions is an essential activity in Continuous Architecture, and architectural decisions are the primary unit of work of a practitioner. Almost every architectural decision involves tradeoffs. For example, a decision made to optimize the implementation of a quality attribute requirement such as performance may negatively impact the implementation of other quality attributes, such as usability or maintainability. An architectural decision made to accelerate the delivery of a software system may increase technical debt, which needs to be “repaid” at some point in the future and may impact the sustainability of the system. Finally, all architectural decisions affect the cost of the system, and compromises may need to be made in order to meet the budget allocated to that system. All tradeoffs are reflected in the executable code base. Because of constraints beyond the team’s control, the tradeoffs made are often the least unfavorable options rather than the optimal ones, and decisions often need to be adjusted based on feedback from the system’s stakeholders.
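
One lightweight way to make such tradeoffs visible is to log each decision alongside the quality attributes it favors and the ones it compromises. Here is a minimal sketch in Python, assuming an illustrative Redis-caching decision; the record fields and the example are assumptions for illustration, not part of the Continuous Architecture method itself:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ArchitecturalDecision:
    """A lightweight record of one architectural decision and its tradeoffs."""
    title: str
    rationale: str
    favors: List[str]        # quality attributes the decision optimizes
    compromises: List[str]   # quality attributes it trades away
    technical_debt: str      # what must be "repaid" later

# Illustrative entry (hypothetical example, not from the article):
decision = ArchitecturalDecision(
    title="Cache read-heavy catalog queries in Redis",
    rationale="Meet the p99 latency target for the catalog API.",
    favors=["performance", "scalability"],
    compromises=["maintainability"],  # cache-invalidation logic adds complexity
    technical_debt="Manual cache invalidation until event-driven sync is built.",
)
print(f"{decision.title}: favors {decision.favors}, compromises {decision.compromises}")
```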


6 Cyber-Defense Steps to Take Now to Protect Your Company

Modern device management is an essential part of increasing security in remote and hybrid work environments. A unified endpoint management (UEM) approach fully supports bring-your-own-device (BYOD) initiatives while maximizing user privacy and securing corporate data at the same time. UEM architectures usually include the ability to easily onboard and configure device and application settings at scale, establish device hygiene with risk-based patch management and mobile threat protection, monitor device posture and ensure compliance, identify and remediate issues quickly and remotely, automate software updates and OS deployments, and more. Choose a UEM solution with management capabilities for a wide range of operating systems, and one that is available both on-premises and via software-as-a-service (SaaS). ... Companies should look to combat device vulnerabilities (jailbroken devices, vulnerable OS versions, etc.), network vulnerabilities and application vulnerabilities (high security risk assessment, high privacy risk assessment, suspicious app behavior, etc.).
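
Commercial UEM platforms implement this kind of risk-based posture evaluation internally. As a minimal illustrative sketch (the DevicePosture fields and the MIN_OS_VERSION threshold are assumptions, not any vendor’s API), a compliance rule combining the device vulnerabilities listed above might look like this:

```python
from dataclasses import dataclass

# Hypothetical minimum patch level; real UEM products ship their own policy engines.
MIN_OS_VERSION = (16, 3)

@dataclass
class DevicePosture:
    os_version: tuple      # e.g. (16, 2)
    jailbroken: bool
    threat_detected: bool  # flagged by mobile threat protection

def is_compliant(device: DevicePosture) -> bool:
    """Risk-based check: block jailbroken, outdated, or threat-flagged devices."""
    if device.jailbroken or device.threat_detected:
        return False
    return device.os_version >= MIN_OS_VERSION

# A jailbroken device on an outdated OS would be quarantined for remediation.
print(is_compliant(DevicePosture(os_version=(15, 7), jailbroken=True, threat_detected=False)))  # False
```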


Europe proposes rules for fair access to connected device data

The Data Act looks to be a key component of the EU’s response to that threat. ... Secondly, the Commission is concerned about abusive contractual terms being imposed on smaller companies by more powerful platforms and market players to, essentially, extract the less powerful company’s most valuable data — so the Data Act will bring in a “fairness test” with the goal of protecting SMEs against unfair contractual terms. The legislation will stipulate a list of unilaterally imposed contractual clauses that are deemed or presumed to be unfair — such as a clause stating that a company can unilaterally interpret the terms of the contract — and those that do not pass the test will not be binding on SMEs. The Commission says it will also develop and recommend non-binding model contractual terms, saying these standard clauses will help SMEs negotiate “fairer and balanced data sharing contracts with companies enjoying a significantly stronger bargaining position”. Some major competition complaints lodged against tech giants in the EU have concerned their access to third-party data, such as the investigation into Amazon’s use of merchants’ data.


Mind of its own: Will “general AI” be like an alien invasion?

Yes, we will have created a rival, and yet we may not recognize the dangers right away. In fact, we humans will most likely look upon our super-intelligent creation with overwhelming pride — one of the greatest milestones in recorded history. Some will compare it to attaining godlike powers: the ability to create thinking and feeling creatures from scratch. But soon it will dawn on us that these new arrivals have minds of their own. They will surely use their superior intelligence to pursue their own goals and aspirations, driven by their own needs and wants. It is unlikely they will be evil or sadistic, but their actions will certainly be guided by their own values, morals, and sensibilities, which will be nothing like ours. Many people falsely assume we will solve this problem by building AI systems in our own image, designing technologies that think and feel and behave just like we do. This is unlikely to be the case. Artificial minds will not be created by writing software with carefully crafted rules that make them behave like us.


5 ITSM hurdles and how to overcome them

Unclear communication makes it far more difficult to explain the value of ITSM to the business, to properly organize ITSM efforts, to set expectations for its deployment and to secure proper funding for it. Hjortkjær suggests using the CMDB to map IT components to business applications, assign ownership of those applications to both IT and business sponsors, and ask those sponsors to explain the role of each application to the business, as well as how best to use it and eventually when to replace it. Thomas Smith, director of telecommunications and IT support at funeral goods and services provider Service Corp. International, recommends being candid about schedules. “One of the biggest mistakes we made in the past, and still make, is to say ‘We’re going to get it done in three months.’ Four months later, everyone is still hoping for three months,” he says. Understand any deficiencies in your ITSM tool or services, he recommends, “and tell the business process owners ‘We have a plan to address it.’” Calvo says the terms of SLAs, such as those it created using BMC’s Helix ITSM platform, can help set expectations and reduce frustration from users who “think everything should be solved ASAP.”


Data Mapping Best Practices

Many applications share the same pattern of naming common fields on the frontend, but under the hood these same fields can have quite different labels. Consider the field “Customers”: in the source code of your company’s CRM it might still be labelled “customers”, but your ERP system calls it “clients”, your finance tool calls it “customer”, and the tool your organization uses for customer messaging labels it “users” altogether. This label conundrum is probably one of the most common data mapping examples. To add to the complexity, what if a two-field data output from one system is expected as a one-field data input in another, or vice versa? This commonly happens with First Name / Last Name: a certain customer “Allan” “McGregor” from your eCommerce system will need to become “Allan McGregor” in your ERP. Or my favorite example: the potential customer email address submitted through your company’s website will need to become “first-name: Steven”, “last-name: Davis” and “company: Rangers” in your customer relationship management tool.
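
A minimal sketch of what such a mapping layer might look like in Python follows; the system names and field labels are the illustrative ones from the text, not a real integration API:

```python
# Which label each system uses for the same logical "Customers" field.
FIELD_MAP = {
    "crm": "customers",
    "erp": "clients",
    "finance": "customer",
    "messaging": "users",
}

def split_full_name(full_name: str) -> dict:
    """One-field output -> two-field input (naive split on the first space)."""
    first, _, last = full_name.partition(" ")
    return {"first-name": first, "last-name": last}

def join_name(record: dict) -> str:
    """Two-field output -> one-field input for systems expecting a full name."""
    return f'{record["first-name"]} {record["last-name"]}'

print(FIELD_MAP["messaging"])  # users
# "Allan" "McGregor" from the eCommerce system becomes "Allan McGregor" in the ERP.
print(join_name({"first-name": "Allan", "last-name": "McGregor"}))  # Allan McGregor
print(split_full_name("Steven Davis"))  # {'first-name': 'Steven', 'last-name': 'Davis'}
```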
 

How to perform Named Entity Recognition (NER) using a transformer?

Named entities can belong to different classes: Virat Kohli is the name of a person, and Lenovo is the name of a company. The process of recognizing such entities along with their class can be considered Named Entity Recognition. Traditional approaches to NER mostly rely on spaCy and NLTK. NER has a variety of applications in natural language processing, for example summarizing information from documents, search engine optimization, content recommendation, and identifying entities in biomedical processes. In this article, we aim to make the implementation of NER easy by using transformers like BERT. Since the implementation will be performed using BERT, we first need to know what BERT is, which we explain in the next section. In one of the previous articles, we had a detailed introduction to BERT. BERT stands for Bidirectional Encoder Representations from Transformers; it is a famous transformer in the field of NLP and, like the others, is pre-trained.
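
As a preview of how simple transformer-based NER can be, here is a minimal sketch using the Hugging Face transformers pipeline; the checkpoint dslim/bert-base-NER is one publicly available BERT model fine-tuned for NER, chosen here as an assumption rather than the one used later in the article:

```python
from transformers import pipeline

# aggregation_strategy="simple" merges sub-word tokens into whole entities.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

for entity in ner("Virat Kohli signed a sponsorship deal with Lenovo."):
    # Each entity carries a class label (PER, ORG, ...) and a confidence score.
    print(entity["word"], entity["entity_group"], round(float(entity["score"]), 3))
```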


Using artificial intelligence to find anomalies hiding in massive datasets

To learn the complex conditional probability distribution of the data, the researchers used a special type of deep-learning model called a normalizing flow, which is particularly effective at estimating the probability density of a sample. They augmented that normalizing flow model using a type of graph, known as a Bayesian network, which can learn the complex, causal relationship structure between different sensors. This graph structure enables the researchers to see patterns in the data and estimate anomalies more accurately, Chen explains. “The sensors are interacting with each other, and they have causal relationships and depend on each other. So, we have to be able to inject this dependency information into the way that we compute the probabilities,” he says. This Bayesian network factorizes, or breaks down, the joint probability of the multiple time series data into less complex, conditional probabilities that are much easier to parameterize, learn, and evaluate.
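
The factorization itself is easy to illustrate. In the minimal sketch below (the three-sensor graph and the Gaussian conditionals are toy assumptions, not the researchers’ model), a Bayesian network breaks one joint log-probability into a sum of per-node conditional log-probabilities, and unusually low values flag anomalies:

```python
import math

# Illustrative 3-sensor graph: s1 -> s2 and s1 -> s3 (parents listed per node).
PARENTS = {"s1": [], "s2": ["s1"], "s3": ["s1"]}

def log_gaussian(x, mean, std):
    """Log-density of a univariate Gaussian."""
    return -0.5 * math.log(2 * math.pi * std**2) - (x - mean) ** 2 / (2 * std**2)

def log_conditional(node, value, parent_values):
    # Toy conditional: each sensor is expected to track the mean of its parents.
    mean = sum(parent_values) / len(parent_values) if parent_values else 0.0
    return log_gaussian(value, mean, std=1.0)

def log_joint(sample):
    """log p(s1, s2, s3) = sum over nodes of log p(node | parents)."""
    return sum(
        log_conditional(node, sample[node], [sample[p] for p in PARENTS[node]])
        for node in PARENTS
    )

# A sample whose sensors disagree wildly gets a low joint probability -> anomaly.
print(log_joint({"s1": 0.1, "s2": 0.0, "s3": 0.2}))  # plausible reading
print(log_joint({"s1": 0.1, "s2": 9.0, "s3": 0.2}))  # anomalous reading
```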



Quote for the day:

"It's not about how smart you are--it's about capturing minds." -- Richie Norton
