Daily Tech Digest - April 24, 2021

An Insight Into Global Payment Technologies With James Booth

With the e-commerce market set to reach a predicted market volume of £92 million by 2025, and the opportunity for cross-border expansion at an all-time high, the demand for more localised and innovative payment methods will only continue to grow. More and more customers are now online, looking for products or services that suit their very specific needs. A shopper might look across borders for what they want: better-quality products, familiar payment methods, stronger brand loyalty, and more. But they will quickly abandon the transaction page if their preferred payment method is not available. Ultimately, payment choice will play a major role in driving sales in the future, meaning merchants will need a diverse payment portfolio to ensure transactions are completed and customer loyalty is retained. This will continue to spark innovation in payments, but also the proliferation of niche local payment options across the globe. However, as digital payments head towards a global tipping point, the need for greater regulation and security will also continue to grow.


Dealing With Stubbornness Of AI Autonomous Vehicles

Shifting gears, the future of cars is self-driving cars. The stubbornness element in the flatbed truck tale brings up an interesting facet of self-driving cars, and one that few are giving much attention to. First, be aware that true self-driving cars are driven by an AI-based driving system, not by a human driver. Thus, in the case of this flatbed truck scenario, if the car had been a self-driving car, the AI driving system would have been trying to get the car up that ramp and onto the flatbed. Second, there are going to be instances wherein a human wants a self-driving car to go someplace, but the AI driving system will “refuse” to do so. I want to clarify that the AI is not somehow sentient; the type of AI being devised today is in no manner whatsoever approaching sentience. Perhaps far in the future we will achieve that kind of AI, but that’s not in the cards right now. This latter point is important because the AI driving system opting to “refuse” to drive someplace is not due to the AI being a sentient being; it is merely a programmatic indication that the AI has detected a situation in which it is not programmed to drive.
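To make that last point concrete, here is a minimal, purely illustrative sketch (all names and rules are invented) showing how such a "refusal" can be nothing more than a failed capability check:

```typescript
// Hypothetical sketch: a "refusal" is just a failed capability check,
// not a judgment by a sentient system. All names here are illustrative.
interface DriveRequest {
  destination: string;
  requiresOffRoad: boolean;        // e.g. climbing a flatbed ramp
  requiresUnmappedArea: boolean;
}

type DriveDecision =
  | { accepted: true }
  | { accepted: false; reason: string };

function evaluateDriveRequest(req: DriveRequest): DriveDecision {
  // The AI driving system only accepts requests inside its
  // operational design domain (ODD); anything else is "refused".
  if (req.requiresOffRoad) {
    return { accepted: false, reason: "Maneuver outside operational design domain" };
  }
  if (req.requiresUnmappedArea) {
    return { accepted: false, reason: "Destination not covered by map data" };
  }
  return { accepted: true };
}

console.log(evaluateDriveRequest({
  destination: "flatbed ramp",
  requiresOffRoad: true,
  requiresUnmappedArea: false,
}));
// -> { accepted: false, reason: "Maneuver outside operational design domain" }
```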


Rise of APIs brings new security threat vector -- and need for novel defenses

The speed is important. The pandemic has been even more of a challenge for a lot of companies. They had to move to more of a digital experience much faster than they imagined. So speed has become way more prominent. But that speed creates a challenge around safety, right? Speed creates two main things. One is that you have more opportunity to make mistakes. If you ask people to do something very fast because there’s so much business and consumer pressure, sometimes you cut corners and make mistakes. Not deliberately. It’s just that software engineers can never write completely bug-free code. But if you have more bugs in your code because you are moving very, very fast, it creates a greater challenge. So how do you create safety around it? By catching these security bugs and issues much earlier in your software development life cycle (SDLC). If a developer creates a new API and that API could be exploited by a hacker -- because there is a bug in that API's security authentication check -- you have to try to find it in your test cycle and your SDLC. The second way to gain security is by creating a safety net. Even if you find things earlier in your SDLC, it’s impossible to catch everything.
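As an illustration of catching that bug class early, here is a hedged sketch in which a hand-rolled handler and test stand in for a real web framework and test runner; the endpoint, token store, and names are all invented:

```typescript
// Illustrative only: a hand-rolled handler and test stand in for a real web
// framework and test runner. The bug class is a missing authentication check.
interface ApiRequest { headers: Record<string, string>; }
interface ApiResponse { status: number; body: unknown; }

const VALID_TOKENS = new Set(["secret-token"]); // stand-in for a real token store

function getAccountHandler(req: ApiRequest): ApiResponse {
  // The security-relevant lines: omit this check and the API leaks data.
  const token = req.headers["authorization"];
  if (!token || !VALID_TOKENS.has(token)) {
    return { status: 401, body: { error: "unauthorized" } };
  }
  return { status: 200, body: { account: "sensitive account data" } };
}

// A test like this, running in CI, catches the bug class early in the SDLC:
function testRejectsMissingToken(): void {
  const res = getAccountHandler({ headers: {} });
  if (res.status !== 401) {
    throw new Error(`expected 401 for an unauthenticated request, got ${res.status}`);
  }
}
testRejectsMissingToken();
console.log("auth check test passed");
```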


Will you be heading back to the office? Should you?

The vast majority said it had worked out much better than they expected. They found people were more, rather than less, productive. It turns out folks welcomed not having to deal with long commutes, crowded open-plan offices, or Dilbert-like cubicle farms. Of course, not everyone is happy. Juggling kids and office work can mean misery. But in a recent survey of 3,000 staffers on the professional social network Blind, 35% said they would quit their jobs if work from home ends. That's a lot. I'd hate to have to replace more than a third of my staff because I insisted everyone return to 1 Corporate Drive. If your people want to work from home, and they've shown they can deliver, why take a chance on losing them? Not everyone is on board with the change. As one Microsoft staffer on Blind put it, "I don’t think the 5-day work in the office will ever be relevant again. You will have Team A and Team B, working 2 days in the office and 3 days at home. Social interaction in person is needed." Notice, though, that even here, there's no assumption of a five-day in-office week.


4 reasons to learn machine learning with JavaScript

Fortunately, not all machine learning applications require expensive servers. Many models can be compressed to run on user devices. And mobile device manufacturers are equipping their devices with chips to support local deep learning inference. But the problem is that Python machine learning is not supported by default on many user devices. macOS and most versions of Linux come with Python preinstalled, but you still have to install machine learning libraries separately. Windows users must install Python manually. And mobile operating systems have very poor support for Python interpreters. JavaScript, on the other hand, is natively supported by all modern mobile and desktop browsers. This means JavaScript machine learning applications are guaranteed to run on most desktop and mobile devices. Therefore, if your machine learning model runs as JavaScript code in the browser, you can rest assured that it will be accessible to nearly all users. There are already several JavaScript machine learning libraries. An example is TensorFlow.js, the JavaScript version of Google’s famous TensorFlow machine learning and deep learning library.
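As a taste of what that looks like, here is a minimal sketch that fits a toy linear model using the standard TensorFlow.js API; the tiny dataset is made up:

```typescript
// A minimal TensorFlow.js sketch: fit y = 2x - 1 in the browser or Node.
// The API calls are standard TensorFlow.js; the toy dataset is invented.
import * as tf from "@tensorflow/tfjs";

async function run(): Promise<void> {
  const model = tf.sequential();
  model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
  model.compile({ optimizer: "sgd", loss: "meanSquaredError" });

  // Toy training data: inputs x and targets y = 2x - 1.
  const xs = tf.tensor2d([0, 1, 2, 3], [4, 1]);
  const ys = tf.tensor2d([-1, 1, 3, 5], [4, 1]);

  await model.fit(xs, ys, { epochs: 200 });

  // Inference runs entirely on the user's device -- no Python required.
  (model.predict(tf.tensor2d([10], [1, 1])) as tf.Tensor).print(); // ~19
}
run();
```

The same model, once saved and reloaded, serves predictions without any server round trip, which is the accessibility argument the article is making.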


4 Software QA Metrics To Enhance Dev Quality and Speed

The caliber of code is fundamental to the quality of your product. Through frequent reviews you can assess the health of your software, detecting unreliable code and defects in the building blocks of your project. Identifying flaws is going to help you throughout the dev process and well into the future. Good-quality code will allow you to reduce the risk of defects and avoid application and website crashes. Today, much of this process can be automated, avoiding human error and diverting resources toward other tasks. But there are a number of code quality analytics you can focus on. ... Flagging issues in the working process can draw attention to inefficiencies, allowing the opportunity to implement project management solutions. Once flaws are established, there’s a whole host of management software for small businesses and large businesses alike to improve efficiency. Automation can also help you through the testing process. According to PractiTest, 78% of organizations currently use test automation for functional or regression tests. This automation will ultimately save time and money, eliminating human error and allowing resources to be redirected elsewhere in the dev process.


5 Fundamental But Effective IoT Device Security Controls

IoT devices introduce a host of vulnerabilities into organizations’ networks and are often difficult to patch. With more than 30 billion active IoT device connections estimated by 2025, it is imperative that information-security professionals find an efficient framework to better monitor and protect IoT devices from being leveraged for distributed denial of service (DDoS), ransomware or even data exfiltration. When the convenience of a doorbell camera, robot vacuum cleaner or cellphone-activated thermostat could potentially wreak financial havoc or threaten physical harm, the security of these devices cannot be taken lightly. We must refocus our cyber-hygiene mindset to view these devices as potential threats to our sensitive data. There are too many examples of threat actors gaining access to a supposedly insignificant IoT device, like the HVAC control system for a global retail chain, only to pivot to other unsecured devices on the same network before reaching valuable sensitive information. While phishing remains the most popular attack vector, reinforcing the need for humans to be an integral part of a strong security program, IoT devices now offer another avenue for cybercriminals to access accounts and networks to steal data, conduct reconnaissance and further deploy malware.


Improving model performance through human participation

In order to achieve high-quality human reviews, it is important to set up a well-defined training process for the human agents who will be responsible for reviewing items manually. A well-thought-out training plan and a regular feedback loop for the human agents will help maintain the high quality bar of the manually reviewed items over time. This rigorous training and feedback loop helps minimize human error in addition to helping maintain SLA requirements for per-item decisions. Another strategy, which is slightly more expensive, is to use a best-of-3 approach for each item that is manually reviewed, i.e., use 3 agents to review the same item and take the majority vote from the 3 agents to decide the final outcome. In addition, log the disagreements between the agents so that the teams can retrospect on these disagreements to refine their judging policies. Best practices applicable to microservices apply here as well. This includes appropriate monitoring of the following: end-to-end latency of an item from the time it was received in the system to the time a decision was made on it; overall health of the agent pool; volume of items sent for human review; and hourly statistics on the classification of items.
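A minimal sketch of that best-of-3 vote with disagreement logging might look like the following; the labels and log shape are illustrative rather than taken from any particular production system:

```typescript
// Sketch of the best-of-3 review strategy described above. The labels and
// logging shape are illustrative, not from a specific production system.
type Verdict = "approve" | "reject";

interface ReviewResult {
  finalDecision: Verdict;
  unanimous: boolean;
}

const disagreementLog: Array<{ itemId: string; votes: Verdict[] }> = [];

function bestOfThree(itemId: string, votes: [Verdict, Verdict, Verdict]): ReviewResult {
  const approvals = votes.filter((v) => v === "approve").length;
  const finalDecision: Verdict = approvals >= 2 ? "approve" : "reject";
  const unanimous = approvals === 0 || approvals === 3;
  if (!unanimous) {
    // Log splits so the team can retrospect and refine judging policies.
    disagreementLog.push({ itemId, votes: [...votes] });
  }
  return { finalDecision, unanimous };
}

console.log(bestOfThree("item-42", ["approve", "reject", "approve"]));
// -> { finalDecision: "approve", unanimous: false }, and the split is logged
```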


The challenges of applied machine learning

One of the key challenges of applied machine learning is gathering and organizing the data needed to train models. This is in contrast to scientific research, where training data is usually available and the goal is to create the right machine learning model. “When creating AI in the real world, the data used to train the model is far more important than the model itself,” Rochwerger and Pang write in Real World AI. “This is a reversal of the typical paradigm represented by academia, where data science PhDs spend most of their focus and effort on creating new models. But the data used to train models in academia are only meant to prove the functionality of the model, not solve real problems. Out in the real world, high-quality and accurate data that can be used to train a working model is incredibly tricky to collect.” In many applied machine learning applications, public datasets are not useful for training models. You need to either gather your own data or buy it from a third party. Both options have their own set of challenges. For instance, in the herbicide surveillance scenario mentioned earlier, the organization will need to capture a lot of images of crops and weeds.


Window Snyder Launches Startup to Fill IoT Security Gaps

In the connected device market, she sees a large attack surface and small security investment. "There are so many devices out there that don't have any of these mechanisms in place," she explains. "Even for those that do have security mechanisms, not all of them are built to the kind of resilience that's appropriate for the threats they're up against." It's a big problem with multiple causes. Some organizations have small engineering teams and few resources to build resilience into their products. Some have large teams but don't prioritize security because, for example, they're in a closed-system manufacturing operation and the machines don't have network access. Many connected devices are in the field for long periods of time, and it's hard to deliver updates, so manufacturers don't ship them unless they have to. "There's this combination of both security need and then additionally this requirement for an update mechanism that is reliable," Snyder continues. Oftentimes manufacturers lack confidence in how updates are deployed and don't trust that the mechanism will deliver medium- or high-severity security updates on a regular basis.



Quote for the day:

"Authority without wisdom is like a heavy ax without an edge -- fitter to bruise than polish." -- Anne Bradstreet

Daily Tech Digest - April 23, 2021

Advanced anomaly detection: how to defeat ransomware

With perimeter defenses increasingly becoming a thing of the past, attack surfaces increasing, and adversaries becoming more capable, the managed detection and response (MDR) model has piqued interest in major industries. A crucial difference between MDR and traditional ransomware defenses is MDR’s proactive response to threats. MDR is a powerful managed security service that combines threat intelligence, threat hunting, security monitoring, incident analysis, and incident response. It leverages telemetry on endpoints, monitors user behaviors, and helps produce a data-driven baseline of a business’ ‘normal’ activities, whether on premises or in the cloud. Essentially, it couples the best detection technologies and security expertise to seek out and eliminate threats before catastrophic damage occurs. Ransomware protection has been critical for businesses, especially during the pandemic. COVID-19 has proven to be a nightmare for assessing what ‘normal’ behavior looks like for organizations. Most companies lacked contingencies for adapting to the pandemic.
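As a toy illustration of what baselining 'normal' activity means, here is a simple z-score rule standing in for the far richer behavioral models a real MDR service would use; the telemetry numbers are invented:

```typescript
// Illustrative only: a toy z-score detector over a telemetry baseline,
// standing in for the far richer models an MDR service would use.
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function stddev(xs: number[]): number {
  const m = mean(xs);
  return Math.sqrt(xs.reduce((a, x) => a + (x - m) ** 2, 0) / xs.length);
}

// Baseline: files modified per hour by a host during "normal" operation.
const baseline = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2];

function isAnomalous(observed: number, threshold = 3): boolean {
  const z = (observed - mean(baseline)) / stddev(baseline);
  return z > threshold; // e.g. a ransomware burst of file encryptions
}

console.log(isAnomalous(3));   // false: within normal variation
console.log(isAnomalous(500)); // true: flag for investigation and response
```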


Low-code and no-code won't kill developer jobs, here's why

The fact is, low-code and no-code have been terms for probably 15 years, if not more, in one way or another. I think I remember trying to write my first website in a low-code FrontPage application, but what did I do? The second I did that, I had to jump into the code, the HTML code, to actually make it work. But we are at a different time, I think really a unique time, where we have a broad base of the workforce, the majority of the workforce now being the millennial generation or younger. So we have a younger workforce that actually grew up with technology and has used it day in and day out. We don't really think of it as, 'Oh, well, you had apps and phones,' but that familiarity with technology has given them a technical literacy that just comes with today's day and age. Now, if you accompany that with the fact that low-code platforms are much more powerful than they were before, you have a perfect union of people who just want to get stuff done and will figure out technology if you give it to them, and technology that is powerful enough, yet simple enough, to leverage to really innovate on. Now, there is something you mentioned, Bill, that is really important, which is that enterprises have to be bought into this.


Juniper: Managing the complexity of future networks

You’ll see more self-healing, self-configuring and provisioning. Day 2 operations will be seamless, and self-correcting work will all be done in software automatically. In many ways we have already achieved these capabilities with Mist and our Wi-Fi technology, which has a self-correcting mechanism. In the data center, operations will be driven by automation to eliminate errors, and to find and correct particular problems. Our focus on AI has been a real shot in the arm for the company and our customers. As we pull more and more telemetry from our routers and switches, automation and AI will drive a lot more functionality into our software. The data gathered by telemetry is king. You need that kind of data to gain insights into what’s going on and how devices and software are working. You find out how the network is operating with packet capturing and the state of the cloud network, and then look for deviations. In our case, [Juniper’s AI-powered virtual assistant] Marvis in 2019 learned of network problems and could solve 20% of them without intervention. Now that number is over 80% of problems solvable automatically, in part due to all of the intelligent telemetry it gathers.


What is Blockchain? We explain the technology of blockchains

The blockchain is a constantly growing list of information. That information is stored in blocks, and all these blocks are linked together. Each block is linked to the preceding and following blocks, and the information each block contains is secured by an algorithm using a cryptographic function called a hash. This makes the information effectively tamper-proof. It is a secure, open and public database. To illustrate how the blockchain works, the metaphor of a ledger distributed among many people is often used. It is a large book in which digital events are recorded. The fundamental thing here is that this book is "distributed", that is, shared between many different parties (nodes). It can only be updated through the consensus of the majority of the system's participants and, once entered, the information can never be deleted. The Bitcoin blockchain, for example, contains an accurate and verifiable record of all the transactions that have been made in its history. In other words, the authenticity of the blockchain is not verified by a third party, but by the consensus of the whole: it is the network of users itself that participates in it and verifies it.
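That hash-linking can be shown in a few lines. The following toy chain (omitting the consensus, signatures, and mining of real systems) demonstrates why tampering with one block breaks every later link:

```typescript
// A toy hash-linked chain showing why recorded blocks are tamper-evident.
// Real blockchains add consensus, signatures and proof-of-work on top.
import { createHash } from "crypto";

interface Block {
  index: number;
  data: string;
  prevHash: string;
  hash: string;
}

function hashBlock(index: number, data: string, prevHash: string): string {
  return createHash("sha256").update(`${index}|${data}|${prevHash}`).digest("hex");
}

function appendBlock(chain: Block[], data: string): void {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "0".repeat(64);
  const index = chain.length;
  chain.push({ index, data, prevHash, hash: hashBlock(index, data, prevHash) });
}

function isChainValid(chain: Block[]): boolean {
  return chain.every((b, i) =>
    b.hash === hashBlock(b.index, b.data, b.prevHash) &&
    (i === 0 || b.prevHash === chain[i - 1].hash));
}

const ledger: Block[] = [];
appendBlock(ledger, "Alice pays Bob 5");
appendBlock(ledger, "Bob pays Carol 2");
console.log(isChainValid(ledger)); // true

ledger[0].data = "Alice pays Bob 500"; // tampering breaks every later link
console.log(isChainValid(ledger));     // false
```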


Europe lays out plan for risk-based AI rules to boost trust and uptake

The planned law is intended to apply to any company selling an AI product or service into the EU, not just to EU-based companies and individuals — so, as with the EU’s data protection regime, it will be extraterritorial in scope. The overarching goal for EU lawmakers is to foster public trust in how AI is implemented to help boost uptake of the technology. Senior Commission officials talk about wanting to develop an “excellence ecosystem” that’s aligned with European values. “Today, we aim to make Europe world-class in the development of a secure, trustworthy and human-centered Artificial Intelligence, and the use of it,” said Commission EVP, Margrethe Vestager, announcing adoption of the proposal at a press conference. “On the one hand, our regulation addresses the human and societal risks associated with specific uses of AI. This is to create trust. On the other hand, our coordinated plan outlines the necessary steps that Member States should take to boost investments and innovation. To guarantee excellence. All this, to ensure that we strengthen the uptake of AI across Europe.”


Conversation about crossgen2

Crossgen2 is an exciting new platform addition and part of the .NET 6 release. It is a tool that enables generating and optimizing code in new ways. The crossgen2 project is a significant effort, and is the focus of multiple engineers. I thought it might be interesting to try a more conversational approach to exploring new features. ... Crossgen’s pedigree comes from the early .NET Framework days. Its implementation is tightly coupled with the runtime (it essentially is just the runtime and JIT attached to a PE file emitter). We are building a new version of Crossgen – Crossgen2 – which starts with a new code base architected to be a compiler that can perform analysis and optimizations not possible with the previous version. ... As the .NET Core project became more mature and we saw usage grow across multiple application scenarios, we realized that crossgen’s limitation of only really being able to produce native code of one flavor, with one set of characteristics, was going to be a big problem. For example, we might want to generate code with different characteristics for Windows desktop on one hand and Linux containers on the other. The need for that level of code-generation diversity is what motivated the project.


Machine Learning with ML.NET – NLP with BERT

Language is sequential data. Basically, you can observe it as a stream of words, where the meaning of each word depends on the words that came before it and the words that come after it. That is why computers have such a hard time understanding language: in order to understand one word, you need its context. Also, sometimes you need to provide a sequence of data (words) as the output as well. A good example to demonstrate this is the translation of English into Serbian. As the input to the algorithm we use a sequence of words, and for the output we need to provide a sequence as well. ... During the training process, the Encoder is supplied with word embeddings from the English language. Computers don’t understand words; they understand numbers and matrices (sets of numbers). That is why we convert words into some vector space, meaning we assign certain vectors (map them to some latent vector space) to each word in the language. These are word embeddings. There are many available word embeddings, like Word2Vec. However, the position of the word in the sentence is also important for the context.
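To make embeddings concrete, here is a toy sketch using cosine similarity as the measure of relatedness; the 3-dimensional vectors are made up:

```typescript
// Toy word embeddings: each word maps to a made-up 3-dimensional vector, and
// geometric closeness stands in for relatedness. Real embeddings such as
// Word2Vec use hundreds of dimensions learned from large corpora.
const embeddings: Record<string, number[]> = {
  king:  [0.9, 0.8, 0.1],
  queen: [0.9, 0.7, 0.9],
  apple: [0.1, 0.2, 0.9],
};

function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

console.log(cosineSimilarity(embeddings.king, embeddings.queen)); // ~0.83: related
console.log(cosineSimilarity(embeddings.king, embeddings.apple)); // ~0.30: less so

// Note: these are static embeddings, one vector per word regardless of its
// position in the sentence. Capturing positional context is exactly what
// BERT-style models add on top.
```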


How micro-segmentation creates an uphill battle for intruders

To determine just how effective micro-segmentation can be, Illumio conducted a red team exercise with Bishop Fox. The team was tasked with finding “crown jewel” assets in a test environment, and while they did not face a defensive blue team, they were pitted against increasingly tight micro-segmentation policies. The first and lowest-level policy tested was environmental separation. This is a fairly coarse-grained approach where workloads in different environments, such as production, testing, or development, can only connect with others in the same environment. It quickly became clear that even this simple level of separation could cause attackers to take at least three times as long to reach their target. This 300-percent increase in difficulty for the intruder meant defensive tools and security personnel had much more time to detect and investigate signs of unusual activity. The next level of micro-segmentation, application ringfencing, proved to be even more effective, creating a 450-percent increase in difficulty for the attacker. At this stage, only workloads associated with specific applications could talk to each other.
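A simplified sketch of those two policy levels, with invented workload names, might look like this:

```typescript
// A simplified policy check for the two segmentation levels described:
// environmental separation and application ringfencing. Names are illustrative.
interface Workload {
  id: string;
  environment: "production" | "testing" | "development";
  application: string;
}

function allowedEnvSeparation(src: Workload, dst: Workload): boolean {
  // Level 1: workloads may only talk within the same environment.
  return src.environment === dst.environment;
}

function allowedAppRingfence(src: Workload, dst: Workload): boolean {
  // Level 2: additionally, only workloads of the same application may talk.
  return allowedEnvSeparation(src, dst) && src.application === dst.application;
}

const web: Workload = { id: "web-1", environment: "production", application: "storefront" };
const db: Workload  = { id: "db-1",  environment: "production", application: "billing" };

console.log(allowedEnvSeparation(web, db)); // true: same environment
console.log(allowedAppRingfence(web, db));  // false: different applications
```

Each tighter policy level shrinks the set of connections an intruder can traverse, which is exactly why the red team's time-to-target grew.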


Quantum: It's still not clear what it’s good for

The entire quantum industry is "still finding its way to what applications are really useful," he said. "You tend to see this list of potential applications, a heralded era of quantum computing, but I don't think we really know," he said. The Qatalyst software from QCI focuses on the kinds of problems that are of perennial interest, generally in the category of optimization, particularly constrained optimization, where minimizing a given loss function or objective function is complicated by constraints on the variables, such as bounded values. ... "They are described at a high level as the traveling salesman problem, where you have multi-variate sort of outcomes," said Liscouski. "But it's supply-chain logistics, it's inventory management, it's scheduling, it's things that businesses do today that quantum can really accelerate the outcomes in the very near future." Such problems are "a very important use case," said Moulds. Quantum computers are "potentially good at narrowing the field in problem spaces, searching through large potential combinations in a wide variety of optimization problems," he said.
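For a sense of why these problems are hard, here is the traveling-salesman flavour of problem brute-forced on four made-up cities; exhaustive search grows factorially, which is exactly the combinatorial blow-up that quantum (and classical) optimization methods aim to tame:

```typescript
// Brute-force traveling salesman on 4 cities. Exhaustive search is O(n!),
// which is why heuristics, annealers and quantum approaches get attention.
// The distance matrix is invented for illustration.
const dist = [
  [0, 2, 9, 10],
  [2, 0, 6, 4],
  [9, 6, 0, 3],
  [10, 4, 3, 0],
];

function permutations(xs: number[]): number[][] {
  if (xs.length <= 1) return [xs];
  return xs.flatMap((x, i) =>
    permutations([...xs.slice(0, i), ...xs.slice(i + 1)]).map((p) => [x, ...p]));
}

function shortestTour(): { tour: number[]; length: number } {
  let best = { tour: [] as number[], length: Infinity };
  for (const p of permutations([1, 2, 3])) {  // city 0 is fixed as the start
    const tour = [0, ...p, 0];
    const length = tour.slice(1).reduce((s, c, i) => s + dist[tour[i]][c], 0);
    if (length < best.length) best = { tour, length };
  }
  return best;
}

console.log(shortestTour()); // { tour: [0, 1, 3, 2, 0], length: 18 }
```

At 4 cities there are only 6 tours to check; at 20 cities there are more than 10^17, which is the "narrowing the field" problem the quote describes.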


Zuzana Šochová on Becoming an Agile Leader

Agile at the organizational level is changing the DNA of organizations; it brings higher autonomy to creative, innovative, and collaborative teams that are better designed to deal with complexity and the unpredictability of VUCA challenges. It requires flexibility and quick responses to change. It breaks all the fundamental beliefs that classical management was built on, and creates a strong need for changing leadership. Dynamic structures with no fixed design are hard to manage the traditional way, and the growth of emergent leadership is inevitable. Agile leaders are catalyst and servant leaders; they are role models of a new way of working. They coach, mentor, and encourage others to become agile leaders as well. Being an agile leader is a journey, and agile leaders need to focus on helping other leaders around them grow to make agility as a whole sustainable. Having a critical mass of agile leadership is crucial for any agile environment; without it, we are only creating another process and adding terminology, and all we get is “fake agile,” not business results.



Quote for the day:

"Leaders need to strike a balance between action and patience." -- Doug Smith

Daily Tech Digest - April 22, 2021

CISA Orders Agencies to Mitigate Pulse Secure VPN Risks

CISA is ordering agencies to use the Pulse Connect Secure Integrity Tool to check the integrity of file systems and take further action as necessary. Ivanti developed the tool, which helps organizations determine if malicious activity is taking place. "CISA has determined that this exploitation of Pulse Connect Secure products poses an unacceptable risk to federal civilian executive branch agencies and requires emergency action," according to the emergency directive. "This determination is based on the current exploitation of these vulnerabilities by threat actors in external network environments, the likelihood of the vulnerabilities being exploited, the prevalence of the affected software in the federal enterprise, the high potential for a compromise of agency information systems, and the potential impact of a successful compromise." The Biden administration has been responding to a series of security incidents, including the SolarWinds supply chain attack, which led to follow-on attacks on nine government agencies and 100 companies and exploits of flaws in on-premises Microsoft Exchange email servers.


Why DevSecOps Should Strive for Effective Enforcement Measures

Applications today – especially in modern development environments – extensively use APIs to share and consume sensitive data, which are just as vulnerable and require dedicated surgical technology to make sure there is no token abuse, excessive utilization, or data theft using injections. Beyond API security, many services rely on integrating or serving bots and need to make a clear distinction between good bots and bots with malicious intent. For the sake of being accepted by AD&D, RASP is vulnerable to some attacks; denial of service is just one example. From a DevOps point of view, applying security enforcement is risky. It can affect the user experience or maybe even break the flow, leading to runtime errors. The software development lifecycle (SDLC) has many blind spots in security, especially in today’s hybrid, multi-cloud architecture. For this very reason, many technologies provide alerts, which is great, but there is some fatigue from tools that only provide visibility. Automated security testing and vulnerability scanners for web servers, operating systems, and even container images fall short on actual enforcement, making the developer take a few steps back and patch. When such alerts come en masse, it is much harder to prioritize and address them all.


The strange bedfellows of AI and ethics

There is a tendency to assume that computers cannot be biased – but that is not the case. AI-based systems learn from the data that they are fed. If we feed them the “wrong” data, we can inadvertently build in biases that we may not even notice. For example, historically, there have been more men than women in technology jobs. It is a very short step from that data to a position where a hiring algorithm learns that men are more likely to do a technology job, and then “decides” that men must be better than women in those jobs. The good news is that we can manage this. We can, and should, be aware of our own biases. However, we should also build diverse teams to work with AI, as a way of ensuring that we surface more of the inadvertent biases – the ones that we don’t even notice because they have become norms. It is not going to be enough to respond to developments in AI. We need to be proactive in setting up ethical safeguards to protect us all. A recent webcast from SAS Canada on AI and ethics recommends that organisations should develop a code of conduct around AI and foster AI literacy. They should also establish a diverse ethics committee to manage and oversee development and implementation processes.
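A deliberately crude sketch shows how skewed data becomes "learned" bias: a model that merely mirrors historical frequencies reproduces the historical imbalance as if it were signal. The numbers here are invented:

```typescript
// A deliberately crude demonstration of bias learned from skewed data:
// a "model" that does nothing but echo historical hire rates. Invented data.
const trainingData = [
  { gender: "male", hired: true }, { gender: "male", hired: true },
  { gender: "male", hired: true }, { gender: "male", hired: false },
  { gender: "female", hired: false }, { gender: "female", hired: true },
];

function hireRate(gender: string): number {
  const rows = trainingData.filter((r) => r.gender === gender);
  return rows.filter((r) => r.hired).length / rows.length;
}

// The "algorithm" inherits the historical imbalance as if it were signal:
console.log(hireRate("male"));   // 0.75
console.log(hireRate("female")); // 0.5 -- the skew, not ability, drives the gap
```

Real models are far more sophisticated, but the failure mode is the same: whatever regularities exist in the training data, legitimate or not, become the model's predictions.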


REvil Ransomware Gang Threatens Stolen Apple Blueprint Leak

The extortion threat was unveiled Tuesday, hours before Apple was scheduled to make a series of major new product announcements. REvil published a number of alleged blueprints for Apple devices, which it claimed to have stolen from Taiwanese manufacturer Quanta Computer, which builds computing devices for a number of vendors. "In order not to wait for the upcoming Apple presentations, today we, the REvil group, will provide data on the upcoming releases of the company so beloved by many," the REvil gang says in a post to its data leak site. "Tim Cook can say thank you Quanta," it adds, referring to Apple's CEO. REvil claims that its previous ransom demands have been rebuffed by Quanta. "From our side, a lot of time has been devoted to solving this problem. Quanta has made it clear to us that it does not care about the data of its customers and employees, thereby allowing the publication and sale of all data we have," REvil says. Quanta and Apple didn't immediately respond to a request for comment. REvil's data-leak site further lists Cisco, Dell, HP, Siemens, Sony and Toshiba as being among the other manufacturers with which Quanta works.


Five Habits Of Highly Successful COOs

The best COOs are effective at building trust with their CEO. This trust allows them to be brutally honest with the leader of their company and gives the endless ideas created by the CEO a filter. This is not No. 1 by accident. The foundation of any great CEO-COO relationship is trust, and all the successful COOs I’ve seen have a track record of building genuine trust with their CEOs and with prior teammates before climbing the ranks to second in command. This allows the CEO to confidently pass anything off of his/her plate to the COO so they can focus on the tasks that are the highest and best use of the CEO’s time. One of the most common key responsibilities of the COO is to attract, hire and retain high performers. The COO is basically the hub of the organization, and it’s critical they have their finger on the talent pulse. The best-in-class COOs are always hiring. They understand that hiring top talent is one of the most important functions of the company. In addition to hiring high performers, they also spend significant time developing their highest performers. It can be easy to focus your time and attention only on the lowest performers, but the most effective COOs take the time to continue developing the top 20% in addition to the rest of the team.


Advice for Aspiring Data Scientists

Some ideas for what to include in your portfolio: analyses, code gists, webapps, data documentation and blogs (+ README files!). You don’t need all of these by any means, but if I had to choose two, I’d choose a webapp and an accompanying blog post. A webapp is a great way to show your ability to link together different pieces of software and create something dynamic, hosted on the web. But why a blog? As I argued in my last post, communication is one of the most important aspects of your job as a data scientist, if not the most important. Written communication is especially vital, and even more so if your job is remote. A well-written blog post (with linked code) allows the reader to get a sense of how you communicate, code, and think. If they get good signal from this, they will want to talk with you. This matters because getting your resume looked at is the hardest step in the job search process, so if you can increase your chances of conversion here, you’ll be in a great place. You may now be wondering how to get inspiration for your portfolio. What about starting with a cool dataset you see referenced on Twitter or Kaggle? Are there any data quality issues like leakage, truncation, or missing data? How do they impact an analysis?


Cloud archiving: A perfect use case, but beware costs and egress issues

There are still issues that may inhibit the move to the cloud. While there are many examples of companies that want the move to shift spending from capital expenditure to operating expenditure, there are instances of organisations that want to maintain capital expenditure for accountancy reasons. And, says Betts, there are organisations that have pulled everything back from the cloud because it’s easier to control costs. Some companies have been reluctant to move to the cloud for off-site archiving because of a perceived lack of cloud skills – this may apply particularly to small and medium-sized enterprises (SMEs). But, as Betts points out, there’s still a need for skills if they’re going to implement an on-premise policy, so it’s not such a straightforward swap. SMEs may well lack some of these specialist skills too, and may find this particularly true when it comes to GDPR compliance. It is clear there are plenty of advantages to archiving in the cloud. By freeing CIOs from the pain of choosing a hardware medium for long-term storage, moving to the cloud offers greater flexibility.


A Reference Architecture for Fine-Grained Access Management on the Cloud

The key insight underpinning this architecture is the delegation of user authentication to a single service (the Access Controller) rather than placing that responsibility with each service to which the user may need access. This kind of federation is commonplace in the world of SaaS applications. Having a single service be responsible for authentication simplifies user provisioning and de-provisioning for application owners and accelerates application development. The Access Controller itself will typically integrate with an identity provider, such as Auth0 or Okta, for the actual authentication sequence, thus providing a useful abstraction across a wide array of providers and protocols. Ultimately, the identity provider guarantees non-repudiation of the user’s identity in the form of a signed SAML assertion, a JWT token, or an ephemeral certificate. This obviates the need to rely on a trusted subnet as a proxy for the user’s identity. It also allows configuring access policies down to the granularity of a single service, unlike VPNs, which permissively grant users access to all services on the network.
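A hedged sketch of the Access Controller's core move, verifying a signed token centrally instead of trusting network location, might look like this; it assumes the widely used jsonwebtoken package, and the claim names and grant model are invented:

```typescript
// Sketch of the Access Controller's core move: verify a signed token once,
// centrally, instead of trusting the network location of the caller.
// Uses the jsonwebtoken package; claim names and grants are illustrative.
import * as jwt from "jsonwebtoken";

const IDP_SIGNING_KEY = process.env.IDP_SIGNING_KEY ?? "dev-only-secret";

interface UserClaims {
  sub: string;          // user identity asserted by the identity provider
  services: string[];   // per-service grants, unlike a network-wide VPN
}

function authorize(token: string, service: string): UserClaims {
  // Signature verification gives non-repudiation of the user's identity.
  const claims = jwt.verify(token, IDP_SIGNING_KEY) as unknown as UserClaims;
  if (!claims.services.includes(service)) {
    throw new Error(`user ${claims.sub} is not granted access to ${service}`);
  }
  return claims;
}
```

The design point is that `authorize` is consulted per service, so revoking a grant narrows access immediately, without touching the network.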


Why Big Data is Crucial for Agricultural Growth

Big data technologies have significantly increased the amount of information modern farmers possess for enhancing the efficiency of agricultural production. But what’s even more important than collecting and analyzing data is the ability to pull the most important insights out of it. The large number of variables affecting crop yield creates a wide range of possibilities for interpretation. This includes data on crop health, growth uniformity, stage of growth, etc. Having all of this data automatically collected and analyzed in one online tool enables farmers to make the most accurate predictions of crop yields. Such tools can use different algorithms for assessing yield potential, taking into account weather conditions, historical yield data, and other necessary information. Based on yield forecasts, farmers can perform field activities in time to influence the yield, and plan its storage and sale. Ultimately, yield prediction enables growers to decide which crop to plant, where, and when, based on accurate analysis of historical and current data. Environmental threats and global climate change have a huge impact on the agricultural sphere.


Building Confidence with Data Resilience

The first step in any digital transformation journey is the data and the development of a foundational storage layer. Resilience starts with data, too. It is the fuel that drives the company, and it permeates every aspect of the technical infrastructure, from storage to AI, across the hybrid cloud, from the core data center to the edge. Lose data and you can lose your ability to function and, often, lose money. A recent study by KPMG found that cyber security risk will pose the greatest threat to a company’s growth over the next three years. The KPMG 2021 CEO Outlook Pulse Survey surveyed 500 CEOs in 11 markets around the world. Organizations like Pitney-Bowes, the University of California, San Francisco, and many others are living proof of the risks. But breaches tell only part of the story. According to a recent report by Harvard Business Review, the mean time it took businesses in 2019 to discover a cyberattack was 196 days. Cloud migrations only compound the challenge, increasing the risk by 51%, according to the report. The point is, for most organizations, the problem is not only losing data and vital corporate information, but also not realizing it for six months.



Quote for the day:

"Always remember that you are absolutely unique. Just like everyone else." -- Margaret Mead

Daily Tech Digest - April 21, 2021

UK’s IoT ‘security by design’ law will cover smartphones too

The government introduced a security code of practice for IoT device manufacturers back in 2018 — but the forthcoming legislation is intended to build on that with a set of legally binding requirements. A draft law was aired by ministers in 2019 — with the government focused on IoT devices, such as webcams and baby monitors, which have often been associated with the most egregious device security practices. Its plan now is for virtually all smart devices to be covered by legally binding security requirements, with the government pointing to research from consumer group “Which?” that found that a third of people kept their last phone for four years, while some brands only offer security updates for just over two years. The forthcoming legislation will require smartphone and device makers like Apple and Samsung to inform customers of the duration of time for which a device will receive software updates at the point of sale. It will also ban manufacturers from using universal default passwords (such as “password” or “admin”), which are often preset in a device’s factory settings and easily guessable — making them meaningless in security terms.


MLOps Vs Data Engineering: A Guide For The Perplexed

Data engineering involves designing and building pipelines to transform data into a format end users can understand (mainly data scientists). The pipelines collect data from different sources into a single warehouse. The data engineering job has been around for over a decade, ever since databases, SQL servers, and ETL burst onto the scene. But data engineering as we know it gained currency at the beginning of the last decade. Companies realised they were sitting on goldmines of data, and that software engineers, with the right tools, could leverage this data to drive business processes. Data engineering moved away from traditional ETL tools and developed new ones to handle swathes of data. Data engineering focuses on aspects such as data infrastructure, data warehousing, data mining, data crunching, metadata management, and data modelling. ... A survey by the International Data Corporation showed that most AI/ML projects don’t go into production, primarily because expectations are not well communicated to the business or because of a lack of skill in maintaining the production models.
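A minimal sketch of what "building pipelines to transform data" means in practice; the source, field names, and warehouse step are all invented for illustration:

```typescript
// A minimal extract-transform-load (ETL) sketch. Sources, field names and
// the "warehouse" step are invented; real pipelines read from APIs, logs or
// databases and write to an actual warehouse for data scientists to query.
interface RawEvent { user_id: string; ts: string; amount_cents: string; }
interface CleanEvent { userId: string; date: Date; amountUsd: number; }

function extract(): RawEvent[] {
  // In a real pipeline this would read from an API, log store or database.
  return [{ user_id: "u1", ts: "2021-04-20T10:00:00Z", amount_cents: "1250" }];
}

function transform(rows: RawEvent[]): CleanEvent[] {
  // Normalize names, types and units into an analyst-friendly shape.
  return rows.map((r) => ({
    userId: r.user_id,
    date: new Date(r.ts),
    amountUsd: Number(r.amount_cents) / 100,
  }));
}

function load(rows: CleanEvent[]): void {
  // Stand-in for writing warehouse-ready rows to storage.
  rows.forEach((r) => console.log(r));
}

load(transform(extract()));
```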


A fresh recipe for digital-driven work innovation

Some years ago, we examined technology and industry trends and quickly realized that the future would be automated, with RPA’s exciting promise of enabling more work to be done much faster and better - with less. We quickly realized that the key to working better involves strategically applying a combination of RPA and AI ingredients as a recipe to liberate the best from humans, technologies and data - and then unleash this combined potential across enterprise operations. We also realized that to address their actual goals or problems, at pace and on a greater scale, organizations needed to start by defining a human intelligence strategy. We’re talking about blending the best of human, AI, RPA and systems intelligence and data as a single capability, to then be conceived, conceptualized and delivered as ‘enhanced work’ solutions. We call it the ‘Connected Intelligence’ model, and this uniquely innovative approach releases major commercial value in months rather than years. This model works in the following interdependent ways. RPA runs smart software robots to provide a rapid processing function, performing joined-up, data-driven work at unmatched speed, accuracy and integrity - 24x7.


Moving from Collocated Management to Remote Leadership

Remote-first is different from remote-friendly in every way. A remote-first culture fosters individual freedom and empowers collaboration through actual inclusion and equity in all parts of the work process, while a remote-friendly approach finds ways (usually the bare minimum such as video conferencing or a chat tool) to create the illusion of inclusion and collaboration, while remote workers often feel excluded, not-trusted or left out. Remote workers feeling left out in hybrid teams is a familiar concept for many, but this emotion can also be felt when the team is fully remote and operates with a collocated mindset, i.e. a remote-friendly approach. Some organizations have begun using the term “hybrid” to describe an environment in which people have a mandate to go to an office from time to time. However, the location from which people work is an entirely different topic than the operational implications of building a strong remote-first culture that is inclusive of all people regardless of their location, whether it be in an office or not.


Explained: Why soft skills are important to excel in your career?

Coordination and leadership abilities play a vital role in a modern workplace. It is crucial to address the lack of soft skills in individuals via training programs adapted to suit different sectors and levels. It is imperative to develop interpersonal skills, so that workplaces remain relevant in the challenging market, especially in the insurance industry which relies on personal equity/face-to-face communication. Effective communication enables you to build relations with people and is the key to your success in this highly competitive sector. It also helps in time management. Customer service, persuasion, and negotiation skills are the strengths individuals require in the insurance sector. Dedication and consistency in work help in boosting the team's morale and efficiency. With technology leading the world today, it is crucial that you adapt your skills as per the requirements and needs of the hour. Communication is the key to success. Enthusiasm, honesty, and conviction must complement technology. Machines can only do so much; the rest is up to human skills.


Seeing the Bigger Picture With BizDevSecOps

IT staff are under mounting pressure to perform, and they may not have the correct elements in place to support elegant and profitable applications, said Ostrowski. When it comes to monitoring in particular, alert fatigue can especially hamper innovation attempts and make it difficult to know what to prioritize. Instead, Ostrowski advocates for letting tangible business outcomes, such as conversion rates and performance, drive development. To correlate the impact on revenue more directly, he encourages “giving business leaders a seat at table during the entire CI/CD pipeline.” One aspect of roping in business, security, development and operations is unified observability across the entire stack. But wouldn’t it be challenging to give business leaders a seat within a technical software delivery process? Well, this wouldn’t be the first time business has directed technological change. Ostrowski recalls the mid-2000s and the dawn of the mobile market. During this period, IT became consumerized for laypeople, with glossy apps and impressively intuitive experiences. He recalls a general loss of faith in IT staff, as workers often had “better tech at home than they did in the office.”


Securing vehicles from potential cybersecurity threats

Computers in cars are not a new development, but they are controlling more and more of what is happening with them and inside them. Vehicle control is now, for example, completely computerized, and attackers could take over the level 2 ADAS systems and consequently gain full control over a vehicle’s safety-related functionalities. “Today’s sophisticated connected vehicle architecture is inherently more vulnerable to cyber attacks. Connected vehicles can host up to 150 electronic control units and run on 100 million lines of code; tomorrow’s vehicles may contain up to 300 million lines of software code. Cyber attacks that exploit the increasing digitization of vehicles present a significant risk to manufacturers, vehicle owners, other drivers and pedestrians,” Meron noted. “Each OEM tries to come up with their own defense strategy, using the variety of tools available in the market to protect from different attack vectors. Eventually they all need to manage cybersecurity of the vehicle throughout its lifecycle, from the very first day the design process commences, through production and maintenance of the vehicle, until decommissioning.”


Cyber resilience: your last line of defence

By definition, cyber resilience measures an organisation’s strength in preparing for, operating through and recovering from an attack. Only a holistic security programme will assure the resilience of an organisation and that of its customers before, during and after adverse events. Quickly identifying, responding to and recovering from security incidents is key. To achieve this, cyber resilience must rest on people and processes, as well as a combination of technologies. When assessing their security posture, businesses should look for gaps in their security capabilities from a people, processes and technology perspective, and take steps to address these. For example, if staff lack security know-how, can this be fixed by hiring or developing dedicated security experts? And how can we use training to build enhanced security awareness throughout the organisation? Processes should be clearly defined to deliver the desired security outcomes and must be repeatable and measurable. For most organisations, pinpointing weaknesses and making improvements to their processes will be an iterative journey, which should be kept under constant review.


3 Factors That Should be Driving Your Digital Transformation

Digitalization projects are a prime opportunity to finally take up some long-overdue “deferred maintenance”: those pending initiatives you’ve had sitting on the shelf for months or years -- not because they aren’t important, but because they weren’t absolute priorities at the time and were consequently put off. Folding deferred maintenance projects into your bigger digital transformation effort is a great way to further maximize the value of that initiative. The more projects you can execute simultaneously, the bigger the benefits. Anyone can do a technical migration or upgrade, and in a vacuum, those smaller-scale endeavors won’t necessarily deliver the ROI you’d expect. But if you can combine multiple projects at once -- pairing a cloud migration with, for example, an OS/DB modernization project and the retirement of a legacy platform, all at the same time -- the greater the value you create, and the higher the ROI. Coalescing multiple projects into one also has other compounding benefits, from reduced testing and disruption to less downtime. All of that feeds into the project’s bottom line, making it more attractive to the business and valuable in the long run.


Nation-State Actor Linked to Pulse Secure Attacks

"Their primary goals are maintaining long-term access to networks, collecting credentials and stealing proprietary data," Charles Carmakal, senior vice president and CTO at FireEye Mandiant, says of the attackers. "We believe that multiple cyberespionage groups are using these exploits and tools, and there are some similarities between portions of this activity and a Chinese actor we call APT5." The attackers have been exploiting these vulnerabilities to compromise U.S. government agencies, critical infrastructure and private sector organizations, CISA says. FireEye adds the attacks are global, hitting a variety of government and private institutions. "The investigation shows ongoing attempts to exploit four issues: The substantial bulk of these issues involve three vulnerabilities that were patched in 2019 and 2020. Customers are strongly recommended to review the advisories and follow the guidance, including changing all passwords in the environment if impacted," Ivanti says.



Quote for the day:

"And the attitude of faith is the very opposite of clinging to belief, of holding on." -- Alan Watts

Daily Tech Digest - April 20, 2021

How to write a cyberthreat report executives can really use

Although there’s no single template for crafting a threat report, “it should look like whatever you think people will read," says deGrazia. "Senior managers get hit with lots and lots of paper, so whatever format it’s in, it has to get their attention.” CISOs also need to consider how often they want to generate these reports. Security leaders say the reports should come out on a regular schedule, whether weekly, as Stebila did, monthly, or quarterly. The best schedule is one that matches the organization’s own cultural tempo, Rawlins says, adding that CISOs could also create and distribute customized reports to different recipients on different schedules, based on the varying levels of threats and interest each party has. CISOs could, for example, share reports weekly with their CIOs but distribute them to the board only semi-annually. That regular schedule should not preclude sending out threat reports in response to urgent issues, security experts say. “You can’t ignore the fact that things come up, and come up quickly, and those things need to be communicated up the chain as quickly as possible,” deGrazia adds.


Consumer data protection is a high priority, but there’s still work to be done

“Most pertinently, it’s encouraging that consumer data protection is such a high priority for organizations, but there is clearly some work to be done in turning that priority into a reality in terms of what data is actually encrypted and at what points in the data lifecycle. It’s also apparent that organizations of all shapes and sizes are looking to adopt encryption for a range of new and cutting-edge use cases, which will no doubt continue to drive innovation in the industry.” “IT is tasked with deploying, tracking and managing encryption and security policy across on-premise, cloud, multi-cloud and hybrid environments, for an expanding array of use cases, and amidst widening threats. Encryption is essential for protecting company and customer data, but managing encryption and protecting the associated secret keys are rising pain points as organizations engage multiple cloud services for critical functions,” added Grimm. “Rising use of HSMs for encryption and key management shows that IT is starting to meet these challenges. Organizations will benefit from a growing ecosystem of integrated solutions for cloud security policy management ...”


The security impact of shadow IT

Shadow IT is also one of the reasons why strict compliance-based approaches to cyber security can only help you so far. If you are measuring patching of your internal systems as a security key performance indicator (KPI), for example, then you need to be conscious that if you have a 99% success rate at patching servers, an adversary will probably find that 1% of servers you have not patched. And if you have a 100% success rate at patching servers, you absolutely have to make sure that every server that exists is part of that measurement – if you have a server which is not enrolled in asset management and therefore not monitored in patch management processes, you could still be exposed and not be aware of it. We talk about the “advanced persistent threat” a lot in security, and it is easy to get hung up on the “advanced” part of that epithet. Although “advanced” is dangerous, what we should be most concerned about is “persistent”. You may have thousands of servers properly enrolled in your technical controls, fully-monitored and fully-patched – and one undocumented server which is not patched and not monitored.
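That "100% of what?" problem can be made concrete with a simple reconciliation between a network scan and the asset inventory; the host names are invented:

```typescript
// The "100% of what?" problem in code: a patching KPI is only meaningful
// against a complete inventory. A scan/inventory diff surfaces shadow IT.
// All host names are invented for illustration.
const assetInventory = new Set(["web-01", "web-02", "db-01"]);              // what we manage
const networkScan = new Set(["web-01", "web-02", "db-01", "legacy-07"]);    // what exists

const unmanaged = [...networkScan].filter((h) => !assetInventory.has(h));
console.log(unmanaged); // ["legacy-07"]: 100% patched by the KPI, 0% in reality

// A patching KPI computed over the inventory alone reports success while
// legacy-07 stays unenrolled, unpatched and invisible to the measurement.
```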


Business Process Automation at Scale Is Key to Customer and Employee Experience

“The electronic signature is often where folks start,” asserted Casey of DocuSign. “I think that’s wonderful, obviously. But we have also started to step back and think about the systems of agreement that businesses have as a whole.” ... “Sure, automation will always cut costs—but we want to consider the experience. That’s what’s durable,” said Casey. During COVID, short-term fixes on the ground were prioritized over long-term solutions with high-level, lasting impacts. Now, the tide is beginning to shift. The benefits of full-scale automation—like better customer experiences, business agility, increased productivity, and greater security—are clearer than ever before. But what does strategic end-to-end automation look like in practice? ... Automating at scale is both technical science and change management art. For instance, close to 50% of businesses today claim that they are prepared to invest in an automated, end-to-end contract management solution, but simply don’t have the tools or know-how to do it effectively. “The problem is that end-to-end automation requires a lot of technology,” said Koplowitz.


The clash over the hybrid workplace experience

To optimize the employee experience of their hybrid workforce, employers should focus on "digital parity" as well as employee "experience parity," according to IDC. Digital parity refers to the requirement that all workers have secure access to the resources required to do their jobs, no matter their preferred device or location (office/remote/in the field). Experience parity means a democratized workplace, where all employees have the opportunity to collaborate, learn, develop, innovate and succeed, the report said. ... "Businesses everywhere must place a greater priority on enhancing employee experiences, which in turn will drive higher productivity, collaboration and better customer outcomes," said Leon Gilbert, senior vice president and general manager, Digital Workplace Services, Unisys, in a statement. "Organizations that adapt to provide digital and experience parity will not only retain employees in a competitive marketplace but will also empower those employees to provide the best service possible to their organization's customers. Do it well and you drive engagement, productivity and adaptability as new workforce demands emerge."


TCP/IP stack vulnerabilities threaten IoT devices

The actual danger to which an organization is exposed differs based on which of the vulnerable stacks it’s using. The FreeBSD vulnerability is likely more widespread – it affects millions of IT networks, including Netflix and Yahoo, as well as traditional networking devices like firewalls and routers, according to the report – but is likely easier to fix. “Those are manageable systems – we should be able to update them,” said Forrester senior analyst Brian Kime. “[And] they should be prioritized for remediation, because they’re part of your network stack.” The same cannot be said, in many cases, of the real-time operating systems affected by Name:Wreck, since the standard issues that make securing IoT devices difficult remain in play here. The ability to patch and update firmware is still not a standard feature, and the OEMs of connected devices – which may be quite old, and may not have been designed to be Internet-facing in the first place – might not even be operating any more. In cases where those IoT devices are vulnerable, strong security has to start at the network layer, according to Hanselman. Monitoring the network directly for anomalous activity – which, again, can sometimes be difficult to detect in the case of a TCP/IP vulnerability – is a good start, but what’s really needed is techniques like DNS query protection.


The Four Fs of employee experience

To deliver an optimal employee experience (EX), we recommend focusing on four principles that we call the Four Fs. They are a set of heuristics inspired by the user-centric, iterative practice of design thinking, and they rest on the idea that your business goals, experiences, and technology are inseparable from one another and must be addressed in a unified, cross-company way. We refer to this approach as BXT (for business, experience, and technology). When applied to EX, the Four Fs unlock productivity and cut down on energy-sapping frustration stemming from internal systems and tools. They are the form, flow, feeling, and function of an employee’s work life. ... Employees can’t do their jobs well if they don’t understand what is being asked of them, the purpose of the work, or how they should prioritize their tasks. A firm we advised recently had received feedback from staff that the online training module for a new marketing curriculum it had developed was hard to follow and a bad experience overall. To address the problem, the company’s user experience team worked with PwC and a leading software firm to reimagine the employee learning interface.


Building a learning culture that drives business forward

We all think we have it. So we might say, “I’m a fast learner” or “I’m a slow learner” or “I learn in this way or that way.” But a lot of the underlying research—there are several strands of it—shows that people can build skills for learning new skills. We think of this as one of the most fundamental capabilities a person can develop: it makes you better at getting better at things, and better able to adapt to the changing environment we all face these days. This idea of learning as a skill in and of itself is a fundamental one, and one that we talk to a lot of our clients about and, frankly, a lot of our colleagues as well. Because they’re also curious. They want to learn. But they need to be taught. Back in school, you might have thought about this as study skills: how do I organize myself in order to get my schoolwork done? But there’s a much more sophisticated version of that for adult learners that I think we all need to invest in more. ... Learning to follow is listening before talking and learning how to be a contributor so that you can then lead. There are a few ways you can learn how to follow.


Concerns grow over digital threats faced from former employees

"A lot of companies fail to have clear policies or a checklist that employers use for post-employee separation. This is extremely important because failing to do so is going to involve a lot of things but the most important thing is that you want to make sure that the former employee or even a subcontractor that previously had access to the organization's technologies and systems is completely locked out," Guccione said in an interview. "It's going to avoid the risk of business disruption. It's going to avoid the risk of the leakage of intellectual property or trade secrets. It also mitigates legal risk because what you don't want is any exposure of or unauthorized access to sensitive data about the organization or its stakeholders. If a door is left open to a former member of the team and that person is disgruntled, you could have a real problem on your hands." ... In December, the Justice Department announced that a former Cisco worker was sentenced to two years in prison after he accessed the Cisco Systems cloud infrastructure that was hosted by Amazon Web Services and deleted 456 virtual machines for Cisco's Webex Teams application.


MLOps, An Insider’s Perspective: Interview With Nikhil Dhawan

Large tech firms have used data science and its various techniques to learn about consumer behaviour for a long time. They have optimised their recommendation engines, bundled products together, improved targeting of the right customers, increased basket sizes and so on. They had the budget to dedicate resources to research and to partnerships with academic institutes focused on statistical knowledge and theory. They also had a significant engineering function to build the infrastructure and tooling required to build on research outcomes. Smaller or business-focused firms don’t have this luxury. Any data science project carries a big task list, ranging from data acquisition and ingestion, through choosing initial algorithms, testing multiple variants (including tuning the model and its hyperparameters), and preparing the datasets for each experiment, to validating and comparing the outputs. Finally, once we get the best possible trained model, the engineering task is to deploy the model to score or predict on live data to improve business functions.
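To picture that task list end to end, here is a minimal sketch using scikit-learn as a stand-in (the interview names no specific tooling): several algorithm and hyperparameter variants are tuned on the same prepared dataset, their cross-validated scores are compared, and the winner is persisted for deployment. The dataset and the variants chosen are illustrative.

```python
import joblib
from sklearn.datasets import load_breast_cancer   # placeholder dataset
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)        # "data acquisition/ingestion"
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

experiments = {                                    # variants to tune and compare
    "logreg": (LogisticRegression(max_iter=5000), {"C": [0.1, 1.0, 10.0]}),
    "forest": (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300]}),
}

best_name, best_search = None, None
for name, (model, grid) in experiments.items():
    search = GridSearchCV(model, grid, cv=5).fit(X_train, y_train)
    print(f"{name}: cv score {search.best_score_:.3f}, params {search.best_params_}")
    if best_search is None or search.best_score_ > best_search.best_score_:
        best_name, best_search = name, search

print(f"held-out score for {best_name}: {best_search.score(X_test, y_test):.3f}")
joblib.dump(best_search.best_estimator_, "model.joblib")  # hand-off to deployment
```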



Quote for the day:

"A leader does not deserve the name unless he is willing occasionally to stand alone." -- Henry A. Kissinger

Daily Tech Digest - April 19, 2021

Time to Modernize Your Data Integration Framework

You need to be able to orchestrate the ebb and flow of data among multiple nodes, whether multiple sources, multiple targets, or multiple intermediate aggregation points. The data integration platform must also be cloud native today, meaning the integration capabilities are built on a platform stack designed and optimized for cloud deployment. This is crucial for scale and agility -- a clear advantage the cloud holds over on-premises deployments. Additionally, data management centers on trust. Trust is created through transparency and understanding, and modern data integration platforms give organizations holistic views of their enterprise data and deep, thorough lineage paths that show how critical data traces back to a trusted, primary source. Finally, we see modern data analytic platforms in the cloud dynamically, and even automatically, scaling to meet the increasing complexity and concurrency demands of the query executions involved in data integration. The new generation of data integration platforms also works at any scale, executing massive numbers of data pipelines that feed and govern the insatiable appetite for data in the analytic platforms.
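Two of those ideas, graph-based orchestration and lineage back to primary sources, can be sketched in a few lines. The sketch below implies no particular platform, and the node names are invented for illustration.

```python
from graphlib import TopologicalSorter  # stdlib topological sort, Python 3.9+

# Each node lists the upstream nodes it reads from.
GRAPH = {
    "orders_raw": set(), "customers_raw": set(),       # sources
    "orders_clean": {"orders_raw"},                     # intermediate aggregation
    "sales_mart": {"orders_clean", "customers_raw"},    # target
}

def lineage(node: str) -> set[str]:
    """All primary sources a node ultimately traces back to."""
    upstream = GRAPH[node]
    if not upstream:
        return {node}
    return set().union(*(lineage(u) for u in upstream))

# Execute nodes in a dependency-safe order, reporting lineage for each.
for node in TopologicalSorter(GRAPH).static_order():
    print(f"run {node}  (traces back to: {sorted(lineage(node))})")
```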


Will codeless test automation work for you?

While outsiders view testing as simple and straightforward, it's anything but. Until as recently as the 1980s, the dominant idea in testing was to do the same thing repeatedly and write down the results. For example, you could type 2+3 into a calculator and see 5 as a result. With this straightforward, linear test, there are no variables, loops or conditional statements. The test is so simple and repeatable, you don't even need a computer to run it. This approach is born from thinking akin to codeless test automation: repeat the same operation and get the same result each time for every build. The two primary methods to perform such testing are the record and playback method, and the command-driven test method. Record and playback tools run in the background and record everything; testers can then play back the recording later. Such tooling can also create verification points, to check the expectation that the answer field will become 5. Record and playback tools generally require no programming knowledge at all -- they just repeat exactly what the author did. It's also possible to express tests visually. Command-driven tests work with three elements: the command, any input values and the expected results.
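As a concrete illustration of those three elements, here is a minimal table-driven sketch in Python (the command set and test rows are invented): each row is just a command name, its input values, and the expected result, so adding a case requires no control flow at all.

```python
# Map command names to the operations they drive.
COMMANDS = {
    "add":      lambda a, b: a + b,
    "multiply": lambda a, b: a * b,
}

# Each row: command, input values, expected result -- the three elements.
TEST_TABLE = [
    ("add",      (2, 3), 5),   # the calculator example: 2+3 should show 5
    ("multiply", (2, 3), 6),
]

for command, inputs, expected in TEST_TABLE:
    actual = COMMANDS[command](*inputs)
    status = "PASS" if actual == expected else f"FAIL (got {actual})"
    print(f"{command}{inputs} -> expected {expected}: {status}")
```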


Ghost in the Shell: Will AI Ever Be Conscious?

It’s certainly possible that the scales are tipping in favor of those who believe AGI will be achieved sometime before the century is out. In 2013, Nick Bostrom of Oxford University and Vincent Mueller of the European Society for Cognitive Systems published a survey in Fundamental Issues of Artificial Intelligence that gauged the perception of experts in the AI field regarding the timeframe in which the technology could reach human-like levels. The report reveals “a view among experts that AI systems will probably (over 50%) reach overall human ability by 2040-50, and very likely (with 90% probability) by 2075.” Futurist Ray Kurzweil, the computer scientist behind music-synthesizer and text-to-speech technologies, also believes the singularity is fast approaching. Kurzweil is so confident in the speed of this development that he is literally betting on it: he has wagered Mitch Kapor $10,000 that a machine intelligence will be able to pass the Turing test, a challenge that determines whether a computer can trick a human judge into thinking it is human, by 2029.


Is your technology partner a speed boat or an oil tanker?

The opportunity here really cannot be overstated. It is there for the taking by organisations that are willing to approach technological transformation in a radically different way. This involves breaking away from monolithic technology platforms, obstructive governance procedures, and the eye-wateringly expensive delivery programmes so often facilitated by traditional large consulting firms. The truth is, you simply don’t need hundreds of people to drive significant change or digital transformation. What you do need is to adopt new technology approaches, rethink operating models and work with partners who are agile experts, who will fight for their clients' best interests and share their knowledge to upskill internal staff. Hand-picking a select group of top individuals to work in this way provides a multiplier of value compared with hiring greater numbers of less experienced staff. Of course, external partners must be able to deliver at the scale their clients require. But just as large organisations have to change in order to embrace the benefits of the digital age, consulting models too must adapt to offer the services their clients need at the value they deserve.


Best data migration practices for organizations

The internal IT team needs to work closely with the service provider to thoroughly understand and outline the project requirements and deliverables. This ensures that no aspect is overlooked and that both sides are up to speed on the security and regulatory compliance requirements. Not just the vendor, but the team members and all the tools used in the migration need to meet all the necessary certifications to carry out a government project. Of course, certain territories will have more stringent requirements than others. Finally, an effective transition or change management strategy will be important to complete the transition. Proper internal communications and comprehensive training for employees will help everyone involved understand what’s required of them, including grasping any new processes or protocols and avoiding productivity loss during the data migration. While the nitty-gritty of a public sector migration might be similar to a private company’s, a government data migration can be a much longer and more unwieldy process, especially given the vast number of people and the copious amounts of sensitive data involved.


Will AI dominate in 2021? A Big Question

There is no denying that technologies are captivating us with their innovations and gadgets. From artificial intelligence and machine learning to IoT, big data, virtual and augmented reality, blockchain, and 5G, each seems poised to reshape the world sooner than expected. Focusing on artificial intelligence, the technology has expanded its grip on our lives without our even realising it. During the pandemic, IT experts kept working from home and the industry kept producing smart ideas and AI-driven innovations. Artificial intelligence is becoming the centre of our new normal, and it will drive the other nascent technologies toward success. Soon, AI will be the core of automated and robotic operations. Companies are adopting artificial intelligence rapidly, and it is making its way into several sectors. 2020 saw this deployment on a wider scale: even with AI experts working from home, progress in the field did not stop.


The promise of the fourth industrial revolution

There are some underlying trends in the following vignettes. The internet of things and related technologies are in early use in smart cities and other infrastructure applications, such as monitoring warehouses, or components of them, such as elevators. These projects show clear returns on investment and benefits. For instance, smart streetlights can make residents’ lives better by improving public safety, optimizing the flow of traffic on city streets, and enhancing energy efficiency. Such outcomes are accompanied by data that’s measurable, even if the social changes are not—such as reducing workers’ frustration from spending less time waiting for an office elevator. Early adoption is also found in uses in which the harder technical or social problems are secondary or, at least, make fewer people nervous. While cybersecurity and data privacy remain important for systems that control water treatment plants, for example, such applications don’t spook people with concerns about personal surveillance. Each example has a strong connectivity component, too. None of the results come from “one sensor reported this”—it’s all about connecting the dots.


How Hundred-Year-Old Enterprises Improve IT Ops using Data and AIOps

Sam Chatman, VP of IT Ops at OneMain Financial, explains the impact of leveraging AIOps: “Being able to understand what is released, when it’s released, and the potential impacts of that release. We are overcoming alert fatigue, and BigPanda will be our Watson of the Enterprise Monitoring Center (EMC) by automating alerts, opening incident tickets, and identifying those actions to improve our mean time to recovery. This helps us keep our systems up when our users and customers need them to be.” For other organizations, it might help to visualize what naturally happens to IT operations’ monitoring programs over time. Every time systems go down and IT gets thrown under the bus for a major incident, teams add new monitoring systems and alerts to improve their response times. As new multicloud, database, and microservice technologies emerged, they added even more monitoring tools and observability capabilities. Having more operational data and alerts is a good first step, but alert fatigue kicks in when tier-one support teams respond and must make sense of dozens to thousands of alerts.
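One idea behind such tooling can be sketched briefly (BigPanda's actual correlation logic is its own; this is only an assumption-laden illustration): compress a raw alert stream into incidents by grouping alerts that share a service and arrive within a short window, so tier-one teams triage a handful of incidents instead of thousands of alerts.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # illustrative grouping window

def correlate(alerts: list[dict]) -> list[list[dict]]:
    """alerts: dicts with 'service' and 'time'; returns alerts grouped into incidents."""
    incidents: list[list[dict]] = []
    open_incident: dict[str, list[dict]] = {}   # service -> its current incident
    for alert in sorted(alerts, key=lambda a: a["time"]):
        current = open_incident.get(alert["service"])
        if current and alert["time"] - current[-1]["time"] <= WINDOW:
            current.append(alert)               # same burst: fold into incident
        else:
            incident = [alert]                  # new burst: open a new incident
            incidents.append(incident)
            open_incident[alert["service"]] = incident
    return incidents

raw = [
    {"service": "checkout-db", "time": datetime(2021, 4, 19, 9, 0)},
    {"service": "checkout-db", "time": datetime(2021, 4, 19, 9, 2)},
    {"service": "checkout-db", "time": datetime(2021, 4, 19, 9, 30)},
]
print(f"{len(raw)} alerts -> {len(correlate(raw))} incidents")  # 3 -> 2
```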


A perfect storm: Why graphics cards cost so much now

Demand for gaming hardware blew up during the pandemic, with everyone bored and stuck at home. In the early days of the lockdowns in the United States and China, Nintendo’s awesome Switch console became red-hot. Even replacement controllers and some games became hard to find. ... Beyond the AMD-specific TSMC logjam, the chip industry in general has been suffering from supply woes. Even automakers and Samsung have warned that they’re struggling to keep up with demand. We’ve heard whispers that the components used to manufacture chips—from the GDDR6 memory used in modern GPUs to the substrate material fundamentally used to construct chips—have been in short supply as well. Seemingly every industry is seeing vast demand for chips of all sorts right now. ... High demand and supply shortages are the perfect recipe for folks looking to flip graphics cards and make a quick buck. The second they hit the streets, current-generation GPUs were set upon by “entrepreneurs” using bots to buy up stock faster than humans can, then selling their ill-gotten wares at a massive markup on sites like eBay, StockX, and Craigslist.


How to sharpen machine learning with smarter management of edge cases

Production is when AI models prove their value, and as AI use spreads, it becomes more important for businesses to be able to scale up model production to remain competitive. But as Shlomo notes, scaling production is exceedingly difficult, as this is when AI projects move from the theoretical to the practical and have to prove their value. “While algorithms are deterministic and expected to have known results, real-world scenarios are not,” asserts Shlomo. “No matter how well we define our algorithms and rules, once our AI system starts to work with the real world, a long tail of edge cases will start exposing the definition holes in the rules, holes that translate into ambiguous interpretations of the data and lead to inconsistent modeling.” That’s much of the reason why more than 90% of C-suite executives at leading enterprises are investing in AI, but fewer than 15% have deployed AI for widespread production. Part of what makes scaling so difficult is the sheer number of factors for each model to consider. This is where human-in-the-loop (HITL) approaches enable faster, more efficient scaling: the ML model can begin with a small, specific task, then scale to more use cases and situations.
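A minimal sketch of the human-in-the-loop pattern referenced above: the model handles what it is confident about, and ambiguous edge cases are routed to a human whose labels can later be folded back into training. The threshold and the stand-in model below are illustrative assumptions, not details from the article.

```python
import random

CONFIDENCE_THRESHOLD = 0.90   # illustrative cut-off for automation

def predict_with_confidence(item):
    """Stand-in for a real model: returns (label, confidence)."""
    confidence = random.random()
    return ("positive" if confidence >= 0.5 else "negative", confidence)

def handle(item, review_queue: list) -> str:
    """Automate the confident cases; route edge cases to a human."""
    label, confidence = predict_with_confidence(item)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                    # automated path
    review_queue.append(item)           # edge case: defer to a human, whose
    return "PENDING_HUMAN_REVIEW"       # label can feed back into retraining

queue: list = []
results = [handle(i, queue) for i in range(20)]
print(f"{len(queue)} of 20 items routed to human review")
```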



Quote for the day:

"A true dreamer is one who knows how to navigate in the dark" -- John Paul Warren