Daily Tech Digest - April 26, 2020

Can computers become conscious?


AI-hard problems are hypothesized to include general computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. As it stands, AI-hard problems cannot be solved with current computer technology alone. They still require human intervention, and probably always will. If that trend holds, AI will not become self-aware, and the doomsday conspiracy theorists are wrong: AI will not become the dominant form of intelligence on Earth, with computers and robots taking over the world. Still, there’s nothing wrong with taking a few precautionary measures to ensure that future superintelligent machines remain under human control, even if a robot uprising seems implausible. Nonetheless, there are those who believe that machines have minds, or soon will, which is why scientists have devised a number of experiments to probe the limits of artificial intelligence.



Custom Response Caching Using NCache in ASP.NET Core

Response caching enables you to cache the server responses to a request so that subsequent requests can be served from the cache. With this type of caching, you typically specify cache-related headers in HTTP responses to tell clients whether and how to cache responses. You can take advantage of the Cache-Control header to set browser caching policies both in requests that originate from clients and in responses that come from the server. For example, cache-control: max-age=90 means the server response is valid for a period of 90 seconds; once that period elapses, the web browser should request a new version of the data. The key benefits of response caching are reduced latency and network traffic and improved responsiveness, and hence better performance. Used properly, response caching lowers bandwidth requirements and improves the application’s performance. It is best suited to items that are static and have minimal chance of being modified, such as CSS and JavaScript files.
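To make the max-age arithmetic concrete, here is a minimal, framework-agnostic sketch in Python (the function names are hypothetical; NCache and ASP.NET Core expose their own APIs for this):

```python
import time
from email.utils import formatdate

def make_cache_headers(max_age_seconds):
    """Build response headers instructing clients to cache the payload."""
    return {
        "Cache-Control": f"max-age={max_age_seconds}",
        "Date": formatdate(usegmt=True),
    }

def is_fresh(fetched_at, max_age_seconds, now=None):
    """A cached response stays fresh until max-age seconds have elapsed."""
    now = time.time() if now is None else now
    return (now - fetched_at) < max_age_seconds
```

With cache-control: max-age=90, a response fetched at t=0 is served from cache until t=90, after which the browser requests a new version.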


It's a great time to tackle core IT upgrades


There are hundreds of thousands of security patches out there, but Vulcan will tell you that a few of the important ones will eliminate many related security issues. A little work now goes a long way -- if you know what to do. With consumers and business buyers stuck at home, it is the e-commerce side of a business that is super important. During the important fourth-quarter holiday sales season, companies won't risk making any changes to their e-commerce systems. Now things are reversed: it is the main IT systems that can be upgraded and patched with less risk of downtime problems. But don't mess with the e-commerce systems. Vulcan's platform is designed to scale and to interface with all the standard IT tools. It makes heavy use of machine learning as well as human intelligence -- IT experts who can analyze new security threats and solutions. And sometimes a patch isn't needed and a simple workaround will eliminate dozens of related issues, says Bar-Dayan. Vulcan's reports identify the top vulnerabilities and the detailed remediation steps necessary -- a huge time-saver for cybersecurity teams.


Shadow Broker leaked NSA files point to unknown APT group


Juan Guerrero-Saade, a security researcher and adjunct professor at Johns Hopkins University’s School of Advanced International Studies, wasn’t convinced, arguing that misleading files make their way onto VirusTotal all the time. He realised that the file in question was a 15MB memory dump of a McAfee installer. In short, it’s a red herring. Investigating godown.dll further, he found that the file was dropped by a larger multi-stage infection framework. The tools and techniques that the framework used indicated a unique cluster of activity, pointing to an advanced persistent threat group that wasn’t publicly known until now. Although it’s difficult to directly attribute the attack to a specific actor, Guerrero-Saade noted that some of the resources in the files mention Farsi (Persian), which is native to countries including Iran. The name used in the root debug path, c:/khzer, apparently means ‘to survey or monitor’ according to friends of his who are acquainted with the language, and so he decided to call the attack group Nazar, after the eye-shaped amulet supposed to protect people against the evil eye in many countries across the Middle East.


The true costs incurred by businesses for technology downtime

The research, conducted by Vanson Bourne, surveyed 1,000 senior IT decision-makers and 2,000 end users at organizations with at least 1,500 employees across the U.S., the U.K., France, and Germany. It shows that employees lose an average of 28 minutes every time they have an IT-related problem. The report also shows that IT decision-makers believe employees experience approximately two IT issues per week, wasting nearly 50 hours a year. However, as only just over half of IT issues are reported, the real figure is likely nearly double that – close to 100 hours (two work weeks) a year. This has led to a vicious cycle: employees try to fix IT problems on their own, engaging less with the IT department, which in turn loses visibility into how the technology is being used. There is a major disconnect between IT departments and employees: 84% of employees believe their organizations should be doing more to improve the digital experience at work, yet a staggering 90% of IT leaders believe workers are satisfied with technology in the workplace – highlighting the discrepancy between perception and reality of the digital employee experience.
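The survey's figures line up with simple arithmetic (a sketch; the doubling assumes, as the report does, that roughly half of issues go unreported):

```python
def hours_lost_per_year(minutes_per_issue, issues_per_week, weeks_per_year=52):
    """Annual productivity loss from recurring IT issues, in hours."""
    return minutes_per_issue * issues_per_week * weeks_per_year / 60

reported = hours_lost_per_year(28, 2)   # about 48.5 hours -- the "nearly 50" figure
actual = reported * 2                   # roughly half of issues go unreported
```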


Judges and lawyers learn Zoom rules in real time during coronavirus crisis


Ines Swaney, a certified Spanish interpreter, said her first experience with Zoom was a three-way conversation during a legal visit between an attorney in one city, the attorney's incarcerated client joining the conversation from jail in another city, and herself serving as an interpreter in a third city. One drawback with the Zoom platform is that it forces an interpreter to use consecutive interpreting instead of simultaneous interpreting, which is the preferred approach. Swaney said that online platforms also need to allow private conversations between an attorney and the judge, and among an attorney, client and interpreter who may need to speak privately for a brief period of time during a hearing. Tony Sirna, legal strategist and customer success manager at Verbit, said there are serious considerations the courts are working through, particularly ensuring due process with remote proceedings, technology interruptions, unauthorized recordings, exhibits, and the impact virtual appearances will have on defendants, for example.  Sirna said in addition to standardizing software and recording technology, courts need to agree on procedural best practices, such as how exhibits and stipulations will be handled remotely.


Text ‘bomb’ crashes iPhones, iPads, Macs and Apple Watches – what you need to know

The problem appears to exist in how the latest shipping versions of Apple’s operating systems handle a Unicode symbol representing specific characters written in Sindhi, an official language in part of Pakistan. It occurs most irritatingly when your device attempts to display a message notification. If you have configured your iPhone, for instance, to display a new-message notification that includes a preview of the message, iOS fails to render the characters properly and crashes with unpredictable results. You may find the only way around the problem is to completely reboot your device – but there is always the risk that you will receive a new boobytrapped notification. The problem can also manifest itself inside apps. For instance, some mischievous Twitter users have tweeted the offending characters, causing other users’ devices to crash. Android users, meanwhile, are unaffected – and can watch the chaos with bemusement. Some of the earliest reports suggested that for the attack to work, the Sindhi characters had to be used in conjunction with an Italian flag emoji.


What Is Agile Enterprise Architecture? Just Enough, Just in Time

Agile is based on the concept of “just in time.” You can see this in many of the agile practices, especially in DevOps. User stories are created when they are needed and not before, and releases happen when there is appropriate value in releasing, not before and not after. Additionally, each iteration has a commitment that the EA team meets on time. What EA has been missing is an answer to the question “what exactly is getting delivered?” This is where we introduce the phrase “just enough, just in time,” because stakeholders don’t simply want it in time, they also want just enough of it — regardless of what it is. This is especially important when communicating with non-EA professionals. In the past, enterprise architects have focused on delivering all of the EA assets to stakeholders and demonstrating the technical wizardry required to build the actual architecture. ... Create a marketing-style campaign to focus on EA initiatives, gathering and describing only what is required to satisfy the goal of the campaign.


Safe shopping: Your best options for NFC and contactless payments


Near-Field Communication, or NFC, is a technology built into many modern families of mobile devices, such as the iPhone, the Samsung Galaxy, the Google Pixel, and many other Android smartphones. NFC, introduced in 2002, allows contactless data transfer between mobile devices and can emulate a credit card for payments at POS terminals in retail stores. NFC lets the user pass their smartphone over a payment terminal at a retailer to complete the purchase, provided that a supported "e-Wallet" platform is used. Keep in mind, however, that NFC still requires you to get relatively close to the payment terminal and the person running it, and may even require you to physically interact with a keypad or virtual keypad/screen to initiate a transaction -- so wear gloves or have the employee initiate the transaction on your behalf, and if you have to touch the terminal, do not touch your face, and wash your hands immediately afterward. Be sure to maintain a safe distance when using it, or shop where there is a plexiglass barrier between you and the retail employee.


Go as a Scripting Language

Go's growing adoption as a programming language that can be used to create high-performance networked and concurrent systems has been fueling developer interest in its use as a scripting language. While Go is not currently ready "out of the box" to be used as a replacement for Bash or Python, this can be done with a little effort. As Codelang's Elton Minetto explained, Go has considerable appeal as a scripting language, including its power and simplicity, support for goroutines, and more. Google software engineer Eyal Posener adds more reasons to adopt Go as a scripting language, such as the availability of a rich set of libraries and the language's terseness, which makes maintenance easier. ... Being able to use the same language for day-to-day tasks and less frequent scripting tasks would greatly improve efficiency. Go is also a strongly typed language, notes Cloudflare engineer Ignat Korchagin, which can help make Go scripts more reliable and less prone to runtime failure due to trivial errors such as typos.



Quote for the day:


"A leader is one who sees more than others see and who sees farther than others see and who sees before others see." -- Leroy Eimes


Daily Tech Digest - April 25, 2020

A pharming attack tries to redirect a website's traffic to a fake website controlled by the attacker, usually for the purpose of collecting sensitive information from victims or installing malware on their machines. Attackers tend to focus on creating look-alike ecommerce and digital banking websites to harvest credentials and payment card information. These attacks either manipulate information on the victim’s machine or compromise the DNS server to reroute traffic, the latter of which is much harder for users to defend against. Though they share similar goals, pharming uses a different method from phishing. “Pharming attacks are focused on manipulating a system, rather than tricking individuals into going to a dangerous website,” explains David Emm, principal security researcher at Kaspersky. “When either a phishing or pharming attack is completed by a criminal, they have the same driving factor to get victims onto a corrupt location, but the mechanisms in which this is undertaken are different.” Pharming attacks involve redirecting user requests by manipulating the Domain Name System (DNS) and rerouting the target from its intended IP address to one controlled by the attacker. This can be done in two ways.
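The rerouting mechanism can be modeled with a toy resolver table in Python (illustrative only: the hostname and addresses below are made up, and real defenses rely on DNSSEC, TLS certificate checks, and similar controls rather than pinned IPs):

```python
# Toy resolver tables standing in for DNS responses. In a pharming attack the
# attacker alters the mapping (via the hosts file or a compromised DNS server)
# so a legitimate name resolves to an address the attacker controls.
LEGITIMATE = {"bank.example": "203.0.113.10"}
POISONED = {"bank.example": "198.51.100.66"}  # attacker-controlled address

def resolve(name, table):
    """Stand-in for a DNS lookup against the given resolver state."""
    return table.get(name)

def looks_pharmed(name, resolved_ip, known_good=LEGITIMATE):
    """Flag a response whose address differs from a pinned, known-good one."""
    expected = known_good.get(name)
    return expected is not None and resolved_ip != expected
```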


Technology, Financial Inclusion, and Banking in Frontier Markets

The lack of local knowledge of emerging and frontier markets can make it exceptionally difficult to serve those with limited infrastructure in the right way. A strong understanding of local financial processes and more complex environments is vital to providing financial services in hard-to-reach territories. It also helps to build trust and relationships with key organizations in the region. The relationship becomes mutually reinforcing when financial inclusion increases and we get more data on people within the market. As consumer behaviors and markets become better understood, more players are willing to serve them, and we are able to reach more people with financial services. When the two complement each other well, we can make a real difference in improving access to these services.


Software Testing: The Comeback Kid of the 2020s

Ultimately, developers don’t have the time or desire to keep these tests current over the long term. Unit testing has been a best practice for more than 20 years, yet despite waves of unit test automation tools (including one created by Alberto Savoia not long before he declared testing dead), unit testing remains a thorn in developers’ sides. Does that mean we give up the benefits of unit testing altogether? Not necessarily. In order to take on unit testing per se, testers would need to understand the developers’ code as well as write their own code. That’s not going to happen. But you could have testers compensate for lost unit test coverage through resilient tests they can create and control. Professional testers recognize that designing and maintaining tests is their primary job and that they are ultimately evaluated by the success and effectiveness of the test suite. Let’s be honest, who’s more likely to keep tests current: the developers who are pressured to deliver more code faster, or the testers who are rewarded for finding major issues (or blamed for overlooking them)?


Checking AI bias is a job for the humans

Machine learning models are only as smart as the datasets that feed them, and those datasets are limited by the people shaping them. This could lead, as one Guardian editorial laments, to machines making our same mistakes, just more quickly: “The promise of AI is that it will imbue machines with the ability to spot patterns from data, and make decisions faster and better than humans do. What happens if they make worse decisions faster?” Complicating matters further, our own errors and biases are, in turn, shaped by machine learning models. As Manjunath Bhat has written, “People consume facts in the form of data. However, data can be mutated, transformed, and altered—all in the name of making it easy to consume. We have no option but to live within the confines of a highly contextualized view of the world.” We’re not seeing data clearly, in other words. Our biases shape the models we feed into machine learning models that, in turn, shape the data available for us to consume and interpret.


Starbleed vulnerability: Attackers can gain control over FPGAs

Attackers can gain complete control over the chips and their functionalities via the vulnerability. Since the bug is integrated into the hardware, the security risk can only be removed by replacing the chips. The manufacturer of the FPGAs has been informed by the researchers and has already reacted. FPGA chips can be found in many safety-critical applications, from cloud data centers and mobile phone base stations to encrypted USB sticks and industrial control systems. Their decisive advantage lies in their reprogrammability compared to conventional hardware chips with their fixed functionalities. This reprogrammability is possible because the basic components of FPGAs and their interconnections can be freely programmed. In contrast, conventional computer chips are hard-wired and, therefore, dedicated to a single purpose. The linchpin of FPGAs is the bitstream, a file that is used to program the FPGA. In order to protect it adequately against attacks, the bitstream is secured by encryption methods. Dr. Amir Moradi and Maik Ender from the Horst Görtz Institute, in cooperation with Professor Christof Paar from the Max Planck Institute in Bochum, Germany, succeeded in decrypting this protected bitstream, gaining access to the file content and modifying it.


How To Secure 5G — And The Internet Of Things Too

“From a cybersecurity standpoint, things haven’t really changed that much,” he said, “so, the challenges remain the same.” As he told PYMNTS, the key challenge is to make sure that the systems and devices are better than reasonably secure before they go on the 5G network in the first place. That challenge is intensifying as 4G gets ready to give way to 5G. Adding devices boosts vulnerability, he said. Each one of those devices represents a possible point of attack for hackers and fraudsters. There are hundreds of millions of devices now that can, conceivably, be compromised, in some way — and there will be billions of devices in the future. The challenges of cybersecurity, he said, are the same whether from the standpoint of a manufacturer building an Internet of Things (IoT) device or from a healthcare company that is building devices that will be used by providers or a telecom company building network equipment. “The key question,” Knudsen said, “is how do you build that system or device in a way that minimizes risk?”


Blockchain Revolutionizing Banking and Financial Markets


If a change is to be made in a particular block, it is not rewritten. Instead, a new block is created which contains the cryptographic hash of the previous block, the amended data, and the timestamp. Hence, it is a non-destructive way to track data changes over time. In addition, the blockchain is distributed over a large network of computers and is decentralized, which reduces the scope for tampering with data. Before a block is added to the blockchain, each person maintaining a ledger has to solve a special kind of math problem created by a cryptographic hash function. Whoever solves the hash first gets to add the block to the chain. Blockchains can also be private, public, or even hybrid private-public. Hence, blockchain can literally revolutionize the way we access, verify and transact our data with one another. ... Blockchain has come up with an effective peer-to-peer solution for lenders and borrowers without any involvement of third parties. A Spanish bank launched the first crypto-loan service in 2018. Such loans are fast (taking less than 48 hours), have much cheaper operational costs, and are more secure and transparent.
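The append-only mechanics described above (a new block carrying the previous block's hash, the amended data, and a timestamp) can be sketched in a few lines of Python. This is a teaching toy, not a real ledger, and it omits the proof-of-work puzzle entirely:

```python
import hashlib
import json

def make_block(index, timestamp, data, prev_hash):
    """Each block commits to its predecessor's hash, so tampering is detectable."""
    block = {"index": index, "timestamp": timestamp,
             "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def block_ok(block):
    """Recompute the hash to detect in-place edits."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return digest == block["hash"]

def chain_valid(chain):
    """Every block must be intact and linked to its predecessor's hash."""
    return all(block_ok(b) for b in chain) and all(
        chain[i]["prev_hash"] == chain[i - 1]["hash"] for i in range(1, len(chain)))

genesis = make_block(0, "2020-04-26T00:00:00Z", "original data", "0" * 64)
# A change is appended as a new block, not rewritten in place:
amended = make_block(1, "2020-04-26T01:00:00Z", "amended data", genesis["hash"])
```

Editing an existing block in place changes its recomputed hash and invalidates the chain, which is exactly the tamper-evidence the paragraph describes.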


How to become a data scientist without getting a Ph.D

There are data scientists at every company. Instituting a mentor program, for example, combined with a continuous learning curriculum can greatly improve data fluency across an organization. And this is no longer an option — it's an imperative. Data is king in business. Data science is a means by which you can use data to make business decisions. Without the basic data science skills, employees can't make these important decisions.  As your team becomes more comfortable with the language of data, they'll be more comfortable bringing data to bear on important business decisions. It will become clear that some team members are more comfortable using data skills than others are. Encourage the proficient ones to mentor others. Even at DataCamp, where data science is our business, some people don't work with data continuously. When they need help on a complex problem, they pair up with those who do.  It's all about shared tools, skills and responsibilities — they can dramatically improve communication and understanding between employees, which ultimately improves workplace culture.


Multi-Cloud Cost Optimization For The Enterprise

How much will the public cloud cost? You should begin your cloud cost management strategy by looking at the public cloud providers’ billing models—just like any other IT service, the public cloud can introduce unexpected charges. How much storage, CPU and memory do your applications currently require? Which cloud instances would meet those requirements? Then it’s a question of estimating how much those applications would cost in the cloud and comparing these figures to how much it currently costs you to run them on-premises. If you plan to use multiple public cloud providers, integration and other factors can lead to unexpected fees—plan application deployments to see where you might be liable for extra costs. Initially, it seems that most vendors offer similar packages and prices—when you examine them in detail, however, one vendor may have a much lower price for certain types of workloads. Understand your business requirements before committing to a cloud vendor, and avoid vendor lock-in.
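A first-pass estimate boils down to multiplying instance counts by hourly prices and comparing against the amortized on-premises figure. A sketch with made-up numbers (substitute your own vendor quotes and internal cost data):

```python
def monthly_cloud_cost(instances, price_per_instance_hour, hours_per_month=730):
    """Rough monthly compute cost for always-on instances. Excludes storage,
    egress and support fees, which vary by vendor and can dominate."""
    return instances * price_per_instance_hour * hours_per_month

cloud = monthly_cloud_cost(instances=4, price_per_instance_hour=0.10)
on_prem_monthly = 450.0   # hypothetical: amortized hardware + power + admin
cheaper_in_cloud = cloud < on_prem_monthly
```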


5 ways to empower remote development teams


QA teams that have never tested remotely must surmount technical, process-oriented and cultural challenges. Issues include how to collaborate virtually, procure off-site resources and manage asynchronous work schedules. Adjustments to workplace culture can help just as much as -- if not more than -- new tools. Follow these best practices for remote QA work from Gerie Owen, an experienced test manager. For example, communicate more frequently with team members, with more detail and context than usual. Owen also offers advice for organizations that lack sufficient network capacity for remote QA resources. ... Many enterprises must make distributed Agile development work. Read how to manage distributed Agile development and its various challenges, as detailed by software architect and technical advisor Joydip Kanjilal. He outlines, for example, what practices a remote development team can adopt to fulfill the values and principles of Agile. To improve camaraderie, a team might host regular video conferences.



Quote for the day:


"There is no 'one' way to be a perfect leader, but there are a million ways to be a good one." -- Mark W. Boyer


Daily Tech Digest - April 24, 2020

Data: The Fabric of Developers’ Lives

Storage-as-a-Service—we hardly knew about it. Thanks in large part to containers, which offer exceptional scalability, simplicity and high availability, the speed of application development has increased dramatically. Developers need to be able to quickly provision their own data, in just the right amounts, to match that velocity. And, like containers, that data needs to be portable. Provisioning quickly means no more going through storage administrators to get the services they need, which can be a cumbersome and time-consuming process. Solutions like Kubernetes’ on-demand clusters enable developers to procure the data they need when they need it. The abstraction layer provided by a data fabric can empower developers even further. They can write their own APIs, provision data services as needed and move that data between clouds with ease. This is particularly important when dealing with cloud providers that offer different services. Sometimes a developer may need a service that exists in one cloud but not another. It’s critical to have an underlying storage infrastructure that enables applications and their data to be transferred as needs require.


Remember when open source was fun?

When Daniel Stenberg set out to make currency exchange rates available to IRC users, he wasn’t trying to “do open source.” It was 1996 and the term “open source” hadn’t even been coined yet (that came in February 1998). No, he just wanted to build a little utility (“how hard can it be?”), so he started from an existing tool (httpget), made some adjustments, and released what would eventually become known as cURL, a way to transfer data using a variety of protocols. It wasn’t Stenberg’s full-time job, or even his part-time job. “It was completely a side thing,” he says in an interview. “I did it for fun.” Stenberg’s side project has lasted for over 20 years, attracted hundreds of contributors, and has a billion users. Yes, billion with a B. Some of those users contact him with urgent requests to fix this or that bug. Their bosses are angry and they need help RIGHT NOW. “They are getting paid to use my stuff that I do at home without getting paid,” Stenberg notes. Is he annoyed? No. “I do it because it’s fun, right? So I’ve always enjoyed it. And that’s why I still do it.”


New research by the data protection and management software supplier has found 5.8 million tonnes of carbon dioxide will be pumped into the atmosphere this year resulting from the use of storage systems to house and process dark data. Veritas derived the figure by mapping industry data on power consumption from data storage, industry data on emissions from datacentres and its own research. On average, 52% of all data stored by organisations worldwide is likely to be dark data, according to Veritas. With the amount of data growing from 33 zettabytes in 2018 to 175 zettabytes by 2025, there will be 91 zettabytes of dark data in five years’ time – over four times the volume of dark data today. Ravi Rajendran, vice-president and managing director for the Asia South region at Veritas Technologies, said that although companies are trying to reduce their carbon footprint, dark data is often neglected. And with dark data producing more carbon dioxide than 80 countries do individually, Rajendran called for organisations to start taking it seriously. 
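Veritas's projection follows directly from its 52% average; the arithmetic in the paragraph above looks like this:

```python
def dark_data_zb(total_stored_zb, dark_fraction=0.52):
    """Estimated dark data, given total stored data and Veritas's 52% average."""
    return total_stored_zb * dark_fraction

today = dark_data_zb(33)      # roughly 17 ZB of dark data in 2018
by_2025 = dark_data_zb(175)   # 91 ZB, the figure cited above
growth = by_2025 / today      # over four times today's volume
```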


How different generations approach remote work

Maybe it's more millennials that are really pushing the work from home, but if you would think it would be more of your generation. I say that I'm Gen X. Veronica and I both are, of course. But, you would think that it'd be the younger ones that would be all for working from home, to have that freedom. ... When I'm in an office, as you both know, I tend to be a bit of a chatterbox, so it's good for me to have that alone time to really lock things down. But it's different for people. But, Veronica, you and I would be able to speak on this for Gen X, at least, in the research that I saw, NRG found that most Gen X-ers enjoyed working from home because they were really comfortable, and they liked that independence. And they also liked being around their families, and having that quality time, and felt a little more relaxed. Would you say that's accurate? ... You can get up and take a break whenever, and reset your brain to shift tasks, or to find inspiration if you're stuck on something. I think if you can close the door or close your family off, it's OK. My kids are older now, but if they were little, it would be so hard to work from home now. I have an 11-year-old and a 15-year-old, so they can make their own lunch, and walk the dog, and be self-sufficient while I'm down here.



Netgear is ahead of the game with its WiFi 6 router portfolio and it is paying off as the company is seeing a surge in home network upgrades. The catch for Netgear is that its supply chain, sales channels and markets have all been upended by the COVID-19 pandemic. CEO Patrick Lo outlined the moving parts of Netgear's first quarter. We saw two distinct phenomena during the Covid-19 pandemic. Whenever a shelter in place lockdown was declared, business activities fell and demand for our SMB products dropped significantly. At the same time, consumers are quickly finding out that high performance WiFi at home is a necessity and are rushing to upgrade their home WiFi, driving upticks in our consumer WiFi and mobile hotspot sales. We also saw significant channel shift from physical retail channel purchases to online purchases which put strain on the logistics of some of our online sales partners. On an earnings conference call, it became clear that Netgear had a lot to navigate as it pulled its guidance due to COVID-19. The company reported a first quarter net loss of $4.17 million on revenue of $229.96 million, down from $249 million a year ago. On a non-GAAP basis, Netgear's earnings of 21 cents a share were a nickel better than estimates.


Researchers say deep learning will power 5G and 6G ‘cognitive radios’


For decades, amateur two-way radio operators have communicated across entire continents by choosing the right radio frequency at the right time of day, a luxury made possible by having relatively few users and devices sharing the airwaves. But as cellular radios multiply in both phones and Internet of Things devices, finding interference-free frequencies is becoming more difficult, so researchers are planning to use deep learning to create cognitive radios that instantly adjust their radio frequencies to achieve optimal performance. As explained by researchers with Northeastern University’s Institute for the Wireless Internet of Things, the increasing varieties and densities of cellular IoT devices are creating new challenges for wireless network optimization; a given swath of radio frequencies may be shared by a hundred small radios designed to operate in the same general area, each with individual signaling characteristics and variations in adjusting to changed conditions. The sheer number of devices reduces the efficacy of fixed mathematical models when predicting what spectrum fragments may be free at a given split second.


Outsourced DevOps brings benefits, and risks, to IT shops


When IT teams outsource DevOps planning to a third-party service provider, it only exacerbates existing planning issues. Another option is to hire a contract Scrum Master or product manager with DevOps experience to work with the in-house teams. Either way, proceed with an end game of knowledge transfer to build in-house planning expertise. Depending on the organization's attitude toward contractors, the addition of an outside contractor to work on planning can bring some cultural challenges. Some organizations treat contractors as valued members of the team, while others treat them as outsiders -- which makes it challenging to have a contractor in any subject matter expert position. Planning tools, however, are ripe for outsourcing. For example, if an organization lacks the in-house expertise to implement and maintain Atlassian Jira or another planning tool, it can outsource that platform and use a managed version. While it's more common to outsource the build phase of DevOps than it is the planning phase, it still has risks.


Tech Leaders Map Out Post-Pandemic Return to Workplace

Businesses will be turning to enterprise technology to smooth out the process of getting employees back to the workplace in the wake of the coronavirus pandemic, according to a report by Forrester Research. Technology leaders say safety will be a top priority. The information-technology research firm’s report lays out an early-stage road map for IT executives preparing to reopen corporate offices—a process that will vary by industry, but for most businesses will involve multiple stages. Chief information officers and their teams will likely be in the first wave of employees returning to the job site, said Andrew Hewitt, a Forrester analyst serving infrastructure and operations professionals. He said their initial task will be to develop a strategy for keeping employee tech tools—including PCs, mobile devices, monitors, keyboards and mice—germ-free without damaging them. “IT teams will need to have a staging area that’s outside of the front door of the office where employees can bring their home technology in and sanitize it,” Mr. Hewitt said.


Five Attributes of a Great DevOps Platform

Culture plays a significant role in establishing the guidelines when embracing DevOps in any organization. Through DevOps culture, companies seek to bring dev and ops teams into harmony to promote collaboration, automation, process improvements, and continuous, iterative development and deployment methodologies. But above everything else, a sound DevOps culture fundamentally solves one of IT’s biggest people problems: bridging the gap between dev and ops teams so that they stop working in silos and share common goals. According to a Gartner estimate, DevOps efforts fail 90% of the time when infrastructure and operations teams try to drive a DevOps initiative without first nurturing a cultural shift. It is not just about efficient tools or expert staff; it is about the behavioral changes and mindset necessary to effect cultural change. Hence, it is important for firms to consider company culture before selecting a potential DevOps tool for their development.


Use tokens for microservices authentication and authorization


STS enables clients to obtain the credentials they need to access multiple services that live across distributed environments. It issues digital security tokens that stay with users from the beginning of their session and continuously validate their permission for each service they call. An STS can also reissue, exchange and cancel security tokens as needed. The STS must connect with an enterprise user directory that contains all the details about user roles and responsibilities. This directory, and any connection made to it, should be properly secured as well, otherwise users could elevate their permissions just by editing policies on their own. Consider segmenting user access policies based on roles and activities. For instance, identify the individuals who have administrative capabilities. Or, you might limit a developer's access permissions to only include the services they are supposed to work on. ... Not all microservices permission and security checks are based around a human user.
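The issue-and-validate cycle described above can be sketched with nothing but the standard library. The key, claims, and role names below are illustrative, and a production STS would use a standard format such as signed JWTs rather than this hand-rolled token:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"sts-demo-signing-key"  # in practice, held by the STS

def issue_token(user, roles, ttl=300):
    """STS-style issuance: sign a claims payload that travels with the user."""
    claims = {"sub": user, "roles": roles, "exp": time.time() + ttl}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def authorize(token, required_role):
    """Each service re-validates the token and its role claim on every call."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or tampered token
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return time.time() < claims["exp"] and required_role in claims["roles"]

token = issue_token("alice", ["developer"])
print(authorize(token, "developer"))  # True
print(authorize(token, "admin"))      # False: role not granted
```

The expiry check mirrors the STS's ability to cancel or reissue tokens: a short TTL bounds how long a stolen token remains useful.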



Quote for the day:


"I'm not crazy about reality, but it's still the only place to get a decent meal." -- Groucho Marx


Daily Tech Digest - April 23, 2020

Indian IT desperately needed a new business model and coronavirus gave it one

Some IT companies have implemented "employee productivity trackers like webcam-based movement capture, hourly timesheet entry, tracking of keyboards, and so on, to ensure employees are working at home," Yugal Joshi, vice-president at Texas-based consultancy Everest Group, told Quartz. "This indicates a deep-rooted malaise in Indian IT/ITes industry where the senior management generally mistrusts people," he added. Two, unlike the retail or manufacturing sectors that cannot operate under current social distancing norms, the top-tier Indian IT companies and their mid-sized brethren are responsible for keeping the lights on for a large collection of global companies -- some of which depend on their people every second of the day. This includes banks, utility companies, retailers, and, of course, pharmaceuticals. With the ongoing coronavirus outbreak, all of these industries are now being serviced from the apartments and houses of India's IT workforce, which, as you can imagine, is a supremely difficult and exasperating task for everyone involved. Most of IT's clients have ironclad regulatory and privacy riders that have needed to be tweaked considerably in light of the coronavirus.



How a basic cross-training program can ease disruptions on the IT team

If the coronavirus hasn't disrupted your business operations yet, there's a good chance it will soon. This first wave of illness will not be the last time the coronavirus disrupts daily business operations. First companies had to adjust to remote work for all employees. The next challenge may be filling in for colleagues who are out sick or caring for family members or friends who are ill. A cross-training program can make this transition go smoothly. Sam Maley, an IT operations manager at Bailey & Associates, an IT consultancy, said cross-training can minimize disruptions and reduce stress levels due to absenteeism. "Cross-training programs are designed to build versatility and skill overlaps in your team members," he said. Jeff Fleischman, CMO at the consulting firm Altimetrik, said cross-training needs to be part of business continuity plans. "To receive buy-in from top management, quantify the impact disruption has on the business such as revenue loss, reputational risk, defaulting on contractual obligations, and failing to meet regulatory requirements, and then explain how cross-training would eliminate these risks," Fleischman said.


Kubernetes vs. VMware: Drive the choice with IT architecture


The choice to run containers in VMs or VMs in containers is an architectural design decision. This is because there's a line of thought that containers are the ideal abstraction for multi-cloud application delivery. Though VMware assures admins that containers and VMs are the same in vSphere, it's difficult to draw a similar comparison for Kubernetes and VMs. Kubernetes is an orchestration product that admins use primarily for containers. In theory, Kubernetes could manage compute resources other than containers. However, with a container as the primary abstraction layer, traditional VM management tools don't map directly. Networking can help solve this issue, and KubeVirt could be the answer. KubeVirt uses Kubernetes network architecture and plugins, rather than hypervisor abstractions such as vSwitches, to manage networking. As a result, products must switch to network management based on Kubernetes namespaces. That's not necessarily a bad thing; it's just an overall shift from a VM-centric operating model to a container-centric operating model.



Researchers Release Open Source Counterfactual Machine Learning Library

Three Counterfactuals for Loan Application Scenario
Exactly what machine learning counterfactuals are, and why they are important, is best explained by example. Suppose a loan company has a trained ML model that is used to approve or decline customers' loan applications. The predictor variables (often called features in ML terminology) are things like annual income, debt, sex, savings, and so on. A customer submits a loan application. Their income is $45,000, their debt is $11,000, their age is 29 and their savings are $6,000. The application is declined. A counterfactual is a change to one or more predictor values that flips the result. For example, one possible counterfactual could be stated in words as, "If your income was increased to $60,000 then your application would have been approved." In general, there will be many possible counterfactuals for a given ML model and set of inputs. Two other counterfactuals might be, "If your income was increased to $50,000 and your debt was decreased to $9,000 then your application would have been approved" and, "If your income was increased to $48,000 and your age was changed to 36 then your application would have been approved." Figure 1 illustrates three such counterfactuals for a loan scenario.
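The search for a counterfactual can be sketched in a few lines. The scoring function and threshold below are made-up stand-ins for the trained model, which in practice is treated as a black box:

```python
def model_approves(income, debt, savings):
    """Stand-in for the trained model, which in practice is a black box."""
    return income - 1.5 * debt + 2.0 * savings >= 45000

def income_counterfactual(income, debt, savings, step=1000, limit=100000):
    """Find the smallest income increase (in steps) that flips the decision."""
    for extra in range(0, limit, step):
        if model_approves(income + extra, debt, savings):
            return income + extra
    return None  # no counterfactual found within the search range

# The declined applicant from the example: income 45k, debt 11k, savings 6k
print(model_approves(45000, 11000, 6000))         # False: declined
print(income_counterfactual(45000, 11000, 6000))  # 50000
```

Real counterfactual libraries search over several features at once and add constraints (e.g., keep changes plausible and small), but the core idea is this same flip-the-decision search.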


What is value stream mapping? A lean technique for improving business processes

Before you can start building a value stream map, you need to objectively evaluate your organization’s business processes, products and systems. Start by talking to leadership, department heads and other key stakeholders who can give you more insight into what can be improved. You’ll need to get hands-on experience with the process, product or system yourself and have other employees walk you through their part. It’s important to collect as much data as possible — for example, any inefficiencies in the process, how many workers are involved, what resources are used and any downtime. Any potentially relevant or noteworthy data is helpful in fleshing out your final VSM flow chart and achieving insights into what can be refined or improved. You’ll then create two separate VSM flow charts — a current state value stream map and a future state value stream map. Your current state VSM will be used to establish how the process currently runs and functions in the business. This is where you will demonstrate issues, significant findings and establish key requirements. The future state VSM, on the other hand, focuses on what your process will look like once your organization has completed all of the necessary improvements.


Ethernet consortium announces completion of 800GbE spec 

Based on many of the technologies used in the current top-end 400 Gigabit Ethernet protocol, the new spec is formally known as 800GBASE-R. The consortium that designed it (then known as the 25 Gigabit Ethernet Consortium) was also instrumental in developing the 25, 50, and 100 Gigabit Ethernet protocols and includes Broadcom, Cisco, Google, and Microsoft among its members. The 800GbE spec adds new media access control (MAC) and physical coding sublayer (PCS) methods, tweaking these functions to distribute data across eight physical lanes running at a native 106.25Gbps. (A lane can be a copper twisted pair or, in optical cables, a strand of fiber or a wavelength.) The 800GBASE-R specification is built on two 400GbE PCSs to create a single MAC which operates at a combined 800Gbps. And while the focus is on eight 106.25G lanes, it's not locked in; it is possible to run 16 lanes at half the speed, or 53.125Gbps. The new standard offers half the latency of the 400G Ethernet specification, and it also cuts the forward error correction (FEC) overhead on networks running at 50Gbps, 100Gbps, and 200Gbps by half, thus reducing the packet-processing load on the NIC.
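The lane arithmetic works out as follows; this is a back-of-the-envelope sketch, and the exact encoding and FEC overheads are defined in the spec itself:

```python
lane_rate_gbps = 106.25       # native rate per physical lane
lanes = 8
raw = lanes * lane_rate_gbps  # total signaling rate on the wire
print(raw)                    # 850.0

# The same aggregate can also run over 16 lanes at half the speed:
assert 16 * 53.125 == raw

# The gap between the 850Gbps raw rate and the 800Gbps MAC rate is
# consumed by line encoding and forward error correction overhead.
overhead = raw - 800
print(overhead)               # 50.0
```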


Application performance for remote workers becomes primary network issue for businesses


In addition to the top-line finding of dealing with complexity and performance, the study also highlighted that cost had become less of an issue for respondents, who also cited significant investment in automation, security, cloud connectivity and the potential of 5G. Drilling deeper into the pressing issues for firms, Aryaka found that as the number of remote workers increases across the globe, productivity and remote application performance have become more important for organisations across Europe, the Middle East and Africa (EMEA). Some 45% of UK businesses noted that slow application performance led to a poor user experience for remote and mobile users, and that it was a significant issue faced by IT and support teams. Accessing and integrating cloud and software-as-a-service (SaaS) applications was one of the most pressing issues for UK IT departments, cited by 39%.


Ransomware is now the biggest online menace you need to worry about - here's why


One of the reasons why ransomware attacks have risen so much is that cyber criminals increasingly view them as the simplest and quickest means of making money from compromised networks. With ransomware, attackers can lock down an organisation's entire network and demand a bitcoin payment in exchange for the decryption key. Ransomware attacks are often successful because organisations opt to pay the ransom demand, viewing it as the quickest and easiest way to restore functionality to the network, despite authorities warning never to give in to the demands of extortionists. These ransom demands commonly reach six-figure sums and, because the transfer is made in bitcoin, it's relatively simple for the criminals to launder the money without it being traced back to them. "The 'beauty' of the ransomware model is you only need to write the ransomware once and its potential to infect is only limited by its reach, which with the internet is unlimited," Ed Williams, EMEA director of SpiderLabs, the research division at Trustwave, told ZDNet.


Remote business continuity techniques to implement now


This is not just an issue when facing a pandemic. If your business continuity plan addresses only short-term disruptions, such as those that last less than a month, it may not be prepared for an extended outage. Your technology disaster recovery plan may need to be activated if outages occur due to insufficient available IT staff or technology disruptions caused by a shortage of vendor personnel. Fortunately, many data centers are designed to operate without human intervention or with remote access to system administration functions. Technology vendors frequently use managed IT resources, such as cloud-based systems, to support their service offerings. This reduces the likelihood of outages as long as the managed service providers are able to keep their systems operational. Because many organizations use remotely hosted applications, users can keep using those systems as long as their vendors are able to keep their operations working. The real challenge for organizations that have mostly locally hosted systems and databases is to remotely manage those assets.


New Enterprise Graph Framework for Data Scientists Leverages Machine Learning

The new Neo4j for Graph Data Science framework is designed to enable data scientists to operationalize better analytics and machine learning models that infer behavior based on connected data and network structures, Frame described. The framework, she said in a statement announcing the product release, is intended to provide the most expeditious way to generate better predictions. "A common misconception in data science is that more data increases accuracy and reduces false positives," she explained. "In reality, many data science models overlook the most predictive elements within data -- the connections and structures that lie within. Neo4j for Graph Data Science was conceived for this purpose -- to improve the predictive accuracy of machine learning, or answer previously unanswerable analytics questions, using the relationships inherent within existing data."



Quote for the day:


"Leadership is the wise use of power. Power is the capacity to translate intention into reality and sustain it." -- Warren Bennis


Daily Tech Digest - April 22, 2020

Cisco integrates SD-WAN connectivity with Google Cloud

The Cisco/Google platform is important because software- and infrastructure-as-a-service (SaaS and IaaS) offerings have been driving SD-WAN implementations in the past year, experts say. “One of the key drivers of SD-WAN has been the increasing consumption of cloud services in the enterprise, across both IaaS and SaaS applications,” said Rohit Mehra, vice president, network infrastructure at IDC. “With some of the largest public cloud providers playing an increasing role in how these enterprise apps are consumed and delivered, and bringing their vast global networks to bear, they will increasingly have a role to play with how WANs are architected going forward.” For enterprises, one of the key takeaways from this announcement is that “SD-WANs will now be able to play a better functional role in the delivery of cloud services such as IaaS and SaaS, and likewise, the large public-cloud purveyors will benefit from providing a stronger value proposition towards multi-cloud deployments,” Mehra said. "Secondly, enterprises will benefit in terms of extending policy and governance beyond applications to other attributes such as locations/geo and multiple clouds.”



The new normal: A step-by-step guide for the enterprise

From a business perspective, we need to identify and understand the negative effects that occurred during the lockdown. What additional damage will likely occur in the short and long terms? This can range from relatively minor problems, such as a slowdown of some customer deliveries or lack of materials for manufacturing, to a complete shutdown of some operations due to on-premises systems that could not be maintained or fixed during the lockdown. You need to assign dollar amounts to each issue. Keep in mind that some of these will be hard costs, meaning sales and billing. Others will be soft costs, such as reputation. What points hurt the business the most? We need this information to prioritize triage. For most enterprises, this step will immediately identify the need to migrate some assets to cloud. The migration will typically target existing on-premises systems that managed to limp through the crisis. Based on historical migration data, the most common move will involve a “lift and shift” of resources, such as storage and compute, to a public cloud provider. Most enterprises will opt to refactor the applications at a later date; a few will refactor as the applications migrate.


Here are six tech roles companies want to fill now, despite the coronavirus lockdown


"The fact that recruitment is still continuing with relative strength in IT is perhaps unsurprising due to the ongoing need across most sectors to conduct operations remotely," said Ann Swain, CEO of APSCo. John Gaughan, managing director of technology recruitment firm Finlay James, said he has a number of clients who are hiring and using remote onboarding when filling SaaS tech sales roles and technology leadership positions. Recruiters are switching from in-person interviews to video meetings with candidates, and in some cases, with everyone working from home, it may be some time before new recruits actually meet the people they are working with. The APSCo report also noted that recruitment for marketing has held up surprisingly well, which it said is probably down to businesses ramping up their digital marketing and communications activities. There has also been an increase in roles involving employee engagement. "With many teams now working from home, the challenge of keeping remote employees engaged and operating as a cohesive unit has never been greater," the report said.


Contactless Payments: Healthy COVID-19 Defense


From a fraud-fighting standpoint, compared with swiping a card and signing a paper receipt, contactless is much more secure. And while some call these capabilities "tap and go," in reality, there's no contact required: You just have to wave your card or compatible smartphone close to the card reader until it beeps. Cards with this capability began to be rolled out in the U.K. in 2008, and the vast majority of payment terminals in stores now work with them. Other systems that don't get refreshed very often - for example, inside buses - have been slowly catching up. Here in the Scottish city of Dundee, last year most buses finally got upgraded with the ability to accept contactless payments. Many newer smartphones also have contactless capability via Apple Pay, Android Pay or Samsung Pay. Just load a payment card and use your smartphone to pay without touching anything, up to certain amounts. As a bonus, the smartphone-based approaches add additional layers of security, such as needing to use your fingerprint or face to unlock the contactless payment capability.


Remote Agile (Part 4): Anti-Patterns

Hybrid events create two classes of teammates — remote and co-located — where the co-located folks are calling the shots. Beware of distance bias — when out of sight means out of mind — and avoid creating a privileged subclass of teammates: “Distance biases have become all too common in today’s globalized world. They emerge in meetings when folks in the room fail to gather input from their remote colleagues, who may be dialing in on a conference line.” To avoid this scenario, make sure that once a single participant joins remotely, all other participants “dial in,” too, to level the playing field. Every communication feels like a (formal) meeting. ... Instead, put trust in people, uphold the prime directive, and be surprised by what capable, self-organizing people can achieve once you get out of their way. Trust won’t be built by surveilling and micro-managing team members. Therefore, don’t go rogue; the prime directive rules more than ever in a remote agile setup. Trust people and do not spy on them — no matter how tempting it might be. Read more about the damaging effect of a downward-spiraling trust dynamic from Esther Derby.


COVID-19 & The Digital Imperative


In a recent interview, John Chambers, former Cisco CEO and now Venture Capitalist, said the pandemic will force many “companies to use this moment to make the transition to digital. Things will get worse before they get better— that is the realistic optimist in me speaking,” said Chambers, who has predicted up to 40% of the Fortune 500 and 70% of startups will no longer be around in a decade if they don’t make the digital transition. The disruptions brought about by the pandemic can be expected to accelerate the shift to digital that has already been underway. It is not just that organizations the world over have radically altered their work environments to accommodate work from home and technologies such as video conferencing and remote networking on a massive scale. It is also that the consequences of the pandemic are likely creating digital disruption opportunities and imperatives across the economy, in industries as diverse as food and beverage, hospitality, real estate, travel, and government.


How microsegmentation architectures differ

It's important to remember that microsegmentation is not just a data center-oriented technology. "Many security incidents start on end-user workstations, because employees click on phishing links or their systems become compromised by other means," Cross says. From that initial point of infection, attackers can spread throughout an organization's network. "A microsegmentation platform should be able to enforce policies in the data center, on cloud workloads, and on end-user workstations from a single console," he explains. "It should also be able to stop attacks from spreading in any of these environments." As with many emerging technologies, vendors are approaching microsegmentation from various directions. Three traditional microsegmentation types are host-agent segmentation, hypervisor segmentation and network segmentation. ... This microsegmentation type relies on agents positioned in the endpoints. All data flows are visible and relayed to a central manager, an approach that can help reduce the pain of discovering challenging protocols or encrypted traffic.
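The single-console idea can be sketched as one default-deny policy table consulted for every flow, wherever it occurs; all role names and rules here are hypothetical:

```python
# One policy table, default deny, consulted for every flow regardless of
# whether it occurs in the data center, the cloud, or on a workstation.
# All role names and rules are illustrative.
POLICIES = {
    ("workstation", "web-tier"): True,
    ("web-tier", "db-tier"): True,
    ("workstation", "db-tier"): False,  # block lateral movement to data
}

def flow_allowed(src_role: str, dst_role: str) -> bool:
    """Default-deny: only explicitly allowed flows pass."""
    return POLICIES.get((src_role, dst_role), False)

print(flow_allowed("workstation", "web-tier"))  # True
print(flow_allowed("workstation", "db-tier"))   # False
print(flow_allowed("db-tier", "workstation"))   # False (no rule: denied)
```

The default-deny fallback is what stops an attack from spreading: a compromised workstation gets no path to the data tier because no rule grants one.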


Google wants to make it easier to analyse health data in the cloud


Dr John Halamka, president of Mayo Clinic Platform, said: "We're in a time where technology needs to work fast, securely, and most importantly in a way that furthers our dedication to our patients. Google Cloud's Healthcare API accelerates data liquidity among stakeholders, and in-return, will help us better serve our patients." The issue of interoperability remains a tricky subject within healthcare. Battles over data formats and ownership stymies efforts to join up healthcare systems and make patient data available to healthcare professionals whenever and wherever they need it. In the US, inroads have been made recently through the passing of rules by Centers for Medicare and Medicaid Services (CMS) and the National Coordinator for Health Information Technology (ONC) to make it easier for healthcare organisations to exchange patient data, and for patients to access their own information. Google said its Cloud Healthcare API was designed to scale and support interoperability and patient access. It added that the COVID-19 pandemic had made the need for increased data interoperability more important than ever.


How developer teams went remote overnight

Remote work isn’t new for communications API specialist Twilio, but the pandemic has caused a massive shift. Prior to the coronavirus outbreak, CEO Jeff Lawson told TechCrunch that around 10 percent of the company worked remotely. “For a company like us to go from partially virtual to fully virtual in a short period of time, it’s not without its hiccups, but it has worked pretty well,” he said. That 10 percent of remote workers included the team of Marcos Placona, manager for developer evangelism at Twilio. “My team has always worked on a distributed basis with direct reports in the US, UK, and across Europe,” Placona told InfoWorld. The various time zones involved make it “tough to work this way,” he admits, “but we have regular check-ins with the team and individuals with weekly one-to-ones.” Developer evangelists at Twilio still contribute code and have to track contributions, alongside writing documentation and filtering through reams of customer feedback. During the pandemic this team has shifted to holding daily remote stand-ups.


A Tale of 3 Breaches: Incident Response Challenges

Three recently disclosed health data security incidents - including the discovery of a large email hack that happened nearly a year ago - serve as reminders of the ongoing incident response challenges facing healthcare organizations. A 2019 email hacking incident that affected 112,000 individuals was disclosed last week by Dearborn, Michigan-based Beaumont Health. Also recently reported were: a February ransomware attack on Wilmington, Del.-based substance abuse treatment provider Brandywine Counseling and Community Services that affected clinical records of an undisclosed number of patients, and a phishing scam impacting more than 27,000 patients and employees of Wisconsin-based Advocate Aurora Health. The COVID-19 crisis is likely to make it even more difficult for healthcare organizations to respond to security incidents, some observers say. "As long as COVID-19 drives IT activities in supporting remote workers and setting up patient triage tents with access to technology infrastructure, IT may have difficulty monitoring network activity for anomalous events unless a security operations center is in place to monitor around the clock, along with centralized log event management that can automate detection of and alerting on activities of concern," notes Keith Fricke.



Quote for the day:


"Many men may see the King in a Kid but it takes a true leader to nurture it" -- Bernard Kelvin Clive


Daily Tech Digest - April 21, 2020

Stay Ahead of the 5G and DevOps Race with Continuous Network Monitoring

Automobiles aside, another industry that benefits from being proactive rather than reactive is telecommunications. Not only does the telecoms world require routine checks and maintenance, but it also needs to identify problems before they cause larger issues or disruptions. Networks are evolving rapidly, and this will continue as 5G deployments expand; so will the need for regularly scheduled maintenance and examinations. DevOps–a set of practices that automates processes between software development (Dev) and IT operations (Ops)–along with continuous delivery (CD) allows for a level of agility that enables new features and services to be deployed within weeks or days. There are four stages in establishing these services–design, deploy, test and operate–all of which demand a constant pace and network monitoring. To maximize DevOps and CD, including the speed benefits that come with both, predictive network monitoring (PNM) is vital.


Deploying Edge Cloud Solutions Without Sacrificing Security  

First, let's think about the structure of edge cloud systems. In most implementations, edges sit within organizations' computing boundaries, and so they will be protected by a wide variety of tools that focus on perimeter scanning and intrusion detection. However, that's not quite the whole story: in most systems, there will also be a tunnel from the edge straight to cloud storage. Sending data from the edge to the cloud in a secure way is fairly straightforward, because organizations control the infrastructure that is used to encrypt and verify it. The problem arises when the cloud needs to send data back to the edge for processing. The challenge here is to ensure that this data is authenticated and verified, and is therefore safe to admit into an organization's internal systems. First, and most obviously, edge cloud systems fragment data. Having each device connected directly to cloud services might incur a performance loss, but at least the data is centralized and can be covered by a single cloud security policy. Because edge cloud servers – almost by definition – need to be connected to many different devices, they represent a nightmare when it comes to securing these same connections.
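One common way to authenticate data flowing from the cloud back to the edge is a message authentication code over a pre-shared key. A minimal, stdlib-only sketch, with an illustrative key and payload:

```python
import hashlib
import hmac

SHARED_KEY = b"edge-cloud-demo-key"  # provisioned to both cloud and edge

def sign(payload: bytes) -> bytes:
    """Cloud side: attach a MAC so the edge can check origin and integrity."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def edge_accepts(payload: bytes, tag: bytes) -> bool:
    """Edge side: admit data into internal systems only if the MAC verifies."""
    return hmac.compare_digest(tag, sign(payload))

msg = b'{"job": "inference-batch-17"}'
tag = sign(msg)
print(edge_accepts(msg, tag))                      # True
print(edge_accepts(b'{"job": "tampered"}', tag))   # False
```

In production this check would typically ride on mutually authenticated TLS rather than a bare shared key, but the principle is the same: verify before processing.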


DDoS in the Time of COVID-19: Attacks and Raids


Unfortunately, or fortunately, cyber security is an essential business. As a result, those working in the field are not getting to experience any downtime during quarantine. Many of us have been working around the clock, fighting off waves of attacks and helping other essential businesses adjust to a remote workforce as global environments change. Along the way we have learned a few things about how a modern society deals with a pandemic. Obviously, a global Shelter-in-Place resulted in an unanticipated surge in traffic. As lockdowns began in China and worked their way west, we began to see massive spikes in streaming and gaming services. These unanticipated surges in traffic required digital content providers to throttle or downgrade streaming services across Europe to prevent networks from overloading. The COVID-19 pandemic also highlights the importance of service availability during a global crisis. Due to the forced digitalization of the workforce and a global Shelter-in-Place, the world became heavily dependent on a number of digital services during isolation. Degradation or an outage impacting these services during the pandemic could quickly spark speculation and/or panic.



Governing by data: Limits and opportunities

Healthcare is perhaps the most obvious area of public service for the adoption of data analysis, given that medical science is largely built on this. The UK government has been led by data and science in reacting to the coronavirus epidemic over recent weeks, making a celebrity out of the UK’s chief medical officer Chris Whitty. But politics can trump data analysis. David Nutt, professor of neuropsychopharmacology at Imperial College London, was sacked as the government’s chief advisor on drugs in 2009 after saying policy in this area was not based on evidence. Nutt’s research found that legal alcohol was more harmful to society than illegal drugs, although heroin was rated as having the greatest damage on individuals. “The logical conclusion is, if government drugs policy is about harms, alcohol should be the primary focus,” Nutt writes in his new book Drink? The new science of alcohol and your health. “But for political reasons, this evidence has been ignored.”


IT directors plan to protect cloud budgets and consolidate vendors during downturn


According to the survey, agile delivery and cloud cost optimization are the most important priorities for tech leaders at the moment. IT managers will be using these tools to respond more quickly to customer demands and increase fiscal discipline. Agile and DevOps practices will drive faster software releases with lower failure rates and quicker recovery from incidents. IT leaders need to pay attention to internal customers as well. The report recommends that teams should move from reactive infrastructure management to proactive support of digital transformation efforts by working closely with business owners, developers, product managers, and tech partners. The financial crunch due to the coronavirus will motivate financial teams to track down redundant, unused, and underused cloud services and turn them off. IT managers also reported that they will analyze workloads and identify the right pricing models—on-demand, spot, or reserved—to maximize savings. The survey also found that the gap between public cloud platform providers is closing, with Google Cloud, Amazon Web Services, and Microsoft Azure each getting an equal share of votes as a preferred cloud provider. Tech leaders are looking for providers that can deliver on business needs.


The Bootstrap 4 Grid Deconstructed

While upgrading my skillset and implementing an Angular-based website, I again looked at the Bootstrap Grid system and decided to deep-dive into it and see what makes it work. I'll be using my original article as a kind of template for the structure of this article and will sometimes reference it for things explained there. I will also assume a basic knowledge of HTML and CSS: that you know what a <div>, <span>, etc. are, and that you know about CSS inheritance rules. I also assume you have read the article about the Bootstrap 3 grid system, so you are familiar with responsive breakpoints and the like. ... The Grid: It's Still All About Rows and Columns. Nothing has changed here: we still need to define a container with rows which in turn contain columns. However, where in the Bootstrap 3 grid you always had to specify the width of your columns and make them add up to a total of 12, this is no longer true for the Bootstrap 4 grid. The Bootstrap 4 grid defines a simple col class which allows you to evenly spread your columns over the width of your page, with each column taking up as much space as its content needs.
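As a minimal sketch of that difference (the text content is illustrative; the container, row, and col class names are Bootstrap 4's documented grid classes), equal-width columns no longer need an explicit width suffix:

```html
<!-- Bootstrap 4: a container holds rows; each row holds columns. -->
<!-- Plain "col" (no number) lets the grid divide the row's width -->
<!-- evenly, so these three columns each take one third of the row. -->
<div class="container">
  <div class="row">
    <div class="col">First</div>
    <div class="col">Second</div>
    <div class="col">Third</div>
  </div>
</div>
```

You can also mix sized and unsized columns in one row, e.g. one `col-6` beside two plain `col` siblings: the sized column takes its 6 of 12 units and the remaining width is split evenly between the others.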


USB-C power for laptops is still complicated - and here's why

USB cable with magnetic interchangeable heads
The problem is that while USB-C can support any and all of those, what actually works is down to the capabilities of the port and of the cable itself (more specifically, the control chips at either end of the cable). Some laptops have one USB-C port that supports the PD (Power Delivery) standard and one that doesn't, because that way you can use a cheaper controller chip and only have to route the power down one path on the motherboard. Different protocols have different licencing requirements, so not every cable supports Thunderbolt. And you need specific controller chips in the cable to support PD. That's why the UNO interchangeable cable we looked at recently didn't support PD, making it an almost, but not quite, universal cable. The £46/$55 Infinity Cable (also from Chargeasap) has some nice tweaks: a cord wrap; a smaller, less bright LED on the cable so you know when power is flowing but you don't get dazzled by your phone cable at night; and the 15-year warranty that presumably inspired the name. But the big change is that it supports PD up to 100W. The Infinity cable has USB-C on one end, with an optional ($5) USB-A adapter for when you need to use an older port; the other end is a magnet with interchangeable connectors for USB-C, Micro-USB and Lightning. The magnets are strong -- get the tip close to the cable and it snaps on securely, but if you yank on the cable the tip will come off before you pull your device off the table.


The Internet Only Works During A Pandemic Because We Killed Net Neutrality

In fact, networks in China and Italy, like here in the States, have (with a few exceptions) held up reasonably well under the massive load of telecommuting and home learning. Not because of net neutrality policy, but because network engineers are generally good at their jobs. While there have been some network problems, they're usually of the "last mile" variety in both the EU and US. As in, your ISP never upgraded that "last mile" to your house, so you're still stuck on a DSL line from around 2007 that struggles to handle Zoom teleconferencing. But most core networks around the world have held up rather admirably. The claim that the EU was suffering some kind of exceptional congestion problem appears to have originated among some EU regulators who simply urged Netflix to reduce bandwidth consumption by 25% to pre-emptively help lighten the load. There was no supporting public evidence of actual harm. The move was precautionary.


How to overcome application modernisation barriers


“We’re talking about IT estates that have grown up over the past 30 to 40 years, and you find that many of these organisations have not invested in technology over time,” he says, adding that a lack of integration between these applications is a major barrier to building agile, modern application portfolios. Like Mendix’s Ford, Fairclough recommends that modernisation projects be divided into “prioritised chunks”, which he says enables IT teams to tackle the most important things first. “Maybe there are some things that you don't even need to tackle, so actually you segment and decide that we can run those IT systems over there for another few years and then just retire them,” he says. Describing a challenging modernisation project he worked on, Fairclough says the amount of work required to complete the project had been “totally underestimated”. He says the project involved an IT estate of more than 500 applications, which meant the customer did not understand how everything was connected. As a consequence, project costs were pushed up “exponentially”.


Failover Conf Q&A on Building Reliable Systems: People, Process, and Practice

The biggest challenge associated with the topic of reliability is knowing where to invest your time and energies. We’re never ‘done’ making a system reliable, so how do we know what components are most critical? Where will we get the highest ROI? Furthermore, how do we decide that a system is reliable enough? To answer that last question, set recovery time and recovery point objectives (RTOs and RPOs) and let yourself be guided by them. Based on those metrics, decide where you should be investing your time. To decide where to start improving the overall reliability of your system, you need to know how all of the components interact, and identify the most critical components and bottlenecks. You can spend all of your time making a database reliable, but that won’t matter if it sits behind a heavily used but unreliable caching layer. Dependency graphs are great for visualising how the components of your service fit together and will allow you to identify the places where you will reap the biggest reliability rewards. The challenge here is that dependency graphs get stale ridiculously quickly unless they are automated.
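The advice above — set RTOs and RPOs, then invest where incidents breach them — can be sketched in a few lines. This is a hypothetical illustration; the component names, objective values, and incident figures are all invented for the example:

```python
# Hypothetical per-component objectives: (RTO seconds, RPO seconds).
# RTO bounds how long recovery may take; RPO bounds how much data
# (measured in seconds of writes) may be lost.
OBJECTIVES = {
    "database": (300, 60),
    "cache":    (60,  0),
    "frontend": (120, 0),
}

def meets_objectives(component, recovery_seconds, data_loss_seconds):
    """True if a measured incident stayed within the component's
    recovery time and recovery point objectives."""
    rto, rpo = OBJECTIVES[component]
    return recovery_seconds <= rto and data_loss_seconds <= rpo

# Last measured incident per component: (recovery, data loss), seconds.
incidents = {
    "database": (420, 30),   # recovered in 7 min -- breaches the 5-min RTO
    "cache":    (45,  0),
    "frontend": (90,  0),
}

# Components whose last incident breached an objective are a crude
# shortlist for where reliability investment pays off first.
breaches = [name for name, (rec, loss) in incidents.items()
            if not meets_objectives(name, rec, loss)]
```

Here only the database lands on the shortlist, which answers the "reliable enough?" question mechanically: the cache and frontend are within budget, so further effort there has lower ROI than fixing database recovery time.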



Quote for the day:


"When you can't make them see the light, make them feel the heat." - Ronald Reagan