Daily Tech Digest - April 28, 2020

WFH model disrupting network security business practices

“Social distancing measures that call for employees to work from home when possible have dramatically changed patterns of connection to enterprise networks,” said Rodney Joffe, chairman of NISC and senior vice-president and fellow at Neustar. “More than 90% of an organisation’s employees typically connect to the network locally, with a slim minority relying on remote connectivity via a VPN, but that dynamic has flipped. The dramatic increase in VPN use has led to frequent connectivity issues, and — especially considering the disruption to usual security practices — it also creates significant risk, as it multiplies the potential impact of a distributed denial-of-service (DDoS) attack. VPNs are an easy vector for a DDoS attack.” An increase in the size of volumetric attacks on networks has been detected, with Neustar recently mitigating an attack measured at 1.17 terabits per second that required unique and diverse tactics to fend off successfully. “In times like these,” continued Joffe, “an always-on managed DDoS protection service is critical.”


Third-party compliance risk could become a bigger problem

“Remote working has been hastily adopted by suppliers to keep their business running, so it’s unlikely every organization or employee is following best practices,” said Vidhya Balasubramanian, managing vice president in the Gartner Legal and Compliance practice. “Legal and compliance leaders are concerned about the new risks this highly disruptive environment has created for their organizations.” Bribery and corruption, privacy, fraud, and ethical conduct were each cited by a significant share of respondents (10% for each) as the most-increased third-party risks. “Legal and compliance leaders need to act now to mitigate third-party risk while still enabling their supply chain partners to flex to the current pressures on the system,” said Ms. Balasubramanian. “This will likely mean managing the contractual risks and opportunities of current relationships, mitigating emerging issues, and streamlining due diligence for new third parties. ...”


To meet head-on the scale of the challenge organizations face in putting that response in place fast and at scale, there’s now an unanswerable case for standing up virtual assistants. They can help achieve two vital tasks. The first is to provide automated answers to customers’ basic questions. That relieves human staff so they can focus on more complex and higher-priority issues. That’s borne out by research Accenture has done, which shows that during a time of crisis most customers prefer to turn to contact centers to get answers to urgent and complex issues. The other essential role virtual assistants can play is to give call center workers faster and better access to the information they need to better support customers. So, what should organizations focus on to get virtual assistants up and running as fast as possible? There are a couple of critical areas. The first is speed of implementation, which requires the relevant infrastructure, management systems and processes all to be set up quickly. Next is making sure that virtual assistants are trained on accurate and relevant content.



How IoT changes Banks and Fintech companies

Advantages of IoT in finance: Before studying and adopting IoT in the fintech and banking domain, company managers need to recognize the benefits the technology offers. Here are the main drivers of IoT adoption in fintech: Customized client service: Banks can use IoT to collect more information about their clients. After gathering real-time insights about customers’ needs and interests, organizations can deliver custom content and a personalized experience. In turn, companies connect with their target audience in more ways and to greater benefit. Improved decision-making: IoT helps businesses gather data for credit risk assessment. With D2D (device-to-device) communication protocols and sensor deployments, asset management firms can obtain relevant data from other sectors such as retail, agriculture, etc.


A GIF Image Could Have Let Hackers Hijack Microsoft Teams at Your Firm

Unfortunately, as the threat researchers at CyberArk explain, a flaw in both the desktop and web browser editions of Microsoft Teams could have been exploited by malicious hackers to read users' messages, send messages pretending to be from users, create groups, and control Teams accounts in a variety of ways. In fact, a single .GIF image sent to a Microsoft Teams user could have been enough to hijack multiple business accounts, traversing through an organisation like a worm. Users wouldn't even have to share the dangerous .GIF to be impacted. All that would be needed was for other users to see the .GIF image via Microsoft Teams, each time stealing authentication tokens and dramatically increasing the attack's ability to spread through an organisation. ... Many businesses are currently struggling enough without the additional nightmare of cybercriminals stealing their sensitive corporate secrets and compromising their network. Fortunately, for the attack to succeed hackers would have to have already compromised a subdomain belonging to the targeted organisation, on which to host the malicious image.


7 Habits of Highly Effective (Remote) SOCs

We're doing everything we can to make our shift to a remote SOC seamless for the team. But we're also being super cognizant of the quality of our work output. We use a quality control (QC) standard, Acceptable Quality Limits (AQL), to tell us how many alerts and incidents we should review each day. We then randomly select a number (based on AQL) of alerts, investigations and incidents and review them using a check sheet. We send the results to the team using a Slack workflow. Reviewing the results with the team lets us know how we're doing. It lets us know how we can adjust and improve. And no, we never expect perfection. This one is a bit obvious but it's worth stating. Since we're no longer working alongside each other, effective communication is crucial. And working in an all-remote setup may mean more distractions for some folks, not fewer. We're emphasizing empathy and constantly listening to learn what these distractions are for the team, and we've landed on the need to over-communicate.


CISOs: Quantifying cybersecurity for the board of directors

CISOs must reconsider their communication approach and perspective prior to a board and/or C-Suite discussion. It’s crucial that they report cyber-risk in a language that the board and the rest of the C-Suite can comprehend. It can be quite frustrating to explain advanced malware or technical controls to an audience who is not savvy about the technical details of cybersecurity. From a board member’s perspective, cyber-risk posture is viewed as a set of risk items with corresponding business impact and associated expense. The board wants to know where the enterprise is on the cyber risk spectrum, where it should be, and, if there’s a gap, how it’s going to close it. CISOs should focus on shifting the conversation from cybersecurity to cyber risk and provide concise, quantitative responses to the board’s questions without the use of overly technical terms or concepts. ... A CISO’s plan needs to be converted into an easily digestible, high-level list of small steps or initiatives, each with corresponding time frames, required resources and a dollar cost. Furthermore, given that the board will expect the CISO to drive and execute a plan, he or she must quantify all the responsible constituents involved.


AI startup: We've removed humans from business negotiations

Pactum's AI-based negotiation tool starts the process by interviewing the customer, recording all the required information surrounding the negotiation, and determining the value for each possible tradeoff in the contract for the customer. Pactum's team then builds the negotiation flows. When conducting the chat-based negotiation, the system gets to know the partner or supplier. "Besides the best-practice negotiation strategies, the system uses what it learned and all the available information to strike a win-win deal," explains Korjus, adding that although the system can operate in a fully autonomous mode, it can also be configured to loop in a human, depending on the customer's needs. By improving the way that suppliers are managed without human involvement, companies should see financial benefits, he argues: "Fortune Global 2000 companies have immense long tails of suppliers that go unmanaged because there are so many of them." The idea for an AI-based business negotiation tool was conceived by Pactum's second co-founder, Martin Rand.


One Size Doesn’t Fit All for AI Regulation

As we accelerate into this brave new world, it is essential that the leaders of the various regulatory agencies have a strong conceptual understanding of both the AI methods and the underlying ethical and societal implications of these emerging use-cases. Deep industry expertise will be required among regulators as they collaborate with companies and citizens to shape this inevitable future. The most fruitful approach toward AI regulation requires industry and federal government working groups to collaborate on use-case specific regulation. However, broader, international mandates will likely be less effective at this early juncture. Every country and nearly every industry is thinking about its AI use-cases strategically, and increasingly from a geopolitical perspective. We are entering a period in which the commanding heights of geopolitics will not be defined by nuclear proliferation, but AI proliferation. In these uncertain times, businesses are impacted by unprecedented, exogenous forces including the current COVID-19 global pandemic. It’s particularly in these moments that society needs technological innovation to progress forward.


To Microservices and Back Again - Why Segment Went Back to a Monolith

Noonan pointed out the limitations of a one-size-fits-all approach to their microservices. Because there was so much effort required just to add new services, the implementations were not customized. One auto-scaling rule was applied to all services, despite each having vastly different load and CPU resource needs. Also, a proper solution for true fault isolation would have been one microservice per queue per customer, but that would have required over 10,000 microservices. The decision in 2017 to move back to a monolith considered all the trade-offs, including being comfortable with losing the benefits of microservices. The resulting architecture, named Centrifuge, is able to handle billions of messages per day sent to dozens of public APIs. There is now a single code repository, and all destination workers use the same version of the shared library. The larger worker is better able to handle spikes in load. Adding new destinations no longer adds operational overhead, and deployments only take minutes. Most important for the business, they were able to start building new products again.



Quote for the day:


"The world is moved not only by the mighty shoves of the heroes, but also by the aggregate of the tiny pushes of each honest worker." -- Frank C. Ross


Daily Tech Digest - April 27, 2020

Has ‘digital transformation’ become a meaningless buzzword?

It isn’t easy to discuss digital transformation as a concept without taking the current coronavirus pandemic into account, and some companies may say that the process has been undertaken just to continue operating in the current climate. However, the process could be upset by a lack of focus or thought towards long-term objectives when the term ‘digital transformation’ is brought up, which may have contributed to its buzzword status. ... “Terms like digital transformation shouldn’t just be ‘terms’ – they should map out how a business can move from physical everything to digital everything, where it makes sense, from strategy to implementation to long-term goals. This is particularly vital in unprecedented times like these. Those who weather the storm the best will be those who can adapt to remote working and dynamic supply and demand planning, both of which need a digital presence. It means carrying on, quickly, with every employee working from home, and also having a cloud copy of your physical operations, or a digital twin, to make sensible decisions remotely in an ever-changing situation.”



The Two Worlds of Employment in the Age of Automation

The long and the short of it is that for developers, sysadmins, SREs and all the other people working in information technology, the recent economic downturn is but a blip in the daily news feed. For those manning a cash register, taking a ticket at the local cineplex, cutting hair or driving for a rideshare service, it’s a life-altering event. Or, to put it in terms of Lang’s film, those of us who live in the clouds are doing well, and for those on the ground, it’s a different story. Now, consider this: What if all the people who are presently laid off from their jobs find out there are no jobs to go back to? What if the economic recovery is slower than anticipated or society gets so accustomed to doing without that consumption doesn’t resume? Is such a scenario possible? Yes. Is it probable? Dunno. Will those of us in IT who work remotely all over the planet, from the comfort of our wired office, suffer? I doubt it. But, as for the rest, what then? ... Those of us in DevOps have made valuable contributions to the world. We’ve done dramatically more good than harm. We’ve also been well compensated.



Connect people across the entire organization through communities in Microsoft Teams

With a global health crisis compelling so many of us to work remotely, it’s more important than ever for leaders and communicators to connect people across teams and organizations. Last November at Ignite, we unveiled the new Yammer, with a beautiful new design that powers community, knowledge-sharing, and employee engagement. The new Yammer includes a fully interactive Yammer app called “Communities” that brings your communities and conversations directly into Microsoft Teams. Put simply, it’s Yammer—in Teams. Starting today, this app is available in the Microsoft app store. Here, I’ll go over how your team can use it for company-wide communication, knowledge-sharing, and employee engagement, as well as how to install it and where to find it. By offering the full Yammer experience right inside Teams, we want to help you keep everyone at your organization engaged, informed, and moving forward. Let’s get into it. ... Leaders can use live events in Yammer to broadcast company-wide, town hall–style meetings with video, interactive conversation, and Q&A sessions to share vision, drive culture, and engage employees.


UTPP - Another Unit Test Framework

UnitTest++ was based on these requirements and fulfills most of them. However, I found a problem: the implementation is not very tight with WAY too many objects and unfinished methods for my taste. Instead of choosing another framework, I decided to re-implement UnitTest++ and that's how UTPP (Unit Test Plus Plus) came into existence. It borrows the API from UnitTest++ but the implementation is all new. ... When performing a test, you need certain objects and values to be in a known state before the beginning of the test. This is called a fixture. In UTPP, any object with a default constructor can be used as a fixture. Your tests will be derived from that object and the state of the object is defined by the fixture constructor. ... Although there is no shortage of unit test frameworks, if you spend a bit of time with UTPP, you might begin to like it.


Are you asking enough from your design leaders?


Not that we’re saying design leadership should usurp the chief strategist’s role—only that design has a unique role to play in strategy. Lyft’s most recent app redesign, for example, introduced more than a few new tabs: it contributed to the company’s new strategic direction. The app had previously highlighted car rides. However, the company learned that its riders were interested in multiple forms of transportation. The redesign brought new options such as choosing a bus route, grabbing a scooter, or even renting a car into an equal view under the same app. The user and market insights gleaned through this redesign process helped fuel Lyft’s strategic shift from a provider of rides to a portal enabling people to move through cities in multimodal fashion. Design was not the only party contributing to this strategic shift, but as Katie M. Dill, vice president of design at Lyft, makes clear, “It’s not design versus the business, it’s about what we can do together.”



Multi-Vendor Infrastructures Are Easier Than Ever to Manage

Most infrastructure companies focus on just a few aspects of an average enterprise infrastructure. Thus, these vendors have found that they're better off cooperating with one another to streamline and eliminate the challenges their customers might encounter when managing a multi-vendor environment. Technology partnerships between infrastructure vendors are now more common than ever before. These partnerships provide cross-vendor interoperability information, best-practice implementation guides and other resources that administrators would find useful when working to integrate multi-vendor equipment into the overall IT infrastructure. This also includes improved cooperation when troubleshooting problems that require support from two or more vendors. Infrastructure companies have finally realized that "passing the buck" when troubleshooting in a multi-vendor environment is highly detrimental to their ongoing success. The management and control of infrastructure components also used to be siloed: network vendors had their own management platforms, as did server, OS and other infrastructure vendors.



AI Explainability: making the complex comprehensible


Achieving AI Explainability requires understanding and insights aligned to both the socio-economic and scientific-technical dimensions. Societies will probably progressively trust AI algorithms as their use becomes more widespread and as legal frameworks refine the allocation of liabilities. Of course, cultural differences greatly affect how countries and regulatory regions approach AI. In countries such as China, regulation is lax and the political system seemingly places little importance on the freedom of individuals; for example, China is implementing a social credit system, based on algorithms, which aims to provide a standardized assessment of the trustworthiness of its citizens. This context makes Ethical AI and Explainable AI, as we see it in Europe, less applicable. In the US, while the rights of individuals are more important, regulation is also lax, so the workability and benefits of AI solutions represent greater value than their explainability.


Ransomware gangs are changing targets again. That could make them even more of a threat


"Attackers are shifting to other industries, specifically finance, during this pandemic," Kellermann adds. And even if some ransomware gangs are shifting their targeting to avoid medical facilities as the world faces coronavirus, the healthcare sector doesn't operate in a bubble of its own. The supply chain requires manufacturers, logistics providers and more, which all provide products to hospitals – especially as companies switch tack and get involved in producing ventilators, protective personal equipment and other items that are in high demand right now. That could mean that even if ransomware attackers really are attempting to avoid hitting healthcare, so as not to disrupt the coronavirus fight, they could still do so inadvertently. "It's not just attacks on healthcare that could be problematic; there's device manufacturers, testing labs, logistics companies responsible for deliveries – and we've seen attacks on all of these in recent weeks," says Brett Callow, threat analyst at Emsisoft. It's also possible that ransomeware operations themselves will have to adapt their own processes and working behaviours to coronavirus, just like legitimate businesses.


The Post Pandemic Organization for the Future of Work
The disruption we are facing today is as profound as it is pervasive. Yet I deeply believe it also offers an increasingly fertile and robust landscape into which we can drive meaningful and sustained change for good. Our timing must be careful and the thinking behind it — combined with effective action at scale — both crisp and clear, albeit real challenges in our fast-changing times. There’s also no denying that how we’ve worked before is simply gone. Something much better must replace the unwieldy situation many of us are in: weeks-long slogs through endless video calls, tiring teleconferences at all hours, with our team chat windows scrolling mindlessly past our gaze. We can and must now create a much better design for our current working realities. Whether you focus on remote work, more quarantine-friendly physical facilities, or a comprehensive rethink of the modern enterprise for being near 100% digital, we will have to go as deep as the core ideas that underpin work itself.


Moneyball Medicine: Data-Driven Healthcare Transformation

The COVID-19 outbreak offers some valuable lessons by presenting a potential playbook for the next time a pandemic threatens the U.S. and the world. The data that we are capturing now, from how many ventilators were needed at a hospital at the peak, to the true impact of social distancing on mortality, can be used to help scientists develop more accurate models in the future. ... Yet challenges to data and analytic adoption remain. Glorikian observes that “Only recently have patients been able to access their medical records through online patient portals. Physicians remain hesitant to rely on analytic models and AI when they are perceived to be black boxes.” Hospitals can be expected to standardize data definitions so regulators such as the CDC can access data more rapidly for monitoring public health emergencies. Epic’s Faulkner notes in the Becker Health article, “If people define the data differently, then you can’t aggregate it. And just collecting the data when it isn’t standardized doesn’t get you very far”. The classic data preparation challenge.




Quote for the day:


"In the end, we will remember not the words of our enemies, but the silence of our friends." -- Martin Luther King Jr.


Daily Tech Digest - April 26, 2020

Can computers become conscious?


AI-hard problems are hypothesized to include general computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. As it stands, AI-hard problems cannot be solved with current computer technology alone. They still require human intervention, and probably always will. Following this trend, AI will not become self-aware. So, the doomsday conspiracy theorists are wrong. AI will not become the dominant form of intelligence on Earth, with computers and robots taking over the world. Still, there’s nothing wrong with taking a few precautionary measures to ensure that future superintelligent machines remain under human control. However, I don’t think a robot uprising is possible. Nonetheless, there are those who believe that machines have minds or soon will. This is why scientists have developed a number of experiments to test AI, to find out what the limits of artificial intelligence are.



Custom Response Caching Using NCache in ASP.NET Core

Response caching enables you to cache the server responses of a request so that subsequent requests can be served from the cache. It is a type of caching in which you specify cache-related headers in HTTP responses so that clients know they may cache responses. You can take advantage of the cache-control header to set browser caching policies in requests that originate from the clients as well as responses that come from the server. As an example, cache-control: max-age=90 implies that the server response is valid for a period of 90 seconds. Once this time period elapses, the web browser should request a new version of the data. The key benefits of response caching include reduced latency and network traffic, improved responsiveness, and hence improved performance. Proper usage of response caching can lower bandwidth requirements and improve the application’s performance. You can take advantage of response caching to cache items that are static and have minimal chance of being modified, such as CSS and JavaScript files.


It's a great time to tackle core IT upgrades


There are hundreds of thousands of security patches out there, but Vulcan will tell you that a few of the important ones will eliminate many related security issues. A little work now goes a long way -- if you know what to do. With consumers and business buyers stuck at home, it is the e-commerce side of a business that is super important. During the important fourth-quarter holiday sales season, companies won't risk making any changes to their e-commerce systems. Now things are reversed, it is the main IT systems that can be upgraded and patched with less risk of downtime problems. But don't mess with the e-commerce systems. Vulcan's platform is designed to scale and to interface with all the standard IT tools. It makes heavy use of machine learning and also human intelligence -- IT experts that can analyze new security threats and solutions. And sometimes a patch isn't needed and a simple workaround will eliminate dozens of related issues, says Bar-Dayan. Vulcan's reports identify the top vulnerabilities and the detailed remediation steps necessary. It is a huge time-saver for cybersecurity teams.


Shadow Broker leaked NSA files point to unknown APT group


Juan Guerrero-Saade, a security researcher and adjunct professor at Johns Hopkins University’s School of Advanced International Studies, wasn’t convinced, arguing that misleading files make their way onto VirusTotal all the time. He realised that the file in question was a 15MB memory dump of a McAfee installer. In short, it’s a red herring. Investigating godown.dll further, he found that the file was a drop from a larger multi-stage infection framework. The tools and techniques that the framework used indicated a unique cluster of activity. It pointed to an advanced persistent threat group that wasn’t publicly known until now. Although it’s difficult to directly attribute the attack to a specific actor, Guerrero-Saade noted that some of the resources in the files mention Farsi (Persian), which is native to countries including Iran. The name used in the root debug path, c:/khzer, apparently means ‘to survey or monitor’ according to friends of his who are acquainted with the language, and so he decided to call the attack group Nazar, after the heart-shaped amulet supposed to protect people against the evil eye in many countries across the Middle East.


The true costs incurred by businesses for technology downtime

The research, conducted by Vanson Bourne, which surveyed 1,000 senior IT decision-makers and 2,000 end users at organizations with at least 1,500 employees across the U.S., the U.K., France, and Germany, shows that employees are losing an average of 28 minutes every time they have an IT-related problem. The report also shows that IT decision makers believe employees are experiencing approximately two IT issues per week, wasting nearly 50 hours a year. However, as only just over half of IT issues are being reported, the numbers are more likely to be nearly double that – close to 100 hours (two work weeks) a year. This has led to a vicious cycle of employees trying to fix IT problems on their own, leading to less engagement with the IT department, which doesn’t have visibility into how the technology is being consumed. There exists a major disconnect between IT departments and employees, with 84% of employees believing that their organizations should be doing more to improve the digital experience at work. However, a staggering 90% of IT leaders believe that workers are satisfied with technology in the workplace, highlighting the discrepancy between perception and reality of the digital employee experience.


Judges and lawyers learn Zoom rules in real time during coronavirus crisis


Ines Swaney, a certified Spanish interpreter, said her first experience with Zoom was a three-way conversation during a legal visit between an attorney in one city, the attorney's incarcerated client joining the conversation from jail in another city, and herself serving as an interpreter in a third city. One drawback with the Zoom platform is that it forces an interpreter to use consecutive interpreting instead of simultaneous interpreting, which is the preferred approach. Swaney said that online platforms also need to allow private conversations between an attorney and the judge, and among an attorney, client and interpreter who may need to speak privately for a brief period of time during a hearing. Tony Sirna, legal strategist and customer success manager at Verbit, said there are serious considerations the courts are working through, particularly ensuring due process with remote proceedings, technology interruptions, unauthorized recordings, exhibits, and the impact virtual appearances will have on defendants, for example.  Sirna said in addition to standardizing software and recording technology, courts need to agree on procedural best practices, such as how exhibits and stipulations will be handled remotely.


Text ‘bomb’ crashes iPhones, iPads, Macs and Apple Watches – what you need to know

The problem appears to exist in how the latest shipping versions of Apple’s operating system handle a Unicode symbol representing specific characters written in Sindhi, an official language in part of Pakistan. The problem occurs most irritatingly when your device attempts to display a message notification. If you have configured your iPhone, for instance, to display a new message notification which includes a preview of the message, then iOS fails to properly render the characters and crashes with unpredictable results. You may find the only way to get around the problem is to completely reboot your device – but there is always the risk that you will receive a new booby-trapped notification. The problem can also manifest itself inside apps. For instance, some mischievous Twitter users have tweeted the offending characters, causing other users to have their devices crash. Android users, meanwhile, are unaffected – and can watch the chaos with bemusement. Some of the earliest reports suggested that for the attack to work the Sindhi characters had to be used in conjunction with an Italian flag emoji.


What Is Agile Enterprise Architecture? Just Enough, Just in Time

Agile is based on the concept of “just in time.” You can see this in many of the agile practices, especially in DevOps. User stories are created when they are needed and not before, and releases happen when there is appropriate value in releasing, not before and not after. Additionally, each iteration has a commitment that is met on time by the EA team. EA is missing the answer to the question of “what exactly is getting delivered?” This is where we introduce the phrase “just enough, just in time” because stakeholders don’t just simply want it in time, they also want just enough of it — regardless of what it is. This is especially important when communicating with non-EA professionals. In the past, enterprise architects have focused on delivering all of the EA assets to stakeholders and demonstrating the technical wizardry required to build the actual architecture. ... Create a marketing-style campaign to focus on EA initiatives, gathering and describing only what is required to satisfy the goal of the campaign.


Safe shopping: Your best options for NFC and contactless payments


Near-Field Communication, or NFC, is a technology built into many modern families of mobile devices, such as the iPhone, the Samsung Galaxy, the Google Pixel, and many other Android smartphones. NFC, introduced in 2002, allows contactless data transfer between mobile devices and can emulate a credit card for payments at POS terminals in retail stores. NFC lets the user pass their smartphone over a payment terminal at a retailer in order to complete the purchase, provided that a supported "e-Wallet" platform is used. Keep in mind, however, that NFC still requires you to get relatively close to the payment terminal and the person running it, and may even require you to physically interact with a keypad or virtual keypad/screen to initiate a transaction -- so wear gloves or have the employee initiate the transaction on your behalf, and if you have to touch the terminal, do not touch your face, and wash your hands immediately afterward. Be sure to maintain a safe distance when using it, or shop where there is a plexiglass barrier between you and the retail employee.


Go as a Scripting Language

Go's growing adoption as a programming language for creating high-performance networked and concurrent systems has been fueling developer interest in its use as a scripting language. While Go is not currently ready "out of the box" to be used as a replacement for Bash or Python, this can be done with a little effort. As Codelang's Elton Minetto explained, Go has considerable appeal as a scripting language, including its power and simplicity, support for goroutines, and more. Google software engineer Eyal Posener adds more reasons to adopt Go as a scripting language, such as the availability of a rich set of libraries and the language's terseness, which makes maintenance easier. ... Being able to use the same language for day-to-day tasks and less frequent scripting tasks would greatly improve efficiency. Go is also a strongly typed language, notes Cloudflare engineer Ignat Korchagin, which can help make Go scripts more reliable and less prone to runtime failures caused by trivial errors such as typos.
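As a concrete illustration (our own sketch, not from the cited posts), here is Go used as a throwaway script: a word-count utility executed directly with `go run wordcount.go < input.txt`, no separate build step:

```go
// wordcount.go — a shell-style one-liner replacement, run with:
//   go run wordcount.go < input.txt
package main

import (
	"fmt"
	"io"
	"os"
	"strings"
)

// wordCount tallies whitespace-separated words in text.
func wordCount(text string) map[string]int {
	counts := make(map[string]int)
	for _, w := range strings.Fields(text) {
		counts[w]++
	}
	return counts
}

func main() {
	// Read everything from stdin, like a classic filter script.
	data, err := io.ReadAll(os.Stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for w, n := range wordCount(string(data)) {
		fmt.Printf("%6d %s\n", n, w)
	}
}
```

Because the compiler type-checks the whole file before anything runs, a typo in a variable name fails at `go run` time rather than halfway through the job, which is the reliability point Korchagin makes.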



Quote for the day:


"A leader is one who sees more than others see and who sees farther than others see and who sees before others see." -- LeRoy Eims


Daily Tech Digest - April 25, 2020

A pharming attack tries to redirect a website's traffic to a fake website controlled by the attacker, usually for the purpose of collecting sensitive information from victims or installing malware on their machines. Attackers tend to focus on creating look-alike ecommerce and digital banking websites to harvest credentials and payment card information. These attacks manipulate information on the victim’s machine or compromise the DNS server and reroute traffic, the latter of which is much harder for users to defend against. Though they share similar goals, pharming uses a different method from phishing. “Pharming attacks are focused on manipulating a system, rather than tricking individuals into going to a dangerous website,” explains David Emm, principal security researcher at Kaspersky. “When either a phishing or pharming attack is completed by a criminal, they have the same driving factor to get victims onto a corrupt location, but the mechanisms in which this is undertaken are different.” Pharming attacks involve redirecting user requests by manipulating the Domain Name System (DNS) protocol and rerouting the target from its intended IP address to one controlled by the hacker. This can be done in two ways.
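The first route, local manipulation of the victim's machine, often means editing the hosts file so that a bank's domain resolves to an attacker-chosen address. A hedged Go sketch of scanning hosts-file entries for such overrides (the hosts-file format is real; the domains and addresses below are invented):

```go
// hostscheck.go — illustrative only: flag hosts-file lines that remap
// sensitive domains, one of the local changes a pharming attack makes.
package main

import (
	"fmt"
	"strings"
)

// suspiciousEntries returns hosts-file lines that map any watched
// domain to an explicit address, overriding normal DNS resolution.
func suspiciousEntries(hostsFile string, watched []string) []string {
	var hits []string
	for _, line := range strings.Split(hostsFile, "\n") {
		// Drop trailing comments, then split into "addr name name...".
		fields := strings.Fields(strings.SplitN(line, "#", 2)[0])
		if len(fields) < 2 {
			continue
		}
		for _, name := range fields[1:] {
			for _, d := range watched {
				if strings.EqualFold(name, d) {
					hits = append(hits, line)
				}
			}
		}
	}
	return hits
}

func main() {
	hosts := "127.0.0.1 localhost\n203.0.113.9 mybank.example # injected"
	fmt.Println(suspiciousEntries(hosts, []string{"mybank.example"}))
}
```

The second route, a compromised DNS server, cannot be caught this way, which is why the article calls it much harder for users to defend against.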


Technology, Financial Inclusion, and Banking in Frontier Markets

The lack of local knowledge of emerging and frontier markets can make it exceptionally difficult to serve those with limited infrastructure in the right way. A strong understanding of local financial processes and more complex environments is vital to providing financial services in hard-to-reach territories. It also helps to build trust and relationships with key organizations in the region. The relationship becomes mutually reinforcing when financial inclusion increases and we get more data on people within the market. As consumer behaviors and markets are better understood, more players are willing to serve them and we are able to reach more people with financial services. When the two complement each other well, we can make a real difference in improving access to these services.


Software Testing: The Comeback Kid of the 2020s

Ultimately, developers don’t have the time or desire to keep these tests current over the long term. Unit testing has been a best practice for more than 20 years, yet despite waves of unit test automation tools (including one created by Alberto Savoia not long before he declared testing dead), unit testing remains a thorn in developers’ sides. Does that mean we give up the benefits of unit testing altogether? Not necessarily. In order to take on unit testing per se, testers would need to understand the developers’ code as well as write their own code. That’s not going to happen. But, you could have testers compensate for lost unit test coverage through resilient tests they can create and control. Professional testers recognize that designing and maintaining tests is their primary job and that they are ultimately evaluated by the success and effectiveness of the test suite. Let’s be honest, who’s more likely to keep tests current, the developers who are pressured to deliver more code faster, or the testers who are rewarded for finding major issues (or blamed for overlooking them)?


Checking AI bias is a job for the humans

Machine learning models are only as smart as the datasets that feed them, and those datasets are limited by the people shaping them. This could lead, as one Guardian editorial laments, to machines making our same mistakes, just more quickly: “The promise of AI is that it will imbue machines with the ability to spot patterns from data, and make decisions faster and better than humans do. What happens if they make worse decisions faster?” Complicating matters further, our own errors and biases are, in turn, shaped by machine learning models. As Manjunath Bhat has written, “People consume facts in the form of data. However, data can be mutated, transformed, and altered—all in the name of making it easy to consume. We have no option but to live within the confines of a highly contextualized view of the world.” We’re not seeing data clearly, in other words. Our biases shape the models we feed into machine learning models that, in turn, shape the data available for us to consume and interpret.


Starbleed vulnerability: Attackers can gain control over FPGAs

Attackers can gain complete control over the chips and their functionalities via the vulnerability. Since the bug is integrated into the hardware, the security risk can only be removed by replacing the chips. The manufacturer of the FPGAs has been informed by the researchers and has already reacted. FPGA chips can be found in many safety-critical applications, from cloud data centers and mobile phone base stations to encrypted USB-sticks and industrial control systems. Their decisive advantage lies in their reprogrammability compared to conventional hardware chips with their fixed functionalities. This reprogrammability is possible because the basic components of FPGAs and their interconnections can be freely programmed. In contrast, conventional computer chips are hard-wired and, therefore, dedicated to a single purpose. The linchpin of FPGAs is the bitstream, a file that is used to program the FPGA. In order to protect it adequately against attacks, the bitstream is secured by encryption methods. Dr. Amir Moradi and Maik Ender from Horst Görtz Institute, in cooperation with Professor Christof Paar from the Max Planck Institute in Bochum, Germany, succeeded in decrypting this protected bitstream, gaining access to the file content and modifying it.


How To Secure 5G — And The Internet Of Things Too

“From a cybersecurity standpoint, things haven’t really changed that much,” he said, “so, the challenges remain the same.” As he told PYMNTS, the key challenge is to make sure that the systems and devices are better than reasonably secure before they go on the 5G network in the first place. That challenge is intensifying as 4G gets ready to give way to 5G. Adding devices boosts vulnerability, he said. Each one of those devices represents a possible point of attack for hackers and fraudsters. There are hundreds of millions of devices now that can, conceivably, be compromised, in some way — and there will be billions of devices in the future. The challenges of cybersecurity, he said, are the same whether from the standpoint of a manufacturer building an Internet of Things (IoT) device or from a healthcare company that is building devices that will be used by providers or a telecom company building network equipment. “The key question,” Knudsen said, “is how do you build that system or device in a way that minimizes risk?”


Blockchain Revolutionizing Banking and Financial Markets


If a change is to be made in a particular block, it is not rewritten. Instead, a new block is created which contains the cryptographic hash of the previous block, the amended data, and the timestamp. Hence, it is a non-destructive way to track data changes over time. In addition, Blockchain is distributed over a large network of computers and is decentralized, which reduces the tampering of data. Now, before a block is added to the Blockchain, each person maintaining a ledger has to solve a special kind of math problem created by a cryptographic hash function. Whoever solves the hash first gets to add the block to the Blockchain. Blockchains can also be private, public and even hybrid private-public. Hence, Blockchain can literally revolutionize the way we access, verify and transact our data with one another. ... Blockchain has come up with an effective peer-to-peer solution for lenders and borrowers without any involvement of third parties. A Spanish bank offered the first crypto-loan service in 2018. These loans are fast (taking less than 48 hours), have much cheaper operational costs, and are more secure and transparent.
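The append-only hash chain described above can be sketched in a few lines of Go (a toy illustration, not a production ledger):

```go
// chain.go — toy hash chain: each block stores the previous block's
// hash, its data, and a timestamp, so history is amended by appending
// new blocks rather than rewriting old ones.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

type Block struct {
	PrevHash  string
	Data      string
	Timestamp int64
}

// Hash binds the block's contents (including PrevHash) together, so
// changing any earlier block invalidates every later hash.
func (b Block) Hash() string {
	sum := sha256.Sum256([]byte(fmt.Sprintf("%s|%s|%d", b.PrevHash, b.Data, b.Timestamp)))
	return hex.EncodeToString(sum[:])
}

func main() {
	genesis := Block{Data: "genesis", Timestamp: 1}
	next := Block{PrevHash: genesis.Hash(), Data: "balance: 100 -> 90", Timestamp: 2}
	fmt.Println(next.Hash())

	// Tampering with the first block changes its hash, so next.PrevHash
	// no longer matches — the alteration is detectable.
	tampered := genesis
	tampered.Data = "genesis (edited)"
	fmt.Println(next.PrevHash == tampered.Hash()) // prints false
}
```

The "special math problem" miners solve (proof of work) is omitted here; this shows only the linkage that makes the ledger tamper-evident.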


How to become a data scientist without getting a Ph.D

There are data scientists at every company. Instituting a mentor program, for example, combined with a continuous learning curriculum can greatly improve data fluency across an organization. And this is no longer an option — it's an imperative. Data is king in business. Data science is a means by which you can use data to make business decisions. Without the basic data science skills, employees can't make these important decisions.  As your team becomes more comfortable with the language of data, they'll be more comfortable bringing data to bear on important business decisions. It will become clear that some team members are more comfortable using data skills than others are. Encourage the proficient ones to mentor others. Even at DataCamp, where data science is our business, some people don't work with data continuously. When they need help on a complex problem, they pair up with those who do.  It's all about shared tools, skills and responsibilities — they can dramatically improve communication and understanding between employees, which ultimately improves workplace culture.


Multi-Cloud Cost Optimization For The Enterprise

How much will the public cloud cost? You should begin your cloud cost management strategy by looking at the public cloud providers’ billing models—just like any other IT service, the public cloud can introduce unexpected charges. How much storage, CPU and memory do your applications require currently? Which cloud instances would meet those requirements? Then, it’s a question of estimating how much those applications would cost in the cloud and comparing these figures to how much it currently costs you to run them on-premises. If you plan to use multiple public cloud providers, integration and other factors can lead to unexpected fees—try to plan application deployments to see where you might be liable for extra costs. Initially, it seems that most vendors offer similar packages and prices—when you examine them in detail, however, perhaps one vendor has a much lower price for certain types of workloads. Understand your business requirements before committing to a cloud vendor, and avoid vendor lock-in.
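The estimate-then-compare step can be a simple back-of-the-envelope calculation. A sketch with entirely made-up unit prices (real rates vary by provider, region, and instance type):

```go
// costcompare.go — price the same workload against two hypothetical
// vendors before committing. All rates below are placeholders.
package main

import "fmt"

// monthlyCost estimates a workload's bill from per-vCPU-hour,
// per-GiB-RAM-hour and per-GiB-month storage rates.
func monthlyCost(vCPUs, ramGiB, storageGiB, cpuHr, ramHr, storageMo float64) float64 {
	const hoursPerMonth = 730 // common billing approximation
	return hoursPerMonth*(vCPUs*cpuHr+ramGiB*ramHr) + storageGiB*storageMo
}

func main() {
	// Same 8-vCPU / 32 GiB / 500 GiB workload, two invented price lists.
	a := monthlyCost(8, 32, 500, 0.020, 0.0025, 0.10)
	b := monthlyCost(8, 32, 500, 0.023, 0.0020, 0.08)
	fmt.Printf("vendor A: $%.2f/month\n", a)
	fmt.Printf("vendor B: $%.2f/month\n", b)
}
```

Running the same function over each vendor's rate card makes the "one vendor is much cheaper for certain workloads" effect visible before any migration work starts.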


5 ways to empower remote development teams


QA teams that have never tested remotely must surmount technical, process-oriented and cultural challenges. Issues include how to collaborate virtually, procure off-site resources and manage asynchronous work schedules. Adjustments to workplace culture can help just as much as -- if not more than -- new tools. Follow these best practices for remote QA work from Gerie Owen, an experienced test manager. For example, communicate more frequently with team members, with more detail and context than usual. Owen also offers advice for organizations that lack sufficient network capacity for remote QA resources. ... Many enterprises must make distributed Agile development work. Read how to manage distributed Agile development and its various challenges, as detailed by software architect and technical advisor Joydip Kanjilal. He outlines, for example, what practices a remote development team can adopt to fulfill the values and principles of Agile. To improve camaraderie, a team might host regular video conferences.



Quote for the day:


"There is no 'one' way to be a perfect leader, but there are a million ways to be a good one." -- Mark W. Boyer


Daily Tech Digest - April 24, 2020

Data: The Fabric of Developers’ Lives

Storage-as-a-Service—we hardly knew about it. Thanks in large part to containers, which offer exceptional scalability, simplicity and high availability, the speed of application development has increased dramatically. Developers need to be able to quickly provision their own data, in just the right amounts, to match that velocity. And, like containers, that data needs to be portable. Provisioning quickly means no more going through storage administrators to get the services they need, which can be a cumbersome and time-consuming process. Solutions like Kubernetes’ on-demand clusters enable developers to procure the data they need when they need it. The abstraction layer provided by a data fabric can empower developers even further. They can write their own APIs, provision data services as needed and move that data between clouds with ease. This is particularly important when dealing with cloud providers that offer different services. Sometimes a developer may need a service that exists in one cloud but not another. It’s critical to have an underlying storage infrastructure that enables applications and their data to be transferred as needs require.


Remember when open source was fun?

When Daniel Stenberg set out to make currency exchange rates available to IRC users, he wasn’t trying to “do open source.” It was 1996 and the term “open source” hadn’t even been coined yet (that came in February 1998). No, he just wanted to build a little utility (“how hard can it be?”), so he started from an existing tool (httpget), made some adjustments, and released what would eventually become known as cURL, a way to transfer data using a variety of protocols. It wasn’t Stenberg’s full-time job, or even his part-time job. “It was completely a side thing,” he says in an interview. “I did it for fun.” Stenberg’s side project has lasted for over 20 years, attracted hundreds of contributors, and has a billion users. Yes, billion with a B. Some of those users contact him with urgent requests to fix this or that bug. Their bosses are angry and they need help RIGHT NOW. “They are getting paid to use my stuff that I do at home without getting paid,” Stenberg notes. Is he annoyed? No. “I do it because it’s fun, right? So I’ve always enjoyed it. And that’s why I still do it.”


Dark data to produce 5.8 million tonnes of CO2 this year, says Veritas

New research by the data protection and management software supplier has found 5.8 million tonnes of carbon dioxide will be pumped into the atmosphere this year resulting from the use of storage systems to house and process dark data. Veritas derived the figure by mapping industry data on power consumption from data storage, industry data on emissions from datacentres and its own research. On average, 52% of all data stored by organisations worldwide is likely to be dark data, according to Veritas. With the amount of data growing from 33 zettabytes in 2018 to 175 zettabytes by 2025, there will be 91 zettabytes of dark data in five years’ time – over four times the volume of dark data today. Ravi Rajendran, vice-president and managing director for the Asia South region at Veritas Technologies, said that although companies are trying to reduce their carbon footprint, dark data is often neglected. And with dark data producing more carbon dioxide than 80 countries do individually, Rajendran called for organisations to start taking it seriously. 


How different generations approach remote work

Maybe it's more millennials that are really pushing the work from home, but you would think it would be more of your generation. I say that I'm Gen X. Veronica and I both are, of course. But, you would think that it'd be the younger ones that would be all for working from home, to have that freedom. ... When I'm in an office, as you both know, I tend to be a bit of a chatterbox, so it's good for me to have that alone time to really lock things down. But it's different for people. But, Veronica, you and I would be able to speak on this for Gen X, at least, in the research that I saw, NRG found that most Gen X-ers enjoyed working from home because they were really comfortable, and they liked that independence. And they also liked being around their families, and having that quality time, and felt a little more relaxed. Would you say that's accurate? ... You can get up and take a break whenever, and reset your brain to shift tasks, or to find inspiration if you're stuck on something. I think if you can close the door or close your family off, it's OK. My kids are older now, but if they were little, it would be so hard to work from home now. I have an 11-year-old and a 15-year-old, so they can make their own lunch, and walk the dog, and be self-sufficient while I'm down here.



Netgear sees surge in home WiFi upgrades as COVID-19 upends supply chain

Netgear is ahead of the game with its WiFi 6 router portfolio and it is paying off as the company is seeing a surge in home network upgrades. The catch for Netgear is that its supply chain, sales channels and markets have all been upended by the COVID-19 pandemic. CEO Patrick Lo outlined the moving parts of Netgear's first quarter. We saw two distinct phenomena during the Covid-19 pandemic. Whenever a shelter in place lockdown was declared, business activities fell and demand for our SMB products dropped significantly. At the same time, consumers are quickly finding out that high performance WiFi at home is a necessity and are rushing to upgrade their home WiFi, driving upticks in our consumer WiFi and mobile hotspot sales. We also saw significant channel shift from physical retail channel purchases to online purchases which put strain on the logistics of some of our online sales partners. On an earnings conference call, it became clear that Netgear had a lot to navigate as it pulled its guidance due to COVID-19. The company reported a first quarter net loss of $4.17 million on revenue of $229.96 million, down from $249 million a year ago. On a non-GAAP basis, Netgear's earnings of 21 cents a share were a nickel better than estimates.


Researchers say deep learning will power 5G and 6G ‘cognitive radios’


For decades, amateur two-way radio operators have communicated across entire continents by choosing the right radio frequency at the right time of day, a luxury made possible by having relatively few users and devices sharing the airwaves. But as cellular radios multiply in both phones and Internet of Things devices, finding interference-free frequencies is becoming more difficult, so researchers are planning to use deep learning to create cognitive radios that instantly adjust their radio frequencies to achieve optimal performance. As explained by researchers with Northeastern University’s Institute for the Wireless Internet of Things, the increasing varieties and densities of cellular IoT devices are creating new challenges for wireless network optimization; a given swath of radio frequencies may be shared by a hundred small radios designed to operate in the same general area, each with individual signaling characteristics and variations in adjusting to changed conditions. The sheer number of devices reduces the efficacy of fixed mathematical models when predicting what spectrum fragments may be free at a given split second.


Outsourced DevOps brings benefits, and risks, to IT shops


When IT teams outsource DevOps planning to a third-party service provider, it only exacerbates existing planning issues. Another option is to hire a contract Scrum Master or product manager with DevOps experience to work with the in-house teams. Either way, proceed with an end game of knowledge transfer to build in-house planning expertise. Depending on the organization's attitude toward contractors, the addition of an outside contractor to work on planning can bring some cultural challenges. Some organizations treat contractors as valued members of the team, while others treat them as outsiders -- which makes it challenging to have a contractor in any subject matter expert position. Planning tools, however, are ripe for outsourcing. For example, if an organization lacks the in-house expertise to implement and maintain Atlassian Jira or another planning tool, it can outsource that platform and use a managed version. While it's more common to outsource the build phase of DevOps than it is the planning phase, it still has risks.


Tech Leaders Map Out Post-Pandemic Return to Workplace

Businesses will be turning to enterprise technology to smooth out the process of getting employees back to the workplace in the wake of the coronavirus pandemic, according to a report by Forrester Research. Technology leaders say safety will be a top priority. The information-technology research firm’s report lays out an early-stage road map for IT executives preparing to reopen corporate offices—a process that will vary by industry, but for most businesses will involve multiple stages. Chief information officers and their teams will likely be in the first wave of employees returning to the job site, said Andrew Hewitt, a Forrester analyst serving infrastructure and operations professionals. He said their initial task will be to develop a strategy for keeping employee tech tools—including PCs, mobile devices, monitors, keyboards and mice—germ-free without damaging them. “IT teams will need to have a staging area that’s outside of the front door of the office where employees can bring their home technology in and sanitize it,” Mr. Hewitt said.


Five Attributes of a Great DevOps Platform

Culture plays a significant role in establishing the guidelines while embracing DevOps in any organization. Through DevOps culture, companies seek to bring dev and ops teams into harmony to promote collaboration, automation, process improvements, and continuous iterative development and deployment methodologies. But above everything else, a sound DevOps culture fundamentally solves one of IT’s biggest people problems: bridging the gap between dev and ops teams to get them to stop working in silos and have common goals. According to a Gartner estimate, DevOps efforts fail 90% of the time when infrastructure and operations teams try to drive a DevOps initiative without nurturing a cultural shift first. It is not just about efficient tools or expert staff; it is about the behavioral modifications and mentality necessary to effect cultural change. Hence, it is important for firms to consider the company’s culture before selecting a potential DevOps tool for their development.


Use tokens for microservices authentication and authorization


STS enables clients to obtain the credentials they need to access multiple services that live across distributed environments. It issues digital security tokens that stay with users from the beginning of their session and continuously validate their permission for each service they call. An STS can also reissue, exchange and cancel security tokens as needed. The STS must connect with an enterprise user directory that contains all the details about user roles and responsibilities. This directory, and any connection made to it, should be properly secured as well, otherwise users could elevate their permissions just by editing policies on their own. Consider segmenting user access policies based on roles and activities. For instance, identify the individuals who have administrative capabilities. Or, you might limit a developer's access permissions to only include the services they are supposed to work on. ... Not all microservices permission and security checks are based around a human user.
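To illustrate a token that travels with the user and is validated on every call, here is a hedged Go sketch using an HMAC-signed payload (our toy format, not the article's STS; real deployments use standards such as OAuth 2.0 and JWT):

```go
// token.go — minimal signed token carrying a user's role, so each
// service can check permissions without re-querying the issuer.
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// sign produces "payload.signature" with HMAC-SHA256 over payload.
func sign(payload string, key []byte) string {
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(payload))
	return payload + "." + hex.EncodeToString(mac.Sum(nil))
}

// verify checks the signature and returns the payload if it is intact.
func verify(token string, key []byte) (string, bool) {
	i := strings.LastIndex(token, ".")
	if i < 0 {
		return "", false
	}
	payload := token[:i]
	expected := sign(payload, key)
	// hmac.Equal compares in constant time to resist timing attacks.
	if !hmac.Equal([]byte(expected), []byte(token)) {
		return "", false
	}
	return payload, true
}

func main() {
	key := []byte("shared-sts-secret") // in practice issued and rotated by the STS
	tok := sign("user=alice;role=developer", key)

	if payload, ok := verify(tok, key); ok {
		fmt.Println("accepted:", payload)
	}
	// A tampered token (role escalated to admin) fails verification.
	_, ok := verify(strings.Replace(tok, "developer", "admin", 1), key)
	fmt.Println("tampered accepted:", ok) // prints: tampered accepted: false
}
```

Because the signature covers the role claim, a developer cannot quietly promote themselves to admin by editing the token, which is the segmentation-by-role point made above.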



Quote for the day:


"I'm not crazy about reality, but it's still the only place to get a decent meal." -- Groucho Marx