Daily Tech Digest - December 13, 2019

State of enterprise machine learning in 2020: 7 key findings


"Machine learning has the ability to, in a lot of cases, reduce errors, which can help a company make more money and save money," Oppenheimer said. "Like in jobs where there's a lot of data entry or processing, where there might be a lot of humans involved, where it's error prone and it's slightly slow, machine learning can automate a lot of that and make it more precise. It liberates those humans who are doing basic data entry to do higher level tasks, which humans are better suited for." While medium to large companies, in particular, are primarily focused on cutting costs, small companies are more interested in improving the customer experience, the report found. Smaller companies are trying to retain customers and have steady business--a problem that larger companies may not have. When thinking about how to use machine learning, optimization is a huge use case, Oppenheimer said. ... Machine learning projects will still be in early stages at organizations in 2020: 21% of businesses said they would be evaluating use cases, and 20% identified themselves as early-stage adopters in machine learning production, the report found.



Experiences from Mob Programming at an Insurance Startup


Mob programming brings the team lots of feedback. Victoor said that being together the whole time helps a lot when making technical decisions. It also gives them a lot of courage to tackle complex issues and tough refactorings. Rouve mentioned that the main benefit of mob programming is continuous sharing and learning; plus, mob programming forces the team to be aligned on best practices and coding standards. "It daily improves our work by communicating more efficiently," he said. ... During a mob session, all ideas are discussed. This is really great for problem-solving. When you are alone and you need to solve a problem, you are biased. If you are a senior developer, you may think of a solution that you have applied in the past to a similar problem. This solution may not be the simplest, or the most efficient for the current problem. When mobbing, everyone can speak up and share ideas and concerns. This is really a great way to build simple designs, shared by every single member of the team.


Cisco targets hyperscalers with silicon, high-end routers

“Moore’s law is stalling," wrote Jonathan Davidson, senior vice president and general manager of Cisco Service Provider Networking, in a blog about Silicon One. "While the rest of the industry slows down from the physics of traditional approaches, we have unlocked new dimensions of innovation. By rethinking silicon design entirely, we can deliver industry-leading performance today and create a ‘fast lane’ to the future. “In the past, multiple types of silicon have been used across a network and even within a single device. Feature development was inconsistent. Telemetry varied dramatically. Operators had to spend too much time and effort coordinating and testing parity of new features across the network. Now, a single silicon architecture can serve different market segments, different functions, and various form factors for a unified experience that dramatically reduces costs of operations and time-to-value for new services.” Another component of Silicon One is that it will be available for white-box vendors or hyperscalers developing their own networking systems – one of the few times Cisco has been a merchant silicon vendor in its own right. Its chip technology is typically used just in its own equipment.


Must Buy Smart Travel Gadgets for 2020

Monoprice is known for quality generic-brand tech products like USB cables, wall mounts, adapters, and power banks at a cheaper price point. If you are looking to add a power bank to your next travel packing list, then look out for the Monoprice holiday deals. One of the specials they have is their own-brand Select Series power banks. They are currently offering 15% off for 10,000mAh, 20,000mAh, and 27,200mAh battery capacity power banks. When you are traveling with one, you are unlikely to run out of power, since you can fully charge your iPhone or Android phone three times before the battery runs out. ... A portable hard drive can be a traveler’s best friend, especially for gamers and photographers. Western Digital has everything you need for your next trip. Although cloud storage is great and useful, don’t forget that Internet connectivity is not always available everywhere in the world. You don’t want to stop taking pictures because your digital camera is running out of space. Also, it is always good to back up your pictures and other digital assets to both cloud storage and an external hard drive.


Security 101: What Is a Man-in-the-Middle Attack?

MitM attacks are attempts to "intercept" electronic communications – to snoop on transmissions in an attack on confidentiality or to alter them in an attack on integrity. "At its core, digital communication isn't all that much different from passing notes in a classroom – only there are a lot of notes," explains Brian Vecci, field CTO at Varonis. "Users communicate with servers and other users by passing these notes. A man-in-the-middle attack involves an adversary sitting between the sender and receiver and using the notes and communication to perform a cyberattack." ... "People think they are accessing a legitimate hotspot," he says, "but, in fact, they are connecting to a device that allows the hacker to log all their keystrokes and steal logins, passwords, and credit card numbers." Another popular MitM tactic is a fraudulent browser plugin installed by a user, thinking it will offer shopping discounts and coupons, Guruswamy says. "The plugin then proceeds to watch over the user's browsing traffic, stealing sensitive information like passwords [and] bank accounts, and surreptitiously sends them out-of-band," he says.


For IT pros, adding blockchain skills can pad your paycheck – by a lot

Understanding how blockchain integrates with artificial intelligence, machine learning, robotics, and IoT is seen largely as a plus for technologists at the moment. But it will be a requirement in the future as these other technologies mature and adoption rates increase. Salaries for blockchain developer or "engineer" positions are high, with median salaries in the U.S. hovering around $130,000 a year; that compares to general software developers, whose annual median pay is $105,000, according to Matt Sigelman, CEO of job data analytics firm Burning Glass Technologies. People with experience in specific blockchain technologies such as Solidity and Hyperledger Composer are in even higher demand – and that demand is increasing steadily, said Eric Piscini, a principal in the technology and banking practices at Deloitte Consulting LLP. Universities are some of the best places to learn blockchain skills, though there are online courses available from vendors as well. According to a new Gartner research note, 75% of IoT technology adopters in the U.S. have already adopted blockchain or are planning to adopt it by the end of 2020.


VISA Warns of Ongoing Cyber Attacks on Gas Pump PoS Systems 

Visa's Payment Fraud Disruption (PFD) team says that in the first incident it identified, unknown attackers were able to compromise their target using a phishing email that allowed them to infect one of the systems on the network with a Remote Access Trojan (RAT). This provided them with direct network access, making it possible to obtain credentials with enough permissions to move laterally throughout the network and compromise the company's POS system, as "there was also a lack of network segmentation between the Cardholder Data Environment (CDE) and corporate network." The last stage of the attack saw the actors deploying a RAM scraper that helped them collect and exfiltrate customer payment card data. During the second and third incidents, PFD states that the threat actors used malicious tools and TTPs attributable to the financially motivated FIN8 cybercrime group.


Implement CI/CD for Multibranch Pipeline in Jenkins

Jenkins is a continuous integration server that can fetch the latest code from the version control system (VCS), build it, test it, and notify developers. Jenkins can do many things apart from just being a Continuous Integration (CI) server. Originally known as Hudson, Jenkins is an open-source project written by Kohsuke Kawaguchi. As Jenkins is a Java-based project, before installing and running Jenkins on your machine, you first need to install Java 8. The Multibranch Pipeline allows you to automatically create a pipeline for each branch of your Source Code Management (SCM) repository with the help of a Jenkinsfile. Jenkins pipelines are defined using a text file called a Jenkinsfile. You can implement pipeline as code using a Jenkinsfile, which is written in a domain-specific language (DSL). In a Jenkinsfile, you write the steps needed to run a Jenkins pipeline. The Multibranch Pipeline project type enables you to use a different Jenkinsfile for different branches of the same project.
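As a sketch of what such a Jenkinsfile might look like, a minimal declarative pipeline committed to each branch could be the following (stage names and build commands here are purely illustrative):

```groovy
// Minimal declarative Jenkinsfile; committed to each branch, it is
// discovered automatically by a Multibranch Pipeline job.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew assemble'   // illustrative build command
            }
        }
        stage('Test') {
            steps {
                sh './gradlew test'
            }
        }
    }
    post {
        failure {
            echo "Notify developers that ${env.BRANCH_NAME} is broken"
        }
    }
}
```

Because the Jenkinsfile lives in the repository, each branch can carry its own variant, which is exactly what the Multibranch Pipeline project type exploits.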


What soft skills are most needed in IT? Toronto Women in IT winners share

Technology is getting to be the easy part, with so many off-the-shelf systems that can be sold to any business leader. The talent that the IT leader brings to the table is ensuring they use discipline to not jump to a solution until the problem is fully understood, articulated, and agreed upon. It is only when everyone collaborates and then agrees on exactly what problem they are trying to solve, that a technical solution can truly be sought. ... Humility and adaptability are also important soft skills for anyone working in IT. You must be willing to admit mistakes and learn from what went wrong in order to drive the best product forward. Being too focused on perfection prevents you from doing that. You also need to be adaptable to be able to respond to changes quickly and implement feedback at all stages in the development process. You can cultivate new soft skills over the course of your career if you commit to being a life-long learner and exploring new ways of getting things done. Move beyond what is known and familiar to you in order to stretch your thinking and add to your repertoire of soft skills.


Supreme Court to Have Final Say in Oracle v. Google Java API Battle

Google holds fast that APIs are not copyrightable and that the reuse of software interfaces is necessary to make systems interoperable. The issue is whether copyright law prohibits reimplementing – i.e., reusing – the software interfaces that are necessary to connect dozens of platforms to millions of applications on billions of devices. Without interfaces, your contact list cannot access your email program, which cannot send a message using the operating system, which cannot access your phone in the first place. Each is an island. Countless other examples abound. The information age depends on the reuse of interfaces. In 2018, an appeals court ruled in favor of Oracle and overturned previous rulings that favored Google. Dissatisfied with the lower court’s decision, Google petitioned the Supreme Court to hear its case. Previously, the Supreme Court had refused to hear Google’s petition but finally granted it on November 15, 2019. Given that Google filed the petition, the case is now dubbed "Google v. Oracle" instead of "Oracle v. Google".



Quote for the day:


"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing." -- Reed Markham


Daily Tech Digest - December 12, 2019

Blockchain/IoT integration accelerates, hits a 'sweet spot'

Blockchain acts as an automated communication layer between IoT sensors as well as a repository for the data they produce and upload. For example, IoT devices in shipping containers can track not only location but monitor temperature, vibration and whether a package has been tampered with. Earlier this year, FedEx touted a proof of concept involving "sensor-based logistics," using two types of IoT sensors about the size of a stick of gum. One acts as a geo-sensor, the other automatically transmits data to a blockchain ledger. Gartner is not alone in seeing a lot of activity related to IoT and blockchain. Last month, UK-based Juniper Research said in a report that the use of blockchain and IoT tracking technology will "revolutionize" the food industry, reducing food fraud by $131 billion in five years. Currently, food-tracking systems rely on paper-based transactions to manually track goods throughout a supply chain, an inefficient system that allows records to be lost or unreconciled, according to Juniper analyst Morgane Kimmich. Additionally, paper-based records cannot be shared by all supply chain users, hindering visibility into the supply chain.



The rise of a digital underclass may be tech's next big challenge


"Any organization processing data is required to let people access this data and rectify it if necessary," she said. "But most people don't exercise those rights. We can't have safeguards only for those who have the time, expertise and money to understand what they are entitled to by law." For her, the solution lies in accountability – "because accountability means that organizations have to consider the risk that data processing poses for people," she said. A successful step forward, she said, was the implementation of GDPR. With accountability featuring among its key principles, GDPR warns organizations that they are responsible for putting in place appropriate technical measures to meet the requirements of data protection. For example, corporations may have to implement privacy-by-design, which requires tech companies to develop software that makes privacy the default mode of operation. While accountability is at the heart of GDPR, however, there is still reason to be skeptical that the new European rules will be enough to change the whole game.


GraphQL: The Future of APIs

A pre-defined schema is offered to clients by the GraphQL server. This is basically the model of the data that can be retrieved from the server, where the schema acts as the connector between the server and the client while defining how the information is accessed. The basic elements of a GraphQL schema are written in SDL, the Schema Definition Language. It describes all of the object types that can be requested on that specific server, including the fields they possess. The schema defines which queries are permitted, what types of data can be fetched, and the relationships between those types. In fact, a GraphQL schema can be developed, and an interface created around it, in any programming language. To make sure that the server is able to respond to a query, the client can validate that query against the schema. You will be able to predict the outcome, because the shape of a GraphQL query closely resembles the shape of its result. This additionally scrubs out any unwelcome surprises, for example, incorrect structure or unavailable data.
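As an illustration (the type and field names here are invented for the example), a small SDL schema defining object types, their relationships, and the permitted queries might look like:

```graphql
# Illustrative SDL schema: object types, their fields, the relationships
# between types, and the queries clients are permitted to make.
type Author {
  id: ID!
  name: String!
  posts: [Post!]!        # relationship: an author has many posts
}

type Post {
  id: ID!
  title: String!
  author: Author!        # relationship back to the author
}

type Query {
  post(id: ID!): Post    # the entry points clients may request
  authors: [Author!]!
}
```

A client query such as `{ post(id: "1") { title author { name } } }` can be validated against this schema before execution, and the JSON response mirrors the shape of the query.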


Passive optical LAN: Its day is dawning

The increased speeds pose quite a predicament for companies. If the organization has Cat5 cabling, the speed is capped at 1Gbps. If Cat6 is deployed, speeds of 10Gbps can be reached, but only over runs of up to 55 meters. If the company wants to reach the full 100-meter length of copper, Cat6A or higher must be used. Optical cable has no such distance limitations because POL is completely passive and requires no electronics to boost the signal. Optical cabling can carry petabytes of bandwidth over long distances. Also, with optical, there’s no concern over what type of cable is being used or the quality degrading over time. ... The project features an optical network built on Huawei’s Campus OptiX solution that simplifies the network as the architecture moves from a three-tier hierarchical design to a two-tier one. That design uses less equipment and reduces power and cooling requirements. Also, the flat, 10Gbps network obviates the need for parallel overlay networks, making it easier to manage and giving it a degree of future-proofing, as the network can easily be upgraded. The all-optical network resulted in a 60% improvement in operational efficiency and a deployment time that was cut in half compared with a similar network using Ethernet.
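The copper limits quoted above can be expressed as a small lookup. A hedged sketch (figures taken from the article; treat them as rules of thumb rather than exact cabling specifications):

```python
# Worked version of the copper-vs-fiber tradeoff described above.
COPPER_LIMITS = {           # category: (max speed in Gbps, max run in meters)
    "Cat5": (1, 100),
    "Cat6": (10, 55),
    "Cat6A": (10, 100),
}

def copper_can_serve(category, speed_gbps, distance_m):
    """Check whether a copper category supports the required speed and run length."""
    max_speed, max_dist = COPPER_LIMITS[category]
    return speed_gbps <= max_speed and distance_m <= max_dist

# Cat6 reaches 10Gbps only on short runs; a full 100m run needs Cat6A.
print(copper_can_serve("Cat6", 10, 55))    # True
print(copper_can_serve("Cat6", 10, 100))   # False
print(copper_can_serve("Cat6A", 10, 100))  # True
```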


Cracking the Code to Mobile Productivity


Human-centered research underpins great design, and our teams dove deep to understand how people think, feel, and act when getting things done on the go. Research in mobile-first or mobile-only markets like India and China allowed us to study everyone from students to factory floor workers. We also leveraged pioneering work by Jaime Teevan and Microsoft researchers around “microproductivity.” Microproductivity exemplifies meeting users where they’re at: the modern world has increasingly fragmented work. Instead of solely pushing people to focus more, however, we explored whether those fragmented slices of time could be more productive with “microtasks.” A microtask is a bite-sized piece of a bigger task, like writing one paragraph instead of working on an entire Word document. Research showed microtasks increase feelings of productivity. This aligned with our observations of mobile behavior where, despite spending up to four hours a day on the phone, sessions average just 20 to 30 seconds.


5 Mobile App Design Trends You Should Know for 2020

Whenever a popular device or app moves to the dark side (i.e. Dark Mode), it’s always big news. Apple just recently enabled the feature on its iPhones in October of 2019. Instagram added it around the same time, too. People went nuts for it. ... One could argue that mobile apps are much easier to use than websites that are overloaded with content. However, the assumption that mobile users know what to do when they first enter an app or even that they understand the real value of it can be problematic. Because if you feel like the app is a no-brainer, then you’re going to design it that way, which may prevent some users from ever really knowing how much they can do with it. Since you don’t have the luxury of sharing as much information with users as your PWA counterparts do, I think swipeable intros are the solution. We’re starting to see a number of apps utilize these before ever inviting users to sign up or log in and I think more apps will adopt this friendlier approach to onboarding users in the years to come.


CorePlus: A Microsoft Bot Framework v4 Template

Microsoft has developed a number of samples to help you get started with the Bot Builder SDK v4, as well as a set of templates powered by the scaffolding tool Yeoman. This article introduces CorePlus, a Microsoft Bot Framework v4 template that I have created, based on a previous version of the Core Bot template (Node.js) supported by the generator-botbuilder Yeoman generator. It's an extended and advanced version, intended as a quick-start for setting up a Transactional, Question and Answer, and Conversational chatbot, all in one, using core AI capabilities. The template proposes a modified project structure and architecture, and provides solutions for the technical and design challenges that arise. Although some basic knowledge on Microsoft Bot Framework: Node.js SDK, LUIS, QnA Maker, Bot Framework Emulator, etc., is recommended, it's not required. The code is fully commented and the article provides lots of external links to samples, documents and other articles that can help you expand your vision and knowledge on Microsoft's framework as well as on chatbots design and development in general. Visual Studio Code is suggested as the code editor of choice. You may use any other one of your preferences, though, such as WebStorm.


Microsoft details the most clever phishing techniques it saw in 2019

The first is a multi-layered malware operation through which a criminal gang poisoned Google search results. The scheme went as follows: crooks funneled web traffic hijacked from legitimate sites to websites they controlled; the domains became the top Google search result for very specific terms; phishers sent emails to victims linking to the Google search result for that specific term; if the victim clicked the Google link, and then the top result, they'd land on an attacker-controlled website; and this website would then redirect the user to a phishing page. One might think that altering Google search results takes a gigantic amount of effort, but this was actually pretty easy, as attackers didn't target high-traffic keywords, but instead focused on gibberish like "hOJoXatrCPy." ... A third phishing trick that Microsoft wanted to highlight as a clever phishing attack this year was one that made use of a man-in-the-middle (MitM) server. Microsoft explains: "One particular phishing campaign in 2019 took impersonation to the next level."


The Future of APIs and API Monetization

First, the future API stack is secure. There is a lot of information and prioritization around cybersecurity and endpoint security, but sometimes API endpoints are overlooked. While OAuth is not new, the use of OAuth is essential to control fine-grained access to APIs. Second, APIs must enable personalization and experimentation. Companies need the ability to control and test API capabilities so that we can personalize search results as easily as we personalize user experience. We continuously experiment with search rankings and results to better serve our sellers and buyers. eBay is a search-driven marketplace. APIs must be designed so they can support personalization and experimentation. Third, the future API stack must be device-agnostic. APIs should understand if they are talking to a desktop or mobile device, or communicating across limited bandwidth, and adjust the fidelity of their responses accordingly. If your client pulls data from a massive data center over a LAN connection, it’s probably fine for APIs to allow access to several GBs of data.
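As a minimal sketch of OAuth-style fine-grained access control (endpoint paths and scope names are invented for the example; a real deployment would also verify the token's signature and expiry), an API could gate each endpoint on a required scope:

```python
# Sketch: gate each API endpoint on an OAuth-style scope carried by the
# caller's access token. Illustrative only, not a specific framework's API.
REQUIRED_SCOPES = {
    "GET /orders": "orders:read",
    "POST /orders": "orders:write",
}

def authorize(token_scopes, method, path):
    """Allow the call only if the token grants the endpoint's required scope."""
    required = REQUIRED_SCOPES.get(f"{method} {path}")
    return required is not None and required in token_scopes

allowed = authorize({"orders:read"}, "GET", "/orders")   # True
denied = authorize({"orders:read"}, "POST", "/orders")   # False: missing orders:write
```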


How to develop IT leaders into future CIOs

Every IT leader reaches an inflection point where they have to become very good at team leadership if they want to take on more responsibility. The ability to lead and not do is more important to the CIO role than technical depth. If you don’t start to develop those skills early in your career, you will fail as a CIO. This is not about delegation, which is just handing a task off to someone else. Leadership is about empowerment and trust. ... There are a number of ways, including getting your MBA or finding a mentor who is a leader in a business function that is not IT. Early in my career, I held finance, HR and customer service roles, which had a technology flavor to them, but were not in IT. Future CIOs should get that cross functional experience early in their careers because it is harder to move in and out of IT as you advance. Understand also that sometimes you have to take a step backward to move forward. You might have to drop down a level for roles in finance or supply chain, but that move will allow you to advance later in your career.



Quote for the day:


"Do all the good you can. By all the means you can. In all the ways you can... At all the times you can." -- John Wesley


Daily Tech Digest - December 11, 2019

Segment Routing uses a routing technique known as source packet routing. In source packet routing, the source or ingress router specifies the path a packet will take through the network, rather than the packet being routed hop by hop through the network based upon its destination address. However, source packet routing is not a new concept. In fact, source packet routing has existed for over 20 years. As an example, MPLS is one of the most widely adopted forms of source packet routing, which uses labels to direct packets through a network. In an MPLS network, when a packet arrives at an ingress node, an MPLS label is prepended to the packet, which determines the packet’s path through the network. While SR and MPLS are similar, in that both are source-based routing technologies, there are a few differences between them. One of these key differences lies in a primary objective of SR, which is documented in RFC7855: “The SPRING [SR] architecture MUST allow putting the policy state in the packet header and not in the intermediate nodes along the path.”
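The source-routing idea can be sketched with a toy model (all names illustrative; this is not a real SR or MPLS implementation): the ingress router prepends the full path as a label stack, and transit nodes simply pop a label and forward, keeping no per-path state of their own.

```python
# Toy model of source packet routing: the ingress router computes the
# whole path and prepends it to the packet as a label stack, so transit
# nodes forward by popping labels instead of consulting routing tables.

def ingress_encapsulate(payload, path):
    """Prepend the computed path (a list of node labels) to the packet."""
    return {"labels": list(path), "payload": payload}

def transit_forward(packet):
    """Pop the top label; a real node would send the packet toward it."""
    return packet["labels"].pop(0)

# The ingress decides the path once; intermediate nodes hold no policy state.
packet = ingress_encapsulate("data", ["R2", "R5", "R9"])
hops = []
while packet["labels"]:
    hops.append(transit_forward(packet))

print(hops)  # ['R2', 'R5', 'R9']
```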


Never Mind Consumers, This Was a Year of Steady Infrastructural Progress

Much of the traction that does not come from exchanges or trading has been generated decidedly in infrastructure layers in 2019. Node infrastructure provider Blockdaemon, having recognized the market’s propensity to proliferate new decentralized networks, is generating revenue across an impressive 22 such networks today and continues to grow month over month. The Graph is serving over 400 public smart contract subgraphs, with request volume clocking millions of daily data queries. Meanwhile, 3Box’s self-sovereign identity and data solution is rapidly integrating across the Ethereum ecosystem, within wallets like MetaMask and many of the new user onboarding solutions, like Portis and Authereum, and even governance experiment MolochDAO.  Blockchain’s road to mainstream adoption depends on institutional backing of businesses that support blockchain infrastructure and enable traditional investors both to capitalize and participate in digital asset networks. As such, the compliance levels of exchanges have been increasing to support institutional clients.


5G and Me: And the Golden Hour


The connected ambulance 5G network slicing concepts were demonstrated at Mobile World Congress (MWC) in Barcelona, Spain in February 2019 by the Dell EMC Cork Centre of Excellence (CoE). Network slicing is a type of virtual networking architecture, similar to software-defined networking (SDN) and network functions virtualization (NFV), whose goal is software-based network automation. This technology allows the creation of multiple virtual networks on a shared physical infrastructure. ... The goal for the future of connected care in emergencies would be to identify the conditions for stroke, congestive heart failure (CHF) and myocardial infarction (MI); measure and score at the site; predictively collect Electronic Medical Record (EMR) metadata in conjunction with specific image studies via DICOM (Digital Imaging and Communications in Medicine); and combine this with the metadata from disease-specific epidemiological studies for that geographic region — all within the “golden hour”. This combinatorial analysis at the “point of care” is the future and can prevent disability and death at scale — especially since not all ambulance visits are emergencies.


Google proposes hybrid approach to AI transfer learning for medical imaging


In transfer learning, a machine learning algorithm is trained in two stages. First, there’s pretraining, where the algorithm is generally trained on a benchmark data set representing a diversity of categories. Next comes fine-tuning, where it is further trained on the specific target task of interest. The pretraining step helps the model to learn general features that can be reused on the target task, boosting its accuracy. According to the team, transfer learning isn’t quite the end-all, be-all of AI training techniques. In a performance evaluation that compared a range of model architectures trained to diagnose diabetic retinopathy and five different diseases from chest x-rays, a portion of which were pretrained on an open source image data set, they report that transfer learning didn’t “significantly” affect performance on medical imaging tasks. Moreover, a family of simple, lightweight models performed at a level comparable to the standard architectures. In a second test, the team studied the degree to which transfer learning affected the kinds of features and representations learned by the AI models. They analyzed and compared the hidden representations in the different models trained to solve medical imaging tasks, computing similarity scores for some of the representations between models trained from scratch and those pretrained on ImageNet.
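The two-stage recipe can be illustrated with a deliberately tiny example (a one-parameter linear model, nothing like a real imaging network): pretrain on one task, then fine-tune the resulting weight on a related target task, and compare against training from scratch with the same small budget.

```python
# Toy illustration of pretraining followed by fine-tuning. Purely
# illustrative; real transfer learning reuses deep feature extractors.

def train(w, data, steps, lr=0.1):
    """Fit y = w*x by gradient descent on mean squared error."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

pretrain_data = [(1.0, 2.0), (2.0, 4.0)]   # "benchmark" task: y = 2x
target_data = [(1.0, 2.2), (2.0, 4.4)]     # related target task: y = 2.2x

w_pretrained = train(0.0, pretrain_data, steps=50)        # stage 1: pretrain
w_finetuned = train(w_pretrained, target_data, steps=10)  # stage 2: fine-tune
w_scratch = train(0.0, target_data, steps=10)             # baseline: from scratch

# Starting from pretrained weights, a few steps suffice on the target task,
# and the fine-tuned model lands closer to the target than the scratch run.
print(round(w_finetuned, 2))
```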


Robotic exoskeletons: Coming to a factory, warehouse or army near you, soon 

Ford is thought to be one of the bigger users of exoskeletons to date, but other car makers are deploying exoskeletons, although several have opted for build-your-own rather than off-the-shelf systems. Hyundai debuted its own exoskeleton vest, the VEX, earlier this year. The back-worn exoskeleton "is targeted at production-line workers whose job is primarily overhead, such as those bolting the underside of vehicles, fitting brake tubes, and attaching exhausts", Hyundai said, and is expected to be rolled out at Hyundai plants. GM meanwhile has teamed up with NASA to create a robotic glove that can help increase the amount of force a wearer can exert when gripping an object or lifting up a piece of equipment for long periods, cutting the likelihood of strain or injury. Closer to home, the construction industry is also shaping up to be another significant user of exoskeletons. Builder Willmott Dixon, for example, started piloting the ExoVest at a Cardiff site last year. One factor driving the rollout of exoskeletons in both the construction and auto industries is the possibility of cutting worker injuries as well as enabling skilled staff to work for longer.


What does it mean to think like a data scientist?

Art is a very important part of that, because what we find in a lot of our data science engagements is there's a lot of exploration of what might be possible, the realm of what's possible. So, we tried to empower the power of ‘might,’ right? That might be a good idea, that might be something, because if you don't have enough ‘might’ ideas, you never have any breakthrough ideas. And so, this art of thinking like a data scientist kind of says, 'Yeah, there's a data science process.' But think about it as guardrails, not railroad tracks. And we're going to bounce in between these things. And oh, by the way, it's really important that your business stakeholders, your subject matter experts, also understand how to think like a data scientist in this kind of non-linear, creative fashion, so you come up with better ideas. Because we're all in search of variables and metrics that might be better predictors of performance, right? And the data science team will have some ideas from their past experience.



Teams are struggling to implement these new tools, and 71 percent said that they are adding security technologies faster than they are adding the capacity to proactively use them. This added complexity is also compromising their threat response, with 69 percent of security decision makers surveyed saying that their security team currently spends more time managing security tools than effectively defending against threats. To make matters worse, a majority of enterprises are less secure today as a result of security tool sprawl, and over half (53%) say their security team has reached a tipping point where the excessive number of security tools in place adversely impacts their organization's security posture. ReliaQuest CEO Brian Murphy provided further insight on the report's findings, saying: "Cyber threats continue to rise and require companies to mitigate risk. While it's tempting to think another piece of technology will solve the problem, it's far from true -- in fact, this survey proves more tools can worsen enterprise security by adding complexity without improving outcomes.


There’s No Opting Out of the California Consumer Privacy Act

For starters, GDPR applies to all European data but is a minimum requirement. Individual countries in the EU have their own laws that are often more restrictive. CCPA, by contrast, applies to California data only and excludes any data that is already covered by a federal law, such as HIPAA or GLBA. While GDPR protects personal information (PI) that could potentially identify a specific individual -- including name, address, telephone number and Social Security number (SSN) -- CCPA goes further to include product purchase history, social media activity, IP addresses, and household information. Under CCPA, companies are required to include a single, clear and conspicuous "Do Not Sell My Personal Information" link on homepages. GDPR, by contrast, offers various opt-out rights, each of which requires individual action. Under GDPR, administrative fines can reach 20 million euros or 4% of annual global revenue, whichever is greater. For CCPA, the California Attorney General can fine companies $2,500 per violation or up to $7,500 for each intentional violation.


Google Chrome can now warn you in real time if you're getting phished


Between July and September, Google sent more than 12,000 warnings about state-sponsored phishing attacks targeting its users in the US. According to Verizon's annual cybersecurity report, phishing is the leading cause of data breaches, and Google said in August that it blocked about 100 million phishing emails every day. But phishing links don't just come in emails: They can also appear in malicious advertisements, or through direct messages on chat apps. For those of you using a Chrome browser, Google is launching an extra level of protection against phishing through real-time checks on site visits. You can turn it on by enabling "Make searches and browsing better" in your Chrome settings. This protection was already available for Chrome's Safe Browsing mode, which checked the URL of every website visited and made sure it was not on Google's block list. The block list is saved on devices and only synced every 30 minutes, allowing savvy hackers to bypass the filter by creating a new phishing URL before the list updates.


Big Changes Are Coming to Security Analytics & Operations

Nearly two-thirds (63%) of survey respondents claim that security analytics and operations are more difficult today than they were two years ago. This increasing difficulty is being driven by external changes and internal challenges. From an external perspective, 41% of security pros say that security analytics and operations are more difficult now due to rapid evolution in the threat landscape, and 30% claim that things are more difficult because of the growing attack surface. Security teams have no choice but to keep up with these dynamic external trends. On the internal side, 35% of respondents report that security analytics and operations are more difficult today because they collect more security data than they did two years ago, 34% say that the volume of security alerts has increased over the past two years, and 29% complain that it is difficult to keep up with the volume and complexity of security operations tasks. Security analytics/operations progress depends upon addressing all these external and internal issues.



Quote for the day:


"Growth is painful. Change is painful. But nothing is as painful as staying stuck somewhere you don't belong." -- Mandy Hale


Daily Tech Digest - December 10, 2019

Internet of the Senses is on the horizon, thanks to AR and VR


While smell cannot yet be conveyed digitally, that will change, with smell becoming an online experience by 2030, the report found. More than half (56%) of respondents said technology would evolve to the point that they would be able to smell scents in films. The same capability will be applied to sales as retailers market products commercially with smell, the report found, meaning perfume commercials could emit a scent. Along the same lines, humans will also be able to experience taste through devices, according to the report. Nearly 45% of respondents believe that in the next 10 years, a device could exist that digitally enhances the food someone eats. This advancement could have significant impacts on health and diet, allowing people to eat healthier foods that taste more savory than they are. The capability presents another opportunity for retail marketing, as consumers could taste food products. People viewing cooking programs could even taste the food that is on screen, the report found. More than half (63%) of respondents said smartphone users would be able to feel the shape and texture of digital icons.



4 Authentication Use Cases: Which Protocol To Use?

Where strong security is a requirement, SAML is generally a good choice. All aspects of the exchange between the RP and IdP can be digitally signed and verified by both parties. This provides high assurance that each party is communicating with the correct counterpart and not an imposter. In addition, the assertion from the IdP may be encrypted, so that HTTPS is not the only protection against attackers accessing users’ data. To add further security, signing and encryption keys may be rotated regularly. To take OIDC to the same level of security requires extra cryptographic keys, as in Open Banking extensions, and this can be relatively onerous to set up and maintain. However, OIDC benefits from the use of JSON and the simpler use by mobile apps, compared to SAML. ... Here, the preference will be for OIDC, as it is likely that a variety of devices, some not browser-based, might be involved, which normally rules out SAML. The built-in consent associated with OIDC enhances the privacy aspects of the data sharing. In addition, the use of signing and encryption may be used to strengthen the security aspects to a degree that adequately meets the requirements of handling such data.
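The sign-then-verify exchange described above can be shown in miniature. The sketch below is not SAML's XML signatures or OIDC's JWS (both involve considerably more machinery, typically asymmetric keys); it only illustrates, with a hypothetical shared HMAC key, how a relying party can detect a tampered assertion:

```python
import base64
import hashlib
import hmac
import json

def sign_assertion(payload: dict, key: bytes) -> str:
    """Serialize a payload and append an HMAC tag, mimicking the
    sign-then-verify flow that SAML and OIDC perform with real signatures."""
    body = base64.urlsafe_b64encode(json.dumps(payload, sort_keys=True).encode())
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + tag

def verify_assertion(token: str, key: bytes) -> dict:
    """Reject the assertion unless the tag matches, i.e. unless it
    really came from the party holding the key."""
    body, tag = token.rsplit(".", 1)
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("signature check failed: possible impostor")
    return json.loads(base64.urlsafe_b64decode(body))

key = b"shared-idp-rp-secret"   # hypothetical key material
token = sign_assertion({"sub": "alice", "iss": "idp.example"}, key)
assert verify_assertion(token, key)["sub"] == "alice"
```

Real deployments use public-key signatures, so the RP never holds the IdP's signing key, and rotate keys regularly, as the article notes.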



Predictions for AI and ML in 2020

The digital skills gap present within workforces has meant that employees are unsure about how to unleash AI’s full potential. But according to SnapLogic CTO Craig Stewart, this problem could take a step towards being solved next year. “Transparency remains a hot topic and will continue into 2020 as companies aim to ensure transparency, visibility, and trust of AI and AI-assisted decisions,” said Stewart. “We’ll see further development and expansion of the ‘explainable AI movement,’ and efforts like it.” ... Even though there are aforementioned worries regarding AI and ML possibly replacing human workers, some experts in digital innovation believe that the gradual inclusion of the technology will end up being a much more collaborative process. “Despite fears that it will replace human employees, in 2020 AI and machine learning will increasingly be used to aid and augment them,” said Felix Gerdes, Insight UK‘s director of digital innovation services. “For instance, customer service workers need to be certain they are giving customers the right advice.


The Future of Spring Cloud's Hystrix Project


Spring Cloud Hystrix Project was built as a wrapper on top of the Netflix Hystrix library. Since then, it has been adopted by many enterprises and developers to implement the Circuit Breaker pattern. In November 2018, Netflix announced that it was putting the project into maintenance mode, which prompted Spring Cloud to announce the same. Since then, no further enhancements have been made to the Netflix library. At SpringOne 2019, Spring announced that Hystrix Dashboard will be removed from Spring Cloud 3.1, which makes it officially dead. As the Circuit Breaker pattern has been advertised so heavily, many developers have either used it or want to use it, and now need a replacement. Resilience4j has been introduced to fill this gap and provide a migration path for Hystrix users. Resilience4j was inspired by Netflix Hystrix but is designed for Java 8 and functional programming. It is lightweight compared to Hystrix, as it has the Vavr library as its only dependency. Netflix Hystrix, by contrast, has a dependency on Archaius, which has several other external library dependencies such as Guava and Apache Commons.


Dubai’s Kentech kicks off digital transformation drive


“Kentech has suffered with poor IT adoption partnerships in the past, so we needed something that was world-class. We wanted something that our business would love and use.” Kentech launched its tendering process early this year and by July it had selected Oracle as its cloud partner. “During the tendering process, we found that our business was closely aligned to construction,” said O’Gara. “Some of our requirements were quite complex, especially when dealing with reimbursable and fixed-price work – they can chop and change on a daily basis. We found that Oracle could meet those complex requirements. “For us, it was the ERP and budgeting models that were the differentiator. We’ve now started implementation and we’re going to go live at the end of this year with the first phase. We’re a project-based business, so we need to be able to scale up and down very quickly. The cloud model suits us perfectly as a business because we can be flexible, rather than going all out and saying ‘I need 10 more servers’.”


Hybrid multi-cloud a must for banks

Banks operating under a hybrid multi-cloud model can manage finances predictably and optimally as cost models shift from fixed to variable. Storing data on site in traditional facilities is expensive and locks banks into long-term contracts for a set amount of data storage. Banks over-resource infrastructure and storage, paying for capacity they never use. Hybrid cloud models allow banks to scale as needed, purchasing only what is immediately utilised under the subscription-based model offered by most CSPs. Procurement and implementation done the traditional way is slow, so capacity management involves a degree of guessing, resulting in over-capitalised systems offering little ROI. As the cloud allows for scaling on a pay-as-you-go model, spend is greatly optimised. For example, UBS’s risk management platform is powered by Microsoft Azure, saving the financial services company 40 percent on infrastructure costs, doubling calculation speed, and gaining near-infinite scale.


The 10 Best Examples Of How Companies Use Artificial Intelligence In Practice

Today, Waymo wants to bring self-driving technology to the world, not only to move people around but to reduce the number of crashes. Its autonomous vehicles are currently shuttling riders around California in self-driving taxis. Right now, the company can’t charge a fare, and a human driver still sits behind the wheel during the pilot program. Google signaled its commitment to deep learning when it acquired DeepMind. Not only did DeepMind’s system learn how to play 49 different Atari games, its AlphaGo program was the first to beat a professional player at the game of Go. Another AI innovation from Google is Google Duplex. Using natural language processing, an AI voice interface can make phone calls and schedule appointments on your behalf. ... Another innovative way Amazon uses artificial intelligence is to ship things to you before you even think about buying them. The company collects a lot of data about each person’s buying habits and has enough confidence in how that data helps it recommend items to customers that it now predicts what they need, even before they need it, using predictive analytics.


Verizon kills email accounts of archivists trying to save Yahoo Groups history

According to the Archive Team: "As of 2019-10-16 the directory lists 5,619,351 groups. 2,752,112 of them have been discovered. 1,483,853 (54%) have public message archives with an estimated number of 2.1 billion messages (1,389 messages per group on average so far). 1.8 billion messages (86%) have been archived as of 2018-10-28." Verizon has issued a statement to the group supporting the Archive Team, telling concerned archivists that "the resources needed to maintain historical content from Yahoo Groups pages is cost-prohibitive, as they're largely unused". The telecoms giant also said the people booted from the service had violated its terms of service and suggested the number of users affected was small. "Regarding the 128 people who joined Yahoo Groups with the goal to archive them – are those people from Archiveteam.org? If so, their actions violated our Terms of Service. Because of this violation, we are unable to reauthorize them," Verizon said.



Open source refers to an online project that is publicly accessible for anyone to modify and share, as long as they provide attribution to the original developer, reported TechRepublic contributor Jack Wallen in What is open source? Since its release over 20 years ago, open source has changed the internet. Without open source, the online experience would be "a far different place; much more limited, expensive, less robust, less feature-driven and less scalable. Big name companies would be much less powerful and successful as well in the absence of open source software," wrote Scott Matteson in How to decide if open source or proprietary software solutions are best for your business. ... Major tech companies have set their sights on open source development, with Microsoft's acquisition of GitHub and IBM's acquisition of Red Hat. However, developers are concerned about the impact these tech giants could have on the open source community, the report found. Nearly 41% of respondents said they were concerned about the level of involvement from major tech players in open source. The main concerns they cited involved possible self-serving intentions from big companies, the use of restrictive licenses that give large organizations unfair competitive advantage, and overall trust of large corporations, the report found.


Is cloud migration iterative or waterfall?

Cloud migration projects have two dimensions. First, they are short-term sprints in which a project team migrates a handful of application workloads and data stores to a single cloud or multicloud. These teams act independently, with little architectural oversight or governance, and the sprints last two to six months. Second is the longer-term architecture, including security, governance, management, and monitoring. This may be directed by a cloud business office, the office of the CTO, or a master cloud architect. This set of processes goes on continuously. Here is the problem. The former tends to overshadow the latter, meaning that we’re moving to the cloud using ad hoc and decoupled sprints, all with little regard for common security and governance layers or any sort of management and monitoring. The result is something we’ve talked about here before: complexity. Although we built something that seems to work, applications migrated from one platform to another are deployed with different technology stacks.



Quote for the day:


"Without growth, organizations struggle to add talented people. Without talented people, organizations struggle to grow." -- Ray Attiyah


Daily Tech Digest - December 09, 2019

The PC was supposed to die a decade ago. Instead, this happened


Not all that long ago, tech pundits were convinced that by 2020 the personal computer as we know it would be extinct. You can even mark the date and time of the PC's death: January 27, 2010, at 10:00 A.M. Pacific Time, when Steve Jobs stepped onto a San Francisco stage to unveil the iPad. The precise moment was documented by noted Big Thinker Nicholas Carr in The New Republic with this memorable headline: "The PC Officially Died Today." ... And so, here we are, a full decade after the PC's untimely death, and the industry is still selling more than a quarter-billion-with-a-B personal computers every year. Which is pretty good for an industry that has been living on borrowed time for ten years. Maybe the reason the PC industry hasn't suffered a mass extinction event yet is because they adapted, and because those competing platforms weren't able to take over every PC-centric task. So what's different as we approach 2020? To get a proper before-and-after picture, I climbed into the Wayback Machine and traveled back to 2010.


Netflix open sources data science management tool

Netflix has open sourced Metaflow, an internally developed tool for building and managing Python-based data science projects. Metaflow addresses the entire data science workflow, from prototype to model deployment, and provides built-in integrations to AWS cloud services.  Machine learning and data science projects need mechanisms to track the development of the code, data, and models. Doing all of that manually is error-prone, and tools for source code management, like Git, aren’t well-suited to all of these tasks. Metaflow provides Python APIs to the entire stack of technologies in a data science workflow, from access to the data through compute resources, versioning, model training, scheduling, and model deployment. ... Metaflow does not favor any particular machine learning framework or data science library. Metaflow projects are just Python code, with each step of a project’s data flow represented by common Python programming idioms. Each time a Metaflow project runs, the data it generates is given a unique ID. This lets you access every run—and every step of that run—by referring to its ID or user-assigned metadata.
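The run-versioning idea is easy to see in miniature. The sketch below is not Metaflow's actual API (Metaflow flows subclass `FlowSpec` and persist artifacts to a datastore); it is a stdlib-only illustration of tagging every run with a unique ID so any step's output can be retrieved later:

```python
import itertools
import json

_run_counter = itertools.count(1)
_artifact_store = {}   # (run_id, step_name) -> artifact; in-memory stand-in

def run_flow(steps, metadata=None):
    """Execute steps in order; record each step's output under a unique
    run ID so any past run can be inspected later, Metaflow-style."""
    run_id = next(_run_counter)
    data = {"metadata": metadata or {}}
    for name, fn in steps:
        data[name] = fn(data)
        # Deep-copy via JSON so later mutation can't corrupt the record.
        _artifact_store[(run_id, name)] = json.loads(json.dumps(data[name]))
    return run_id

def get_artifact(run_id, step):
    """Look up what a given step produced in a given run."""
    return _artifact_store[(run_id, step)]

steps = [("start", lambda d: [1, 2, 3]),
         ("train", lambda d: sum(d["start"]))]
rid = run_flow(steps, metadata={"owner": "demo"})
assert get_artifact(rid, "train") == 6
```

In Metaflow proper, the same lookup is done through its Client API by run ID or user-assigned tags, against artifacts persisted locally or in S3.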



AppSec in the Age of DevSecOps

Application security as a practice is dynamic. No two applications are the same, even if they belong in the same market domain, presumably operating on identical business use-cases. Some (of the many) factors that cause this variance include technology stack of choice, programming style of developers, a culture of the product engineering team, priority of the business, platforms used, etc. This consequentially results in a wide spectrum of unique customer needs. Take penetration testing as an example. This is a practice area that is presumably well-entrenched, both as a need and as an offering in the application security market. However, in today's age, even a singular requirement such as this could make or break an initial conversation. While, for one prospect, the need could be to conduct the test from a compliance (only) perspective, another's need could stem from a proactive software security initiative. There are many others who have internal assessment teams and often look outside for a third-party view.


Data centers in 2020: Automation, cheaper memory

Storage-class memory is memory that goes in a DRAM slot and can function like DRAM but can also function like an SSD. It has near-DRAM speed but has storage capabilities, too, effectively turning it into a cache for SSD. Intel and Micron were working on SCM together but parted company. Intel released its SCM product, Optane, in May, and Micron came to market in October with QuantX. South Korean memory giant SK Hynix is also working on an SCM product, one that differs from the 3D XPoint technology Micron and Intel use. ... Remember when everyone was looking forward to shutting down their data centers entirely and moving to the cloud? So much for that idea. IDC’s latest CloudPulse survey suggests that 85% of enterprises plan to move workloads from public to private environments over the next year. And a recent survey by Nutanix found 73% of respondents reported that they are moving some applications off the public cloud and back on-prem. Security was cited as the primary reason. And since it’s doubtful security will ever be good enough for some companies and some data, it seems the mad rush to the cloud will likely slow a little as people become more picky about what they put in the cloud and what they keep behind their firewall.


Batch Goes Out the Window: The Dawn of Data Orchestration

Add to the mix the whole world of streaming data. By open-sourcing Kafka to the Apache Foundation, LinkedIn let loose the gushing waters of data streams. These high-speed freeways of data largely circumvent traditional data management tooling, which can't stand the pressure. Doing the math, we see a vastly different scenario for today's data, as compared to only a few years ago. Companies have gone from relying on five to 10 source systems for an enterprise data warehouse to now embracing dozens or more systems across various analytical platforms. Meanwhile, the appetite for insights is greater than ever, as is the desire to dynamically link analytical systems with operational ones. The end result is a tremendous amount of energy focused on the need for ... meaningful data orchestration. For performance, governance, quality and a vast array of business needs, data orchestration is taking shape right now out of sheer necessity. The old highways for data have become too clogged and cannot support the necessary traffic. A whole new system is required. To wit, there are several software companies focused intently on solving this big problem. Here are just a few of the innovative firms that are shaping the data orchestration space.


With the majority of companies looking for expertise in the three- to 10-year range, Robinson said they must change their traditional recruitment/training tactics. "The technical skill supply is far less than the demand, so companies are not going to simply be able to meet their exact needs on the open market,'' he said. "There must be a willingness to look outside the normal sources for technical skill, and there must be a willingness to invest in training to get workers up to speed once they are in house." The trend is toward specialization, "but this certainly introduces a financial challenge,'' he said, since most companies cannot afford to build large teams of specialists. So depending on the company's strategy, they may lean more on generalists or they may explore different mixes of internal/external talent. "Even for tech workers who specialize, knowledge across the different areas of IT is necessary for efficient operation of complex systems,'' Robinson said. The primary approach most tech workers are taking for career growth is to deepen their skills in their area of expertise, he said. But they must have knowledge in other areas beyond this, Robinson stressed, especially as tech workers move from a junior level to an architect level.


Seagate doubles HDD performance with multi-actuator technology

The technology is pretty straightforward. Say you have six platters in a disk drive. The actuator controls the drive heads and moves them all in unison over all six platters. Seagate's multi-actuator makes two independent actuators out of one, so in a six-platter drive, the two actuators cover three platters each. ... While SSDs have buried HDDs in terms of performance, they simply can’t match HDDs for capacity. Of course there are multi-terabyte SSDs available, but they cost many times more than the 12TB/14TB HDDs that Seagate and its chief competitor Western Digital offer. And data centers are not about to go all-SSD yet, if ever. So there's definitely a place for faster HDDs in the data center. Microsoft has been testing Exos 2X14 enterprise hard drives with MACH.2 technology to see if it can maintain the IOPS required for some of Microsoft’s cloud services, including Azure and the Microsoft Exchange Online email service, while increasing available storage capacity per data-center slot.
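A back-of-envelope model shows why two independent actuators roughly double random-access throughput: each actuator only has to service the requests landing on its own platters, so seeks proceed in parallel. The numbers below are entirely hypothetical:

```python
import math

def completion_time(n_requests, ms_per_seek, n_actuators=1):
    """Total time (ms) to service n random requests when they are split
    evenly across independent actuators seeking in parallel."""
    per_actuator = math.ceil(n_requests / n_actuators)
    return per_actuator * ms_per_seek

single = completion_time(1000, 8.0, n_actuators=1)   # one actuator does it all
dual   = completion_time(1000, 8.0, n_actuators=2)   # each handles half
assert dual == single / 2   # twice the IOPS for the same workload
```

In practice the gain depends on requests being spread evenly across the platter groups; a workload concentrated on one actuator's platters sees no speedup.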


Synchronizing Cache with the Database using NCache

Caching improves the performance of web applications by reducing resource consumption. It achieves this by storing page output or relatively stale application data across HTTP requests. Caching makes your site run faster and provides a better end-user experience. You can take advantage of caching to reduce the consumption of server resources by cutting down on server and database hits. The cache object in ASP.NET can be used to store application data and reduce expensive hits on the database server. As a result, your web page is rendered faster. When you are caching application data, you would typically have a copy of the data in the cache that also resides in the database. This duplication of data (both in the database and in the cache) introduces data consistency issues: the data in the cache must be kept in sync with the data in the database. You should know how data in the cache can be invalidated and removed, in real time, when any change occurs in the database.
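The consistency problem described above is commonly handled with a read-through cache plus write invalidation. The sketch below, in Python with a dict standing in for the database, is illustrative only and is not NCache's API (NCache reacts to database change notifications, e.g. SQL dependencies, rather than requiring every write to pass through the cache):

```python
class ReadThroughCache:
    """Keep cached copies consistent with the backing store by
    invalidating an entry whenever the store's copy changes."""

    def __init__(self, db):
        self.db = db        # dict standing in for the real database
        self.cache = {}

    def get(self, key):
        if key not in self.cache:        # miss: load from the database
            self.cache[key] = self.db[key]
        return self.cache[key]           # hit: no database round-trip

    def update(self, key, value):
        self.db[key] = value             # write to the database...
        self.cache.pop(key, None)        # ...and invalidate the stale copy

db = {"price:42": 100}
c = ReadThroughCache(db)
assert c.get("price:42") == 100   # cached on first read
c.update("price:42", 120)         # write invalidates the entry
assert c.get("price:42") == 120   # next read sees fresh data
```

The hard case, which NCache's database dependencies address, is a write that bypasses the cache entirely; then the cache needs a change notification from the database to know to evict.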


Coders are the new superheroes of natural disasters

It will be a launching point for open-source programs like Call for Code and the Clinton Global Initiative University and will support the entire process of creating solutions for those most in need. Call for Code is seeking solutions for this year's challenge, and coders can go to the 2019 Challenge Experience to join. Call for Code unites developers and data scientists around the world to create sustainable, scalable, and life-saving open source technologies via the power of cloud, AI, blockchain, and IoT technology. Clinton Global Initiative University partners with IBM and commits to inspiring university students to harness modern, emerging, and open-source technologies to develop solutions for disaster response and resilience challenges. "Technology skills are increasingly valuable," Krook said, "even for students who don't intend to become professional software developers. For computer science students, putting the end user first, and empathizing with how they hope to use technology to solve their problems—particularly those that represent a danger to their health and well-being—will help them understand how to build high-quality and well-designed software."



There is a widespread belief that rules, structure and processes inhibit freedom and that organizations that want to build a culture of autonomy and performance need to avoid them like the plague. ... There are times in history when this has happened to entire societies. When the leaders of the French Revolution abolished the laws of the "Ancien Regime", the result was terror. When Russia descended into chaos after the revolution of 1917, the result was civil war and the emergence of a tyrant, Stalin, who began a sustained terror of his own. When the Weimar Republic in Germany failed in the 1920s, the result was Hitler. In our own time, as social structures weaken, strongmen like Putin or Erdogan come to power and impose personal rules of their own. Societies which abolish laws become chaotic. In chaos, there is absolute freedom. As the philosopher Hegel observed, absolute freedom is not freedom at all, but a playground for the arbitrary exercise of power, which ends in terror. In terror, only a few are free, and many are slaves.



Quote for the day:


"To do great things is difficult; but to command great things is more difficult." -- Friedrich Nietzsche


Daily Tech Digest - December 08, 2019

Machine learning use cases

Machine learning has many potential uses, including external (client-facing) applications like customer service, product recommendation, and pricing forecasts, but it is also being used internally to help speed up processes or improve products that were previously manual and time-consuming. You’ll notice these two types throughout our list of machine learning use cases below. ... This consumer-based use for machine learning applies mostly to smartphones and smart home devices. The voice assistants on these devices use machine learning to understand what you say and craft a response. The machine learning models behind voice assistants were trained on human languages and variations in the human voice, because they have to translate what they hear into words and then make an intelligent, on-topic response. ... This machine-based pricing strategy is best known in the travel industry. Flights, hotels, and other travel bookings usually have a dynamic pricing strategy behind them. Consumers know that the sooner they book their trip the better, but they may not realize that the actual price changes are made via machine learning.
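As a toy illustration of dynamic pricing, a model fit to historical bookings can map days-until-departure to a fare. The data and the plain least-squares line fit below are entirely hypothetical; real systems use far richer features and models:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b   # intercept, slope

# Hypothetical history: (days until departure, fare paid).
days  = [60, 45, 30, 14, 7, 1]
fares = [120, 130, 150, 190, 240, 320]
a, b = fit_line(days, fares)

def quote(days_out):
    """Price a booking made `days_out` days before departure."""
    return round(a + b * days_out, 2)

assert quote(3) > quote(30)   # booking later costs more
```

The fitted slope is negative (fares fall as days-until-departure grows), which is exactly the "book sooner, pay less" effect consumers notice.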



Agile vs DevOps Infographic: Everything You Need To Know

Agile methodology has been widely used by enterprises all across the globe. Software development teams have been using Agile for over a decade now because it provides efficient methods and techniques to build software. Agile methodology is centered around the idea of continuous iteration of development and testing in the software development lifecycle (SDLC). It focuses on iterative, incremental, and evolutionary software development. Agile methodology enables cross-functional teams to collaborate to deliver value faster, with greater flexibility, quality, and predictability. ... DevOps is a way of deploying applications to production. It is a deployment model that emphasizes integration, communication, and collaboration among development and operations teams to enable rapid deployments of software. DevOps focuses on allowing teams to deploy code faster to the production environment, using automated tools and processes. Automation is a critical element of DevOps that helps organizations deliver applications and services rapidly.


Machine Learning as a Service (MLaaS) is the Next Trend No One is Talking About


Cloud service providers have the ability to ease the application of machine learning into everyday business use. Amazon, Google, and Microsoft all have preliminary services that enable machine learning functions. Speech recognition, sentiment analysis, chatbot enhancement, image and video analysis, and classification and regression services are some of the assistive solutions that are currently provided (“Comparing Machine Learning as a Service: Amazon, Microsoft Azure, Google Cloud AI, IBM Watson.”). As the application of machine learning becomes more valuable, the tech giants will continue to invest in building on top of their machine learning as a service (MLaaS) offerings. By utilizing these services from a cloud provider, companies can expect to save time, money, and resources that would have been invested into creating their own in house solutions. By choosing to use MLaaS, companies can be quicker to market and engage the latest developments in the space, without taking on an extraordinary amount of risk.


4 Ways to Successfully Scale AI and Machine Learning for Businesses

Although it’s apparent that there is a shortage of data science talent on the job market, and hiring for this type of role can be challenging, AI and ML success requires much more than the skills of a data scientist. I’m talking about model building, data prep, training, and inference. If you’re serious about scaling and reaping the benefits that AI and ML have to offer, you should be looking to work with ML architects, data engineers, and operations managers. This piece goes into much more detail about how to structure your data science team. The next challenge is to organize and scale your team effectively. Do you have staff trained with the necessary skills in-house to move this project from concept to completion? Do you build these skills through retraining and hiring? Or will you contract a team to help complete this project in a pre-determined amount of time? Building up your current team’s skillsets will help you scale on a long-term basis, whereas third-party contractors will help get your project off the ground with speed and efficiency.


A WebSocket is a bi-directional computing communication channel between a client and a server, which is great when communication needs to be low-latency and high-frequency. WebSockets are mainly used in collaborative, event-driven, or real-time apps, where the speed of the conventional client-server request-response model doesn’t meet the requirements. Examples include team dashboards and stock trading applications. ... STOMP (Simple Text Oriented Messaging Protocol) was born as an alternative to existing open messaging protocols, like AMQP, for connecting to enterprise message brokers from scripting languages like Ruby, Python, and Perl using a subset of common messaging operations. ... Happily for Java developers, Spring supports the WebSocket API, which implements raw WebSockets, WebSocket emulation through SockJS (when WebSockets are not supported), and publish-subscribe messaging through STOMP. In this tutorial, you will learn how to use the WebSockets API and configure a Spring Boot message broker. Then we will authenticate a JavaScript STOMP client during the WebSocket handshake and implement Okta as an authentication and access token service.
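Because STOMP is a text-oriented protocol, its framing is simple enough to show in a few lines. The sketch below is plain Python, independent of Spring or any broker, and the destination name is made up for illustration; it builds a SEND frame by hand: a command line, headers, a blank line, the body, and a NUL terminator.

```python
def stomp_send_frame(destination, body):
    """Build a minimal STOMP SEND frame: command line, header lines,
    a blank line, the UTF-8 body, and a trailing NUL byte."""
    payload = body.encode("utf-8")
    headers = {
        "destination": destination,
        "content-type": "text/plain",
        "content-length": str(len(payload)),
    }
    header_lines = "".join(f"{k}:{v}\n" for k, v in headers.items())
    return f"SEND\n{header_lines}\n".encode("utf-8") + payload + b"\x00"

# Hypothetical stock-quote destination, echoing the article's example apps
frame = stomp_send_frame("/topic/quotes", "AAPL 270.01")
print(frame)
```

In a real application a STOMP client library (or Spring's broker support) produces and parses these frames for you; the point here is only that the wire format is human-readable text.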


Processing Geospatial Data at Scale With Databricks

The first challenge involves dealing with scale in streaming and batch applications. The sheer proliferation of geospatial data and the SLAs required by applications overwhelm traditional storage and processing systems. Customer data has been spilling out of existing vertically scaled geo databases into data lakes for many years now due to pressures such as data volume, velocity, storage cost, and strict schema-on-write enforcement. While enterprises have invested in geospatial data, few have the proper technology architecture to prepare these large, complex datasets for downstream analytics. Further, given that scaled data is often required for advanced use cases, the majority of AI-driven initiatives are failing to make it from pilot to production. ... Databricks offers a unified data analytics platform for big data analytics and machine learning used by thousands of customers worldwide. It is powered by Apache Spark™, Delta Lake, and MLflow, with a wide ecosystem of third-party integrations and available libraries. Databricks UDAP delivers enterprise-grade security, support, reliability, and performance at scale for production workloads.
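For a concrete taste of the kind of computation such geospatial pipelines run, here is a pure-Python sketch of the haversine great-circle distance, a staple geospatial operation; at the scale discussed above this logic would typically be vectorized or registered as a Spark UDF over a DataFrame rather than called point by point. The coordinates are just illustrative.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Example: London to Paris, roughly 340 km as the crow flies
d = haversine_km(51.5074, -0.1278, 48.8566, 2.3522)
print(round(d), "km")
```

In a Spark setting the same function could be wrapped with `pyspark.sql.functions.udf` and applied to latitude/longitude columns across billions of rows.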


QR code scams rise in China, putting e-payment security in spotlight

QR codes, originally invented by Denso, a Japanese car parts maker, have become so ubiquitous in China that even street hawkers now use them for electronic payments, as seen here in Beijing. Photo: Weibo
The suspect had replaced legitimate codes created by merchants with fake ones embedded with a virus programmed to steal the personal information of consumers. Scammers have also been profiting handsomely from the mainland’s multibillion-dollar bike-sharing industry. By replacing the original QR code used to unlock a bicycle with a fake one, they have been able to trick users into transferring money into the scammers’ bank accounts. The proliferation of this type of crime has been made possible by the explosion of mobile payments in China, as the concept of a cashless society moves ever closer to becoming a reality. Nowhere is this shift more evident than in the abundance of QR codes – a type of barcode, or machine-readable image – that allow consumers to make small payments by simply scanning the image and confirming the transaction. QR codes were invented in 1994 by Denso Wave, a unit of Japan’s largest automotive parts maker, to allow for quick scanning when tracking vehicles during the assembly process. From the car factory, the codes later spread to broader usage, encompassing everything from consumer purchases to social media.


The hidden risks of cryptojacking attacks

The rise of cryptojacking has followed the same upward trajectory as the value of cryptocurrency. Suddenly, digital “cash” is worth actual money and hackers, who usually have to take several steps to generate income from stolen data, have a direct path to cashing in on their exploits. But if all the malware does is sit quietly in the background generating cryptocurrency, is it really much of a danger? In short, yes – for two reasons. In fundamental terms, cryptojacking attacks are about stealing… in this case energy and system resources. The energy might be minimal (more about that in a moment) but using resources slows the performance of the overall system and actually increases wear and tear on the hardware, reducing its lifespan, resulting in frustration, inefficiency and increased costs. Much more importantly however, a cryptojacking-compromised system is a flashing warning sign that a vulnerability exists. Often, infiltrating a system to cryptojack involves opening access points that can be easily leveraged to steal other types of data.


Developing Deep Learning Models for Chest X-rays with Adjudicated Image Labels


For very large datasets consisting of hundreds of thousands of images, such as those needed to train highly accurate deep learning models, it is impractical to manually assign image labels. As such, we developed a separate, text-based deep learning model to extract image labels using the de-identified radiology reports associated with each X-ray. This NLP model was then applied to provide labels for over 560,000 images from the Apollo Hospitals dataset used for training the computer vision models. To reduce noise from any errors introduced by the text-based label extraction and also to provide the relevant labels for a substantial number of the ChestX-ray14 images, approximately 37,000 images across the two datasets were visually reviewed by radiologists. These were separate from the NLP-based labels and helped to ensure high quality labels across such a large, diverse set of training images.
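The study's label extractor is a deep learning NLP model; purely as an illustration of the general idea of deriving image labels from report text, here is a deliberately simple keyword-based sketch. The finding keywords and the negation rule are invented for this example and are far cruder than the actual model.

```python
import re

# Hypothetical mapping from label to report keywords -- not the study's labels
FINDINGS = {
    "pneumothorax": ["pneumothorax"],
    "opacity": ["opacity", "opacities", "consolidation"],
    "fracture": ["fracture", "fractured"],
}

def extract_labels(report):
    """Return the set of finding labels mentioned (non-negated) in a report.

    A keyword preceded by 'no' or 'without' (optionally with one word in
    between, e.g. 'no rib fracture') is treated as negated and skipped.
    """
    text = report.lower()
    labels = set()
    for label, keywords in FINDINGS.items():
        for kw in keywords:
            if re.search(rf"\b(?:no|without)\s+(?:\w+\s+)?{kw}\b", text):
                continue  # negated mention; ignore this keyword
            if re.search(rf"\b{kw}\b", text):
                labels.add(label)
    return labels

print(extract_labels("Small left pneumothorax. No rib fracture seen."))
```

Real clinical NLP must handle far richer negation, uncertainty, and anatomy context than this, which is exactly why the study trained a model rather than matching keywords.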


Injection Vulnerabilities – 20 Years and Counting

Lack of awareness: it’s common to see junior software engineers writing code that is vulnerable. The “injection” concept may not be very intuitive for them, and they deliver vulnerable code because it’s the easiest and fastest way for them to implement a specific component. Rush: we all know how stressful and demanding modern software development environments can be. Concepts like Agile and CI/CD are great for fast delivery, but when developers are focused only on delivering the code, they might forget to check for security issues. Complexity: APIs and modern apps are complex. A modern app, like Uber for example, might look very simple from the UX (user experience) perspective, but on the backend there are many databases and microservices that communicate with each other behind the scenes. In many cases, it’s hard to track which inputs come from the client itself and require more security attention (such as filtering and scanning), and which inputs are internal to the system.
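The classic defense against SQL injection, the archetypal injection flaw, is to stop concatenating user input into query strings and use parameterized queries instead. Here is a small sketch using Python's built-in sqlite3 module (the table and payload are invented for illustration); the same prepared-statement principle applies in any language or database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "bob' OR '1'='1"  # a classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query logic.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query treats the payload as an opaque literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # the OR '1'='1' clause leaks every row
print(safe)        # no user has that literal name, so no rows match
```

Parameterization neutralizes the payload regardless of developer awareness or deadline pressure, which is why it is the recommended default rather than input filtering alone.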



Quote for the day:


"Increasingly, management's role is not to organize work, but to direct passion and purpose." -- Greg Satell