Daily Tech Digest - July 09, 2020

Diversity in tech: 3 stories of perseverance and success

It is easy to fall into comfortable patterns. We train for sports by developing muscle memory, using repetition to ingrain patterns in our brains. It takes an average of 66 days for a behavior to become a habit, and it can require 10 times the effort. Simply stated, hard work and dedication are the foundations for learning, whether learning a new language, improving your golf swing, or rethinking workforce demographics. Organizations are especially resistant to change, requiring cross-organizational commitment and a compelling business imperative. An uncompromising focus on change must cascade throughout an organization and be measured, managed, and reinforced. This resistance to change may explain, at least in part, why the underrepresentation of people of color in technology companies has shown little improvement since 2014. Ideally, the representation of Black people in technology should reflect the overall population, but it does not. According to the Census Bureau, Black people make up 13.4% of the U.S. population but account for only 5% of the workforce at technology companies, with women of color representing even less at 1%.

Pen Testing ROI: How to Communicate the Value of Security Testing

Defining the ROI of pen testing has its nuances, as there are seemingly no tangible results that come directly from the investment. When implementing a pen-testing strategy, you're actively avoiding a breach that could cost your organization money. But the cost of a breach is the most obvious data point for measuring ROI, and those estimates vary widely. My advice? Work toward maturing your security program to a point where the engagement with pen testers is focused on ensuring the effectiveness of existing controls and security touchpoints in your development life cycle — not solely to check a compliance box or single-handedly prevent a breach. Leveraging pen testing throughout the development life cycle can help identify issues in development before deployment rather than the costly discovery of vulnerabilities at a later date. Also, identify metrics, not measurements: business decisions are often made using measurements instead of metrics, but in most cases, driving decisions based on measurements (or raw data) can be misleading and lead business leaders to focus time, effort, and budget on the wrong activities.

How to build a data architecture to drive innovation—today and tomorrow

To scale applications, companies often need to push well beyond the boundaries of legacy data ecosystems from large solution vendors. Many are now moving toward a highly modular data architecture that uses best-of-breed and, frequently, open-source components that can be replaced with new technologies as needed without affecting other parts of the data architecture. One utility-services company is transitioning to this approach to rapidly deliver new, data-heavy digital services to millions of customers and to connect cloud-based applications at scale. For example, it offers accurate daily views on customer energy consumption and real-time analytics insights comparing individual consumption with peer groups. The company set up an independent data layer that includes both commercial databases and open-source components. Data is synced with back-end systems via a proprietary enterprise service bus, and microservices hosted in containers run business logic on the data. ... Exposing data via APIs can ensure that direct access to view and modify data is limited and secure, while simultaneously offering faster, up-to-date access to common data sets.

Software Techniques for Lemmings

The performance of a system with thousands of threads will be far from satisfying. Threads take time to create and schedule, and their stacks consume a lot of memory unless their sizes are engineered, which won't be the case in a system that spawns them mindlessly. We have a little job to do? Let's fork a thread, call join, and let it do the work. This was popular enough before the advent of <thread> in C++11, but <thread> did nothing to temper it. I don't see <thread> as being useful for anything other than toy systems, though it could be used as a base class to which many other capabilities would then be added. Even apart from these Thread Per Whatever designs, some systems overuse threads because it's their only encapsulation mechanism. They're not very object-oriented and lack anything that resembles an application framework. So each developer creates their own little world by writing a new thread to perform a new function. The main reason for writing a new thread should be to avoid complicating the thread loop of an existing thread. Thread loops should be easy to understand, and a thread shouldn't try to handle various types of work that force it to multitask and prioritize them, effectively acting as a scheduler itself.
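
A minimal Python sketch of the pattern the author recommends: one long-lived worker running a simple thread loop that pulls jobs from a queue, instead of forking and joining a fresh thread for every small job. (The article's context is C++ <thread>; this Python version is only meant to illustrate the shape of a thread loop.)

```python
import queue
import threading

# One long-lived worker with a simple thread loop, rather than a
# "Thread Per Whatever" design that spawns a thread per job.
task_queue = queue.Queue()

def worker_loop():
    while True:
        job = task_queue.get()
        if job is None:              # sentinel: ask the loop to exit
            task_queue.task_done()
            break
        job()                        # do one unit of work, then loop
        task_queue.task_done()

worker = threading.Thread(target=worker_loop, daemon=True)
worker.start()

results = []
for i in range(3):
    task_queue.put(lambda i=i: results.append(i * i))

task_queue.put(None)                 # shut the loop down
task_queue.join()                    # wait until every job is processed
worker.join()
print(results)                       # [0, 1, 4]
```

The loop stays easy to understand because it handles one kind of work; a job type that needed its own prioritization would justify a second worker rather than turning this loop into a scheduler.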

Cloud Security Mistakes Which Everyone Should Avoid

The cloud can be accessed virtually by anyone possessing the proper credentials, which makes it convenient and vulnerable at the same time. Unlike physical servers, which limit the number of admin users and have stricter access permissions, cloud servers can never provide that level of security. That’s why many small business owners around the world still choose web hosting services that operate on physical servers, especially since you’re able to have a whole server just for your website if you choose a dedicated hosting plan. Virtual servers are much easier to access, and their access permissions can sometimes be misused. Controlling access to data kept in the cloud is a tricky balancing act between giving people access to the tools they require to get the job done and protecting that data from getting into the wrong hands. Efficiently managing the data requires a comprehensive policy that not only controls who can access what data and from where, but also involves monitoring who accesses data, when, and from where, to detect potential breaches or any inappropriate access. Therefore, it is vital to educate employees on how to secure their cloud sessions, including avoiding public networks and practicing effective password management.

The Modern Hybrid App Developer

One of the most frustrating parts about building apps is the massive headache of releasing and waiting for new updates in the app stores. Because hybrid app developers build a big chunk of their app using web technology, they are able to update their app’s logic and UI in real time whenever they want, in a way that is allowed by Apple and Google because it’s not making binary changes (as long as those updates continue to follow other ToS guidelines). Using a service like Appflow, developers can set up their native Capacitor or Cordova apps to pull in real-time updates across a variety of deployment channels (or environments), and even further customize different versions of their app for different users. Teams use this to fix bugs in their production apps, run A/B tests, manage beta channels, and more. Some services, like Appflow, even support deploying directly to the Apple and Google Play stores, so teams can automate both binary and web updates. This is a major superpower that hybrid app developers have today that native developers do not!

HSBC customers targeted in new smishing scam

The text phishing, or smishing, campaign begins with a text message purporting to come from HSBC, informing its target that “a new payment has been made” through the HSBC app on their smartphone. Targets are informed that if they were not responsible for this payment, they should visit a website to validate their bank account. To the untrained eye, the website link – security.hsbc.confirm-systems.com – could conceivably be legitimate, but obviously should on no account be opened. Victims will then be directed to a fake landing page and asked to input their username and password, along with a series of verification steps, on a fraudulent website that uses HSBC branding. The site will also try to harvest specific account details and other personally identifiable financial information (PIFI) from its targets. Griffin Law, which works with a number of accountancy groups and financial support teams in the London area, said it had seen a clear spike in reports of the scam, with almost 50 of its customers telling it they had received the smish so far. A number of them said they did not have any HSBC apps installed on their devices, which suggests the scam is quite indiscriminate in its targeting.

Card Skimmer Found Hitting Vulnerable E-Commerce Sites

Despite the large pool of potential targets, Malwarebytes has only been able to identify a few victims. "We found over a dozen websites that range from sports organizations, health, and community associations to (oddly enough) a credit union. They have been compromised with malicious code injected into one of their existing JavaScript libraries," Segura says. Some historical evidence of other victims who have been hit in the past was uncovered as part of his research, he says, but they have since been remediated. The total number of victims is not available. The skimmer steals payment card numbers and also tries to swipe passwords, although the latter activity is not correctly implemented and does not always work, according to Malwarebytes. Segura says the skimmer is not that different from others currently operating in how it collects and exfiltrates data. The novelty is that it was only found on ASP.NET websites. "The skimmer is embedded in an existing JavaScript library used by a victim site. There are variations on how the code is structured but overall, it performs the same action of contacting remote domains belonging to the threat actor," Segura says.

MongoDB is subject to continual attacks when exposed to the internet

After seeing how consistently database breaches were occurring, Intruder planted honeypots to find out how these attacks happen, where the threats are coming from, and how quickly they take place. Intruder set up a number of unsecured MongoDB honeypots across the web, each filled with fake data. The network traffic was monitored for malicious activity, and if password hashes were exfiltrated and seen crossing the wire, this would indicate that a database had been breached. The research shows that MongoDB is subject to continual attacks when exposed to the internet. Attacks are carried out automatically and indiscriminately, and on average an unsecured database is compromised less than 24 hours after going online. ... Attacks originated from locations all over the globe, though attackers routinely hide their true location, so there’s often no way to tell where attacks are really coming from. The fastest breach came from an attacker from Russian ISP Skynet, and over half of the breaches originated from IP addresses owned by a Romanian VPS provider.

How data is fundamental to manufacturing’s digital transformation

The key to creating and deploying an effective data strategy comes down to three factors: sponsorship, a standardised platform and robust governance. Sponsorship is vital, according to Greg Hanson, particularly in larger organisations where buy-in can be more difficult to achieve. “Additionally, the successful deployment of that strategy requires engagement with the organisation as a whole, and a cultural acceptance of responsibility regarding data given GDPR and privacy laws,” he added. Helping to drive this combination of board-level sponsorship and enterprise-wide engagement are Chief Data Officers, newly-created executive roles tasked with deploying and monitoring the effectiveness of data strategies and the adoption of modern, cloud-based architectures – the foundation of many industrial digital transformation initiatives. “There are so many technologies readily available in the cloud space now that companies face the risk of ‘cloud sprawl’ which degrades the impact of their digital transformation and data management,” Hanson continued.

Quote for the day:

"Leadership occurs any time you attempt to influence the thinking, development or beliefs of somebody else." -- Dr. Ken Blanchard

Daily Tech Digest - July 08, 2020

Why Are Real IT Cyber Security Improvements So Hard to Achieve?

It’s easy to point fingers in various directions to try to explain why we have done such a poor job of improving IT security over the years. Unfortunately, most of the places at which blame is typically directed bear limited, if any, responsibility for our lack of security. It’s hard to deny that software is more complex today than it was 10 or 20 years ago. The cloud, distributed infrastructure, microservices, containers and the like have led to software environments that change faster and involve more moving pieces. It’s reasonable to argue that this added complexity has made modern environments more difficult to secure. There may be some truth to this. But, on the flipside, you have to remember that the complexity brings new security benefits, too. In theory, distributed architectures, microservices and other modern models make it easier to isolate or segment workloads in ways that should mitigate the impact of a breach. Thus, I think it’s simplistic to say that the reason IT cyber security remains so poor is that software has grown more complex, and that security strategies and tools have not kept pace. You could argue just as plausibly that modern architectures should have improved security.

Facebook is recycling heat from its data centers to warm up these homes

The tech giant stressed that the heat distribution system it has developed uses exclusively renewable energy. The data center is entirely supplied by wind power, and Fjernvarme Fyn's facility only uses pumps and coils to transfer the heat. As a result, the project is expected to reduce Odense's demand for coal by up to 25%. Although Facebook is keen to use the heat recovery system in other locations, the company didn't reveal any plans to export the technology just yet. "Our ability to do heat recovery depends on a number of factors, so we will evaluate them first," said Edelman. For example, the proximity of the data center to the community it can provide heat for will be a key criterion to consider. Improving data centers' green credentials has been a priority for technology companies as of late. Google recently showcased a new tool that can match the timing of some compute tasks in data centers to the availability of lower-carbon energy. The platform can shift non-urgent workloads to times of the day when wind or solar sources of energy are more plentiful. The search giant is aiming for "24x7 carbon-free energy" in all of its data centers, which means constantly matching facilities with sources of carbon-free power.

Understanding When to Use a Test Tool vs. a Test System

A system is a group of parts that interact in concert to form a unified whole. A system has an identifiable purpose. For example, the purpose of a school system is to educate students. The purpose of a manufacturing system is to produce one or many end products. In turn, the purpose of a testing system is to ensure that features and functions within the scope of the software's entire domain operate to specified expectations. Typically a testing system is made of parts that test specific aspects of the software under consideration. However, unlike a testing tool, which is limited in scope, a testing system encompasses all the testing that takes place within the SDLC. Thus a testing system needs to support all aspects of software testing throughout the SDLC in terms of execution, data collection, and reporting. First and foremost, a testing system needs to be able to control testing workflows. This means that the system can execute tests according to a set of predefined events: for example, when new code is committed to a source control repository, or when a new or updated component is ready to be added to an existing application.

Wi-Fi 6E: When it’s coming and what it’s good for

There’s so much confusion around all the 666 numbers, it’ll scare you to death. You’ve got Wi-Fi 6, Wi-Fi 6E – and Wi-Fi 6 still has additional enhancements coming after that, with multi-user multiple input, multiple output (multi-user MIMO) functionalities. Then there’s the 6GHz spectrum, but that’s not where Wi-Fi 6 gets its name from: It’s the sixth generation of Wi-Fi. On top of all that, we are just getting a handle on 5G and they’re already talking about 6G – seriously, look it up – it’s going to get even more confusing. ... The last time we got a boost in UNII-2 and UNII-2 Extended was 15 years ago, and smartphones hadn’t even taken off yet. Now, being able to get 1.2GHz of new spectrum is enormous. With Wi-Fi 6E, we’re not just doubling the amount of Wi-Fi space, we’re actually quadrupling the amount of usable space. That’s three, four, or five times more spectrum, depending on where you are in the world. Plus you don’t have to worry about DFS [dynamic frequency selection], especially indoors. Wi-Fi 6E is not going to be faster than Wi-Fi 6, and it’s not adding enhanced technology features. The neat thing is that operating in the 6GHz band will require Wi-Fi 6 or above clients. So, we’re not going to have any slow clients and we’re not going to have a lot of noise.

AI Tracks Seizures In Real Time

In brain science, the current understanding of most seizures is that they occur when normal brain activity is interrupted by a strong, sudden hyper-synchronized firing of a cluster of neurons. During a seizure, if a person is hooked up to an electroencephalograph—a device known as an EEG that measures electrical output—the abnormal brain activity is presented as amplified spike-and-wave discharges. “But the seizure detection accuracy is not that good when temporal EEG signals are used,” Bomela says. The team developed a network inference technique to facilitate detection of a seizure and pinpoint its location with improved accuracy. During an EEG session, a person has electrodes attached to different spots on their head, each recording electrical activity around that spot. “We treated EEG electrodes as nodes of a network. Using the recordings (time-series data) from each node, we developed a data-driven approach to infer time-varying connections in the network or relationships between nodes,” Bomela says. Instead of looking solely at the EEG data—the peaks and strengths of individual signals—the network technique considers relationships.
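
The network idea can be sketched in a few lines: treat each electrode as a node and infer time-varying edges from windowed pairwise statistics of the recordings. The snippet below uses plain correlation on synthetic data purely as a stand-in; the team's actual data-driven inference method is more sophisticated than this.

```python
import numpy as np

# Hypothetical illustration: electrodes are nodes, and each time window
# yields an adjacency matrix of pairwise relationships between channels.
rng = np.random.default_rng(0)
n_channels, n_samples, window = 4, 1000, 250

eeg = rng.standard_normal((n_channels, n_samples))
eeg[1] = 0.8 * eeg[0] + 0.2 * eeg[1]     # make channels 0 and 1 co-vary

networks = []
for start in range(0, n_samples - window + 1, window):
    segment = eeg[:, start:start + window]
    adj = np.corrcoef(segment)           # n_channels x n_channels matrix
    np.fill_diagonal(adj, 0.0)           # ignore self-connections
    networks.append(adj)                 # one network per time window

# The strongest edge in the first window should link channels 0 and 1.
strongest = sorted(int(i) for i in
                   np.unravel_index(np.abs(networks[0]).argmax(),
                                    networks[0].shape))
print(len(networks), strongest)          # 4 [0, 1]
```

Tracking how these adjacency matrices change from window to window, rather than the raw peaks of individual signals, is the gist of looking at relationships instead of solely at the EEG data.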

How to Calculate ROI on Infrastructure Automation

The equation is simple. You have a long, manual process. You figure out a way to automate it. Ta-da! What once took two hours now takes two minutes. And you save a sweet 118 minutes. If you run this lovely piece of automation very frequently, the value is multiplied. Saving 118 minutes 10 times a day is very significant. Like magic. ... Back to the value formula. In real life, there are more facets to this formula. One of the factors that affect the value you get from automation is how many people have access to it. You can automate something that can potentially run 2,000 times a day, every day; this could be a game-changer in terms of value. But if this is something that 2,000 different people need to do, there is also the question of how accessible your automation is. Getting other people to run your automation smoothly is not always a piece of cake (“What’s your problem?! It’s in git! Yes, you just get it from there. I’ll send you the link. You don’t have a user? Get a user! You can’t run it? Of course, you can’t, you need a runtime. Just get the runtime. It’s all in the readme! Oh, wait, the version is not in the readme. Get 3.0, it only works with 3.0. Oh, and you edited the config file, right?”).
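
The basic arithmetic from the article's example, spelled out:

```python
# A 2-hour manual task reduced to 2 minutes, run 10 times a day.
manual_minutes = 120
automated_minutes = 2
runs_per_day = 10

saved_per_run = manual_minutes - automated_minutes   # 118 minutes
saved_per_day = saved_per_run * runs_per_day         # 1180 minutes
saved_hours_per_day = saved_per_day / 60

print(saved_per_run, saved_per_day, round(saved_hours_per_day, 1))
# 118 1180 19.7
```

Nearly 20 person-hours reclaimed per day from one automation, before accounting for the accessibility factors discussed above.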

The most in-demand IT staff companies want to hire

Companies want people who are good communicators and who will be proactive--for example, quickly addressing a support ticket that comes in in the morning, so users don't have to wait, Wallenberg added. In terms of security hiring trends, "there have always been really brilliant people who can sell the need for security to the business,'' and that is needed now more than ever in IT, he said. "In a perfect world, it shouldn't have taken high-profile breaches of personal and identifiable information for companies to wake up and say we need to invest more money in it. So security leadership and, further down the pole, they have to sell their vision on steps they need to take to more systematically ensure systems are safe and companies are protected from threats." Because of the current climate, it is also critical that companies are prepared to handle remote onboarding of new tech team members, Wallenberg said. "Companies that adopted a cloud-first strategy years ago are in a much better position to onboard [new staff] than people who need an office network to connect,'' he said. 

An enterprise architect's guide to the data modeling process

Conceptual modeling in the process is normally based on the relationship between application components. The model assigns a set of properties for each component, which will then define the data relationships. These components can include things like organizations, people, facilities, products and application services. The definitions of these components should identify business relationships. For example, a product ships from a warehouse, and then to a retail store. An effective conceptual data model diligently traces the flow of these goods, orders and payments between the various software systems the company uses. Conceptual models are sometimes translated directly into physical database models. However, when data structures are complex, it's worth creating a logical model that sits in between. It populates the conceptual model with the specific parametric data that will, eventually, become the physical model. In the logical modeling step, create unique identifiers that define each component's property and the scope of the data fields.
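
A toy sketch of the conceptual-to-logical step (all names here are invented for illustration): the conceptual level records components and their business relationships, while the logical level adds unique identifiers and typed fields before any physical schema is chosen.

```python
from dataclasses import dataclass, field
import itertools

# Conceptual level: components and the business relationship
# "a product ships from a warehouse, then to a retail store".
conceptual = {
    "components": ["Product", "Warehouse", "RetailStore"],
    "relationships": [("Product", "ships_from", "Warehouse"),
                      ("Product", "ships_to", "RetailStore")],
}

# Logical level: each component gains a unique identifier and typed
# fields; relationships become references between components.
_ids = itertools.count(1)

@dataclass
class Product:
    sku: str                  # business-unique identifier for the component
    name: str
    warehouse_id: int         # reference realizing the "ships_from" relationship
    id: int = field(default_factory=lambda: next(_ids))  # surrogate key

p = Product(sku="SKU-001", name="Widget", warehouse_id=7)
print(p.id, p.sku)            # 1 SKU-001
```

Only at the physical-model stage would these fields be mapped to concrete tables, column types, and indexes in a particular DBMS.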

Microsoft's ZeRO-2 Speeds up AI Training 10x

Recent trends in NLP research have seen improved accuracy from larger models trained on larger datasets. OpenAI have proposed a set of "scaling laws" showing that model accuracy has a power-law relation with model size, and recently tested this idea by creating the GPT-3 model which has 175 billion parameters. Because these models are simply too large to fit in the memory of a single GPU, training them requires a cluster of machines and model-parallel training techniques that distribute the parameters across the cluster. There are several open-source frameworks available that implement efficient model parallelism, including GPipe and NVIDIA's Megatron, but these have sub-linear speedup due to the overhead of communication between cluster nodes, and using the frameworks often requires model refactoring. ZeRO-2 reduces the memory needed for training using three strategies: reducing model state memory requirements, offloading layer activations to the CPU, and reducing memory fragmentation. 

The unexpected future of medicine

Along with robots, drones are being enlisted as a way of stopping the person-to-person spread of coronavirus. Deliveries made by drone rather than by truck, for example, remove the need for a human driver who may inadvertently spread the virus. A number of governments have already drafted drones in to help with distributing PPE to hospitals in need of kit: in the UK, a trial of drones taking equipment from Hampshire to the Isle of Wight was brought forward following the COVID-19 outbreak. In Ghana, drones have also been put to work collecting patient samples for coronavirus testing, bringing samples from rural areas to hospitals in more populous regions. Meanwhile, in several countries, drones are also being used to drop off medicine to people in remote communities or those who are sheltering in place. Drones have also been used to disinfect outdoor markets and other areas to slow the spread of the disease. And in South Korea, drones have been drafted in to celebrate healthcare workers and spread public health messages, such as reminding people to continue wearing masks and washing their hands.

Quote for the day:

"Leaders must encourage their organizations to dance to forms of music yet to be heard." -- Warren G. Bennis

Daily Tech Digest - July 07, 2020

Taking Steps To Boost Automated Cloud Governance

Lippis says cloud providers often talk about a shared responsibility model where the users take active roles in the process. The trouble is that the feedback and communication organizations receive is not always clear. He compared cloud providers to landlords who maintain and upgrade apartment buildings, with the users as the tenants. Updating the property is the landlord’s responsibility. However, some cloud providers do not always provide much information about what is being changed and upgraded, Lippis says. Such breakdowns in communication and control could throw enterprises out of compliance, he says, which might not be known until an audit is conducted. There is a need for better transparency, Lippis says, so organizations know what is happening when changes are made or events occur. This can be of particular concern when organizations adopt multicloud approaches, matching workloads to different cloud providers. Security questions may arise because each cloud provider might communicate information to users in varied ways. “It could be the same kind of event, but they’re all coded differently,” Lippis says. “The syntax is different.”

Applying the 80-20 Rule to Cybersecurity

According to Mike Gentile — president and CEO at CISOSHARE and someone who has worked as a chief information security officer for many years — a lot has changed in the security space by 2020, but two things remain the same. First, senior executives don't prioritize cybersecurity enough for security programs to be fully effective. Second, the reason for that is not that executives don't care — they do, and they don't want their name in the headlines after a breach — but that they lack a clear definition of security. Each organization's unique definition of security should be set forth in a security charter document, which prescribes a mission and mandate for the security program as well as governance structures and clarified roles and responsibilities. More specifically, the charter defines how and where the security organization reports and answers questions such as: Should the business have a CISO, and should the position report to IT or to the CEO? Typically, a consultant's answer would be "It depends." But don't let that end the discussion: for any one business, there is one right answer.

Talking Digital Future: Artificial Intelligence

This topic is especially cool in the healthcare domain. Think about how medicine works today. Medical practitioners go to school for many, many years, memorize a lot of information, then treat patients, get experience, and over the span of their career, become quite good at what they do. However, they are ultimately subject to the weaknesses of their own mortal existence. They can forget things; they can be absent-minded or, you know, just not connect the dots sometimes. Now, if we can equip a physician with a computer to improve memory, options and optimization, the tools and the ability to provide medical aid suddenly change. Let’s look at IBM’s AI initiative Watson combined with an oncologist treating a cancer patient, for example. Each patient is different, so the doctor wants to have as many details as possible about this type of cancer and the patient’s medical history to make the best treatment plan. An AI-augmented device produced for the doctor’s team could generate a scenario based on the data of every patient that has had this particular set of circumstances and that person’s characteristics.

How Agile Turns Risk Into Opportunity

Changing the way large numbers of people in a corporation think is a monumental undertaking. It doesn’t come easily or quickly. But what is the alternative? Firms not operating in this way have been struggling, even in normal times, and they are steadily going out of business, exactly as Nokia was forced out of the phone business despite its massive wealth and large market share. Nokia didn’t change in 2010 because of a crisis or because it wanted to: it had to change because its phone business was bankrupt, even though it had been the dominant phone firm in the whole wide world, only three years before. That kind of story is now playing out, in sector after sector all around the world. As a result, there is now huge interest, even in large corporations, to find out what’s involved and learn how to think differently. ... But today, for most people, these changes make life quicker, simpler, more convenient, and, let’s face it: generally better. And people have responded with their wallets. The firms that provide these services have earned their profits and their stratospheric valuations. They have changed our lives fundamentally.

With eCommerce on the rise, tokenization is the ticket to taming fraud

With more merchants and retailers adopting tokenization technology, Visa is scaling our credential-on-file tokenization efforts. Since our first merchant began processing card-on-file tokens in 2017, we have seen more than 13,000 merchants start transacting with Visa tokens. In addition to enhancing security, tokenization also helps reduce friction in the payment process, because customers do not have to manually update stored card information if their Visa card is lost, stolen or expires. Instead, financial institutions can automatically update expired or compromised payment credentials. This can reduce missed payments for merchants, and help consumers avoid unwanted late payment fees or charges. Looking ahead, we are unveiling Token ID, a new solution stemming from our acquisition of the Rambus Payments token services business that expands Visa’s tokenization across all global and domestic networks, as well as tokenizing beyond card use cases. In addition, we are looking for ways to centralize and simplify token management through integration with our CyberSource platform to help to secure customer payment data, improve payment conversions and ease PCI compliance implications. 
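
As a rough illustration of the idea (not Visa's actual token service, which adds cryptograms, domain restrictions, and lifecycle management well beyond this), a token vault maps a card number to an opaque token that is worthless outside the vault:

```python
import secrets

# Simplified, hypothetical token vault: the merchant stores only the
# token; the mapping back to the real card number (PAN) lives solely here.
class TokenVault:
    def __init__(self):
        self._pan_to_token = {}
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        if pan in self._pan_to_token:            # one stable token per card
            return self._pan_to_token[pan]
        token = "tok_" + secrets.token_hex(8)    # random, no relation to the PAN
        self._pan_to_token[pan] = token
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_pan[token]         # only the vault can reverse it

vault = TokenVault()
token = vault.tokenize("4111111111111111")
assert token == vault.tokenize("4111111111111111")   # stable card-on-file token
print(token.startswith("tok_"), vault.detokenize(token) == "4111111111111111")
# True True
```

Because the merchant stores only the token, the vault can swap in an updated card number behind it when a card is reissued, without the merchant changing anything; that is the friction reduction the article describes.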

Debunking the Myths about Artificial Intelligence

Organizations should not look for decades of experience in any given field of science if the entire organization is new to that field. Culture will eat those kinds of unconscious attempts. First, we need advocates to focus on people, character, and talent, not tech per se. Transformation starts at the individual level. You might respond that “speed is important,” but that urgency stems from the FOMO, organizational isomorphism, and speed hunger brought on by digital disruption. When organizations see the AI show-offs by disruptors, they impatiently read them as overnight successes or failures. Once you build the foundation for an appropriate digital culture, you can then pursue leaner, faster, better AI initiatives. Finally, among the 5W1H questions about AI, “Why” and “How” are critical, not “What.” We should not rush directly into learning the new digital technologies. Rather, we should focus on why and how those technologies popped up now, not a decade ago, though they had been in the literature for decades.

Remote workers aren't taking security seriously. Now that has to change

Darren Fields, VP of Networking EMEA at Citrix, told TechRepublic: "The rapid shift to working from home has created the conditions for shadow IT to become an increasingly important issue. Whilst it is understandable that employees needed to adapt quickly to new pressures and concerns, given the global pandemic, it is important that businesses tighten up on these procedures going forward in order to safeguard their organisation from external threats." Citrix isn't the only organization to have spotted this trend: a recent study from Trend Micro also found people showing a lax attitude to following their company's IT security policies, with 56% of respondents admitting to using a non-work application on a work device and a third of respondents saying they did not give much thought to whether the apps they use are approved by IT or not. Earlier research also commissioned by Citrix found that seven in 10 respondents were concerned about information security as a result of employees using shadow IT or unsanctioned software, with three in five seeing shadow IT as a significant risk to their organisation's data compliance.

Smarter spending can accelerate Covid-19 recovery and renewal

Decision makers must not fear spending unless it is done on the wrong things. Prioritise and accelerate income-generating activities, whilst carefully reassessing the risk of business activities that rely on consumer presence and human interaction, considering the safety of staff and customers. Business activities that aren’t delivering value, either as revenue or investment, should be deprioritised. ... Openly discuss emotions and their power to obstruct recovery. When problems arise, work through diagnostics calmly, utilising the information gathered to earn revenue in the new situations. Although we can’t use past data to predict the future with certainty, we can take advantage of early indicators of revenue recovery. Actively seek out more useful data, but be wary of confirmation bias — interpreting data as a validation of preconceived ideas. ... Confront preconceptions in a challenging market. Communicate a clearer business vision to overcome emotional reactions, adapting to find the right balance between positive affirmation and realistic expectations. Inform investors and suppliers of business expectations, building confidence that you’re best able to manage the risks through innovation.

How to select the right IoT database architecture

Static databases, also known as batch databases, manage data at rest. Data that users need to access resides as stored data managed by a database management system (DBMS). Users make queries and receive responses from the DBMS, which typically, but not always, uses SQL. A streaming database handles data in motion. Data constantly streams through the database, with a continuous series of posed queries, typically in a language specific to the streaming database. The streaming database's output may ultimately be stored elsewhere, such as in the cloud, and accessed via standard query mechanisms. Streaming databases are typically distributed to handle the scale and load requirements of vast volumes of data. Currently, there is a range of commercial, proprietary and open source streaming databases, including Google Cloud Dataflow, Microsoft StreamInsight, Azure Stream Analytics, IBM InfoSphere Streams and Amazon Kinesis. Open source systems are largely based around Apache and include Apache Spark Streaming provided by Databricks, Apache Flink provided by Data Artisans, Apache Kafka provided by Confluent and Apache Storm, which originated at Twitter.
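
The batch-versus-streaming distinction above can be sketched in plain Python. This is a conceptual illustration only (real streaming databases such as Kafka or Flink are distributed systems), but the query shapes are analogous: a static query runs once over data at rest, while a streaming query stands continuously against events in motion.

```python
def static_query(stored_rows, threshold):
    """Batch model: query data at rest and return a complete result set."""
    return [r for r in stored_rows if r["temp"] > threshold]

def streaming_query(event_stream, threshold):
    """Streaming model: a standing query evaluated against each event in motion."""
    for event in event_stream:
        if event["temp"] > threshold:
            yield event  # emit matches continuously, e.g. onward to cloud storage

# Hypothetical IoT sensor readings
readings = [{"id": 1, "temp": 20}, {"id": 2, "temp": 35}, {"id": 3, "temp": 31}]

batch_result = static_query(readings, 30)
stream_result = list(streaming_query(iter(readings), 30))
```

The same filter yields the same rows either way; what differs is the execution model — the streaming version can keep running against an unbounded event source.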

11 Patterns to Secure Microservice Architectures

Third-party dependencies make up 80% of the code you deploy to production. Many of the libraries we use to develop software depend on other libraries. Transitive dependencies lead to a large chain of dependencies, some of which might have security vulnerabilities. You can use a scanning program on your source code repository to identify vulnerable dependencies. You should scan for vulnerabilities in your deployment pipeline, in your primary line of code, in released versions of code, and in new code contributions. ... You should use HTTPS everywhere, even for static sites. If you have an HTTP connection, change it to an HTTPS one. Make sure all aspects of your workflow—from Maven repositories to XSDs—refer to HTTPS URIs. HTTPS runs HTTP over Transport Layer Security (TLS), which is designed to ensure privacy and data integrity between computer applications. How HTTPS Works is an excellent site for learning more about HTTPS. ... OAuth 2.0 has provided delegated authorization since 2012. OpenID Connect added federated identity on top of OAuth 2.0 in 2014. Together, they offer a standard spec you can write code against and have confidence that it will work across IdPs.
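
The "HTTPS everywhere" pattern above can be enforced mechanically. A minimal sketch, assuming hypothetical configuration URIs (the repository URL below is illustrative, not a real endpoint): scan configured URIs and rewrite any plain-HTTP scheme to HTTPS.

```python
def enforce_https(uri: str) -> str:
    """Rewrite an http:// URI to https://; leave other schemes untouched."""
    if uri.startswith("http://"):
        return "https://" + uri[len("http://"):]
    return uri

# Hypothetical workflow references (Maven repositories, XSDs, etc.)
config_uris = [
    "http://repo.example.com/maven2",
    "https://schemas.example.com/policy.xsd",
]
secured = [enforce_https(u) for u in config_uris]
```

A check like this fits naturally into a CI lint step, failing the build whenever a non-HTTPS URI slips into the configuration.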

Quote for the day:

"Your greatest area of leadership often comes out of your greatest area of pain and weakness." -- Wayde Goodall

Daily Tech Digest - July 06, 2020

Benefits of RPA: RPA Best Practices for successful digital transformation

A main benefit of RPA solutions is that they reduce human error while enabling employees to feel more human by engaging in conversations and assignments that are more complex but could also be more rewarding. For instance, instead of having a contact center associate enter information while also speaking with a customer, an RPA solution can automatically collect, upload, or sync data with other systems for the associate to approve while focusing on forming an emotional connection with the customer. Another impact of RPA is that it can facilitate and streamline employee onboarding and training. An RPA tool, for instance, can pre-populate forms with the new hire’s name, address, and other key data from the resume and job application form, saving the employee time. For training, RPA can conduct and capture data from training simulations, allowing a global organization to ensure all employees receive the same information in a customized and efficient manner. RPA is not for every department and it’s certainly not a panacea for retention and engagement problems. But by thinking carefully about the benefits that it offers to employees, RPA can transform workflows—making employees’ jobs less robotic and more rewarding.
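
The form pre-population step described above reduces to copying known values into matching empty fields. A minimal sketch, not any specific RPA product's API; the field and applicant names are hypothetical:

```python
def prefill_form(form_fields, applicant_data):
    """Copy known applicant values into matching empty form fields."""
    filled = dict(form_fields)
    for field, value in filled.items():
        if value is None and field in applicant_data:
            filled[field] = applicant_data[field]
    return filled

# Hypothetical onboarding form and parsed application data
blank_form = {"name": None, "address": None, "start_date": None}
application = {"name": "A. Nguyen", "address": "12 High St"}

result = prefill_form(blank_form, application)
# start_date stays None, left for HR to complete manually
```

Fields the bot cannot source remain empty, which is the point of the human-in-the-loop approval step the article describes.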

Hey Alexa. Is This My Voice Or a Recording?

The idea is to quickly detect whether a command given to a device is live or is prerecorded. It's a tricky proposition given that a recorded voice has characteristics similar to a live one. "Such attacks are known as one of the easiest to perform as it simply involves recording a victim's voice," says Hyoungshick Kim, a visiting scientist to CSIRO. "This means that not only is it easy to get away with such an attack, it's also very difficult for a victim to work out what's happened." The impacts can range from using someone else's credit card details to make purchases, controlling connected devices such as smart appliances and accessing personal data such as home addresses and financial details, he says. The voice-spoofing problem has been tackled by other research teams, which have come up with solutions. In 2017, 49 research teams submitted research for the ASVspoof 2017 Challenge, a project aimed at developing countermeasures for automatic speaker verification spoofing. The ASV competition produced one technology that had a low error rate compared to the others, but it was computationally expensive and complex, according to Void's research paper.

Reduce these forms of AI bias from devs and testers

Cognitive bias means that individuals think subjectively, rather than objectively, and therefore influence the design of the product they're creating. Humans filter information through their unique experience, knowledge and opinions. Development teams cannot eliminate cognitive bias in software, but they can manage it. Let's look at the biases that most frequently affect quality, and where they appear in the software development lifecycle. Use the suggested approaches to overcome cognitive biases, including AI bias, and limit their effect on software users. Consider the curse of knowledge: a person knowledgeable about a topic finds it difficult to discuss it from a neutral perspective. The more the person knows, the harder neutrality becomes. That bias manifests within software development teams when experienced or exceptional team members believe that they have the best solution. Infuse the team with new members to offset some of the bias that occurs with subject matter experts. Cognitive bias often begins in backlog refinement. Preconceived notions about application design can affect team members' critical thinking. During sprint planning, teams can fall into the planning fallacy: underestimating the actual time necessary to complete a user story.

Deploying the Best of Both Worlds: Data Orchestration for Hybrid Cloud

A different approach to bridging the worlds of on-prem data centers and the growing variety of cloud computing services is offered by a company called Alluxio. From their roots at Berkeley's AMPLab, they've been focused on solving this problem. Alluxio decided to bring the data to computing in a different way. Essentially, the technology provides an in-memory cache that nestles between cloud and on-prem environments. Think of it like a new spin on data virtualization, one that leverages an array of cloud-era advances. According to Alex Ma, director of solutions engineering at Alluxio: "We provide three key innovations around data: locality, accessibility and elasticity. This combination allows you to run hybrid cloud solutions where your data still lives in your data lake." The key, he said, is that "you can burst to the cloud for scalable analytics and machine-learning workloads where the applications have seamless access to the data and can use it as if it were local--all without having to manually orchestrate the movement or copying of that data."
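
The caching tier described above follows the classic read-through pattern. A conceptual sketch (not Alluxio's actual API): compute reads hit a local in-memory tier first and fall back to the remote data lake only on a miss.

```python
class ReadThroughCache:
    """Toy read-through cache sitting between compute and a remote store."""

    def __init__(self, remote_store):
        self.remote = remote_store   # e.g. an on-prem data lake
        self.local = {}              # in-memory tier near the compute

    def read(self, key):
        if key not in self.local:            # cache miss: fetch once from remote
            self.local[key] = self.remote[key]
        return self.local[key]               # subsequent reads are served locally

# Hypothetical data-lake object
lake = {"events/2020-07": b"...parquet bytes..."}
cache = ReadThroughCache(lake)

first = cache.read("events/2020-07")   # pulled from the remote lake
second = cache.read("events/2020-07")  # served from the local tier
```

This is the "locality" half of the story; the real system adds distributed tiering, eviction, and write policies on top.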

Redis and open source succession planning

Speaking of the intersection of open source software development and cloud services, open source luminary Tim Bray has said, “The qualities that make people great at carving high-value software out of nothingness aren’t necessarily the ones that make them good at operations.” The same can be said of maintaining open source projects. Just because you’re an amazing software developer doesn’t mean you’ll be a great software maintainer, and vice versa. Perhaps more pertinently to the Sanfilippo example, developers may be good at both, yet not be interested in both. (By all accounts Sanfilippo has been a great maintainer, though he’s the first to say he could become a bottleneck because he liked to do much of the work himself rather than relying on others.) Sanfilippo has given open source communities a great example of how to think about “career” progression within these projects, but the same principle applies within enterprises. Some developers will thrive as managers (of people or of their code), but not all. As such, we need more companies to carve out non-management tracks for their best engineers, so developers can progress their career without leaving the code they love. 

How data science delivers value in a post-pandemic world

The uptick in the need for data science, across industries, comes with the need for data science teams. While hiring may have slowed down in the tech sector – Google slowed its hiring efforts during the pandemic – data science professionals are still in high demand. However, it’s important to keep a close eye on how these teams continue to evolve. One position which is increasingly in demand as businesses become more data-driven is the role of the Algorithm Translator. This person is responsible for translating business problems into data problems and, once the data answer is found, articulating this back into an actionable solution for business leaders to apply. The Algorithm Translator must first break down the problem statement into use cases, connect these use cases with the appropriate data set, and understand any limitations on the data sources so the problem is ready to be solved with data analytics. Then, in order to translate the data answer into a business solution, the Algorithm Translator must stitch the insights from the individual use cases together to create a digestible data story that non-technical team members can put into action.

Open source contributions face friction over company IP

Why the change? Companies that have established open source programs say the most important factor is developer recruitment. "We want to have a good reputation in the open source world overall, because we're hiring technical talent," said Bloomberg's Fleming. "When developers consider working for us, we want other people in the community to say 'They've been really contributing a lot to our community the last couple years, and their patches are always really good and they provide great feedback -- that sounds like a great idea, go get a job there.'" While companies whose developers contribute code to open source produce that code on company time, the company also benefits from the labor of all the other organizations that contribute to the codebase. Making code public also forces engineers to adhere more strictly to best practices than if it were kept under wraps and helps novice developers get used to seeing clean code.

How Ekans Ransomware Targets Industrial Control Systems

The Ekans ransomware begins the attack by attempting to confirm its target. This is achieved by resolving the domain of the targeted organization and comparing this resolved domain to a specific list of IP addresses that have been preprogrammed, the researchers note. If the domain doesn't match the IP list, the ransomware aborts the attack. "If the domain/IP is not available, the routine exits," the researchers add. If the ransomware does find a match between the targeted domain and the list of approved IP addresses, Ekans then infects the domain controller on the network and runs commands to isolate the infected system by disabling the firewall, according to the report. The malware then identifies and kills running processes and deletes the shadow copies of files, which makes recovering them more difficult, Hunter and Gutierrez note. In the file stage of the attack, the malware uses RSA-based encryption to lock the target organization's data and files. It also displays a ransom note demanding an undisclosed amount in exchange for decrypting the files. If the victim fails to respond within the first 48 hours, the attackers then threaten to publish their data, according to the Ekans ransom note recovered by the FortiGuard researchers.
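
The target-confirmation routine the researchers describe is, at its core, a resolve-and-compare check. An abstract sketch for defensive analysis (the domain and the documentation-range IPs are illustrative; the resolver is injected so the logic can be shown without network access):

```python
# Hard-coded allowlist of preprogrammed IPs (RFC 5737 documentation range)
APPROVED_IPS = {"203.0.113.10", "203.0.113.11"}

def should_proceed(domain, resolver):
    """Return True only if the domain resolves to a preprogrammed IP."""
    try:
        resolved = resolver(domain)
    except OSError:
        return False  # "If the domain/IP is not available, the routine exits"
    return resolved in APPROVED_IPS

# Injected stand-in for DNS resolution (returns None for unknown domains)
fake_resolver = {"target.example": "203.0.113.10"}.get

proceed = should_proceed("target.example", fake_resolver)
```

Understanding this gate matters for defenders: the malware is deliberately single-target, so a sample detonated outside the intended environment simply exits.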

The best SSDs of 2020: Supersized 8TB SSDs are here, and they're amazing

If performance is paramount and price is no object, Intel’s Optane SSD 905P is the best SSD you can buy, full stop—though the 8TB Sabrent Rocket Q NVMe SSD discussed above is a strong contender if you need big capacities and big-time performance. Intel’s Optane drive doesn’t use traditional NAND technology like other SSDs; instead, it’s built around the futuristic 3D Xpoint technology developed by Micron and Intel. Hit that link if you want a tech deep-dive, but in practical terms, the Optane SSD 905P absolutely plows through our storage benchmarks and carries a ridiculous 8,750TBW (terabytes written) rating, compared to the roughly 200TBW offered by many NAND SSDs. If that holds true, this blazing-fast drive is basically immortal—and it looks damned good, too. But you pay for the privilege of bleeding edge performance. Intel’s Optane SSD 905P costs $600 for a 480GB version and $1,250 for a 1.5TB model, with several additional options available in both the U.2 and PCI-E add-in-card form factors. That’s significantly more expensive than even NVMe SSDs—and like those, the benefits of Intel’s SSD will be most obvious to people who move large amounts of data around regularly.
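
A quick back-of-the-envelope check puts those endurance ratings in perspective. Assuming a heavy 100 GB of writes per day (an assumption for illustration, not a manufacturer spec):

```python
DAILY_WRITES_GB = 100  # assumed heavy workload, not a vendor figure

def years_of_endurance(tbw):
    """Years until a drive's rated terabytes-written figure is exhausted."""
    return (tbw * 1000) / DAILY_WRITES_GB / 365

optane_years = years_of_endurance(8750)        # roughly 240 years
typical_nand_years = years_of_endurance(200)   # roughly 5.5 years
```

Even at that punishing write rate, the 8,750TBW rating outlasts any plausible service life, which is what the "basically immortal" framing above is getting at.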

SRE: A Human Approach to Systems

Failure will happen, incidents will occur, and SLOs will be breached. These things may be difficult to face, but part of adopting SRE is to acknowledge that they are the norm. Systems are made by humans, and humans are imperfect. What’s important is learning from these failures and celebrating the opportunity to grow. One way to foster this culture is to prioritize psychological safety in the workplace. The power of safety is very obvious but often overlooked. Industry thought leaders like Gene Kim have been promoting the importance of feeling safe to fail. He addresses the issue of psychological insecurity in his novel, “The Unicorn Project.” Main character Maxine has been shunted from a highly-functional team to Project Phoenix, where mistakes are punishable by firing. Gene writes “She’s [Maxine] seen the corrosive effects that a culture of fear creates, where mistakes are routinely punished and scapegoats fired. Punishing failure and ‘shooting the messenger’ only cause people to hide their mistakes, and eventually, all desire to innovate is completely extinguished.”

Quote for the day:

"Education: the path from cocky ignorance to miserable uncertainty." -- Mark Twain

Daily Tech Digest - July 05, 2020

How Cryptocurrency Funds Work

This is generally the largest risk involved with investing in a cryptocurrency fund: clients need to put their trust into those behind it, which is why it is important to do research. The more information the managers are willing to share about who they are, how they are managing and what their track record is can help determine if they are right for an investor. That’s why, for many, partnering with a reputable firm is an essential part of the trust that they will see a return on their investment. Some of the biggest names in cryptocurrency funds include the Digital Currency Group, Galaxy Digital and Pantera Capital, among many others. All focus specifically on cryptocurrencies and other digital assets. Of course, these will still generally require large, upfront investments from qualified individuals. However, retail investors who want to be in on this type of action might want to look at projects like Tokenbox. In addition to acting as a general wallet and exchange, Tokenbox allows users to “tokenize” their portfolios as well as invest in the tokens attached to the portfolios of others. This acts as a streamlined way to either begin a new cryptocurrency fund or get involved in an existing one.

How DevOps teams can get more from open source tools

Open source tools can be a key first step on the DevOps path to achieving software development’s nirvana state, but only when teams bring automation and speed across the various steps of the process. That’s why professionals refer to a DevOps “toolchain” (the products you use) that supports the software “pipeline” (the process of delivering software) — and visually depict these elements as unfolding in a horizontal fashion. End-to-end tool coverage horizontally across an organization is the key to highly functional, mature DevOps practices. However, that’s easier said than done — and has traditionally been both expensive and difficult for businesses to do. The good news today is that there are many more open-source options across every sequential step of the software delivery lifecycle (SDLC). From managing source code, to storing build artifacts, release monitoring and finally to deployment — there’s an OSS solution for that if you know where to look. ... Perhaps less obvious is the notion that DevOps teams must think about tool coverage and instrumentation for a vertical stack, which at a basic level breaks down into code, infrastructure, and data layers.

5G reinvented: The longer, rougher road toward ubiquity

There are two 5Gs, and that is by design. The architecture that purges the network of all radio and communications components and methods from the past, while maintaining compatibility with older devices (user equipment, or UE) is called 5G Stand-Alone (5G SA). Release 16 of the 3GPP engineers' architecture for global wireless communications is being formally ratified and finalized on July 3. It was delayed on account of the pandemic, but only by a handful of months. 3GPP R16 is the second round of 5G technologies, in a series that has at least one more round devoted to 5G, most likely two. The other 5G architecture is the one in use today in the United States: 5G Non-Stand Alone (5G NSA). It relies on the underlying foundation and existing base station structure of 4G LTE. By building 5G services and service levels literally into crowns that reside above or below the 4G buildouts (a "crown castle," which also happens to be the name of one of North America's largest owners of telco tower real estate), 4G has been giving 5G a leg up. Once it's found its footing, the idea is that 4G can begin winding down.

Robotic Process Automation: 6 common misconceptions

RPA is best for activities that require multiple repetitions of the same sequence and could be conducted in parallel to create greater efficiencies. For example, B2B companies often have to check several portals or suppliers in order to buy inventory at the best rate. An employee would have to work through all the steps in each portal sequentially. But with RPA, the software robots act as “digital colleagues”. They monitor product prices and regularly inform employees about changes, retrieving figures from all portals simultaneously. Unlike BPM platforms, RPA isn’t capable of managing processes end-to-end over a longer period of time. An example: A customer wants to order something, complain or obtain information. Accordingly, a process is triggered in the company. Sometimes it can take up to 14 days until the request is completed. Although the digital colleague can support the employee by retrieving data on the customer, decisions are still made by the individual. That’s why a BPM solution is the much better choice, because the system can integrate employees into the process depending on availability and skills.
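
The "digital colleague" pattern above — querying several supplier portals in parallel rather than working through them sequentially — can be sketched with a thread pool. The portal names, prices, and `fetch_price` helper are hypothetical stand-ins for per-portal API calls:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_price(portal):
    """Stand-in for a per-portal lookup (normally an HTTP/API call)."""
    prices = {"portal_a": 9.80, "portal_b": 9.45, "portal_c": 10.10}
    return portal, prices[portal]

portals = ["portal_a", "portal_b", "portal_c"]

# Retrieve figures from all portals simultaneously instead of one by one
with ThreadPoolExecutor() as pool:
    quotes = dict(pool.map(fetch_price, portals))

# Flag the best rate; the purchasing decision itself stays with the employee
best_portal = min(quotes, key=quotes.get)
```

Note the division of labour matches the article: the bot gathers and flags, while the human decides, and longer-running end-to-end orchestration stays with a BPM platform.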

Remote workforce demands ‘hybrid working’, not the end of the office in the ‘better normal’

The study revealed universal approval of flexible working, across business structures and geographies, across generations and parental status. This, said Adecco, was a clear affirmation that the world is ready for hybrid working. Almost 80% of respondents thought it important that their company implements more flexibility in how and where staff can work. And it was not only employees who saw the benefits of this. Just over three-quarters (77%) of C-level/executive managers thought business will generally benefit from allowing increased flexibility around office and remote working. Also, 79% of C-level/executive management said they thought employees would benefit personally from having increased flexibility around office and remote working. Four-fifths of workers said it was important to be able to maintain a good work/life balance after the pandemic, and 50% said their work/life balance had improved during the lockdown. However, UK employees worry that their employer’s expectation of what hybrid working should look like after the pandemic will not match their own.

UNICEF turning to cryptocurrency in fight against Covid-19

The CryptoFund is aimed at supplementing this initiative to help companies specifically address the challenges created by the Covid-19 pandemic, which has brought to a head the problems that UNICEF’s funds are seeking to tackle, such as food supply and education. Investees have sought to mitigate some of the damaging effects of the pandemic on children through collaboration with governments and other local organisations in tracking delivery of food, offering remote learning and tending to other problems caused by lockdown and isolation. Among the companies receiving 125 ether are StaTwig from India, which is piloting a blockchain-based app designed to track the delivery of rice to impoverished communities, and Utopic from Chile, which aims to help improve children’s literacy from their homes using a WebVR-powered learning game. “We’re making investments into emerging technologies across data science, virtual reality and blockchain,” says Lamazzo, “but we’re also looking at the modality of the funding with the startups and trying to understand its benefits and drawbacks, so we’re going through this learning process together.”

How CTOs Can Innovate Through Disruption in 2020

Disruption is nothing new for technology leaders. In Gartner's survey of IT leaders, conducted in early 2020 before the coronavirus pandemic struck, 90% said they had faced a "turn" or disruption in the last 4 years, and 100% said they face ongoing disruption and uncertainty. The current crisis may just be the biggest test of the resiliency they have developed in response to those challenges. "We are hearing from a lot of clients about innovation budgets being slashed, but it's really important not to throw innovation out the window," said Gartner senior principal analyst Samantha Searle, one of the report's authors, who spoke to InformationWeek. "Innovation techniques are well-suited to reducing uncertainty. This is critical in a crisis." The impact of the crisis on your technology budget is likely dependent on your industry, Searle said. For instance, technology and financial companies tend to be farther ahead of other companies when it comes to response to the crisis and consideration of investments for the future. Other businesses, such as retail and hospitality, just now may be considering how to reopen.

Shadow IT: It's Nothing Personal

One of the things I still hear a lot from IT leaders, from small companies to large corporations, is that shadow IT is a big issue that causes them headaches. If you are not familiar with the term, shadow IT is a description of when departments go outside of an IT department to obtain products or services traditionally controlled by a centralized IT group, such as obtaining software-as-a-service or obtaining devices. IT leaders bemoan the behavior that is causing departments to “go around IT” or “not follow the rules,” and often take the position that it’s simply bad behavior or some kind of vendetta against IT. More often than not, however, they fail to internalize and analyze the real cause of the phenomenon: It’s easier/cheaper/better to do business with other organizations. On occasion, they even get upset when I make this suggestion — at least until they stop and think carefully about what I’ve said. This is nothing personal. Departments, when trying to accomplish their essential business purpose, are, frankly, obligated to look for the best competitive solutions. It’s solely about doing smart business.

Shining a Low-Code Light on Shadow IT

Shadow apps are not, in themselves, a bad thing. Many of these systems fulfill a valid need and play a role in the success and/or survival of the organisation. Some IT departments are now openly recognising this and seeking to bring the alleged ‘rebels’ back into the IT fold. What IT really needs to achieve this is a technology approach that helps them deliver on these requirements at speed; technology that means that they no longer have to say ‘no’ or ‘yes, but later’ in response to requests from the business. Enabling IT to be agile by using ‘low-code’ rapid application development tools to build apps at high speed can overcome the bottlenecks. So instead of outlawing Shadow IT ideas, this new approach recognises and utilises their creativity. Low-code platforms, such as those offered by LANSA, provide the kind of prototyping capabilities needed to validate business needs, direct with the users, iterate as they formalize their requirements, then speed up the final development way beyond the timescales they have been used to. The resulting apps are robust, well architected, high performance, and, importantly, managed and easily maintained by IT.

Robotic Process Automation in legal - a bright future

If we are going to be precise, we should put AI as a subset of robotic process automation. Artificial intelligence is frequently associated with robots and bots in the broader sense. But it is commonly misunderstood by the public, and, by extension, lawyers. At times, it's likely over-glorified by legal tech companies, pundits, and publications. John McCarthy coined the term AI in 1956. He used it to label machines that mirror certain human cognitive traits (i.e., learning, thinking, remembering, problem-solving, and making decisions). In essence, artificial intelligence represents machines (algorithms) that can analyze vast bodies of data, learn, and correct their behavior in the process. As such, artificial intelligence depends quite a lot on the quality of data. You can't have good learning if data is sparse, or if samples aren't representative. So far, the necessity of training and data quality (or its availability in the first place) represented significant barriers to the adoption of AI in the legal industry.

Quote for the day:

"Every great leader can take you back to a defining moment when they decided to lead" -- John Paul Warren

Daily Tech Digest - July 04, 2020

What are IT pros concerned about in the new normal? Security and flexibility

What's also interesting is, despite this workload increase, the majority (77%) feel they have been very effective at supporting employees working from home. This is great to hear, and not entirely surprising, as these companies rely on SaaS to run their businesses. On the flip side, laggards running legacy infrastructures have seen productivity go to zero. This is definitely a tipping point for the adoption of SaaS. Our survey also reinforces this sentiment, as 47 percent of respondents said they will increase the use of SaaS as a result of the pandemic. ... IT teams at every company we work with have had to implement new processes to support the entire employee base, leveraging and adjusting methods, tools and processes to enable business continuity with a nearly 100% work-from-home workforce. Work from home is not a new concept, but supporting traditional remote laptop users is not the same challenge as supporting desktop users who may not be using corporate-issued devices and computers. Overnight, methods that had worked for a small population of seasoned remote laptop users had to be extended to everyone.

Singapore banks set to fast-track digital transformation due to COVID-19

As banks re-evaluate their digital strategies, it only makes sense to ensure compliance is automated in order to easily and efficiently adhere to all AML, KYC and CTF regulations. Regulation technologies, which use Artificial Intelligence (AI), are particularly valuable when it comes to automating compliance. AI can help mine huge volumes of data, automatically flagging risk-relevant facts faster than humanly possible. AI technology dramatically speeds up the onboarding phase. The technology helps to automatically identify illicit client relationships and alert financial institutions to the possibility of criminal or terrorist activity. With regulatory requirements being constantly updated, it can be difficult for banks to keep on top of these changes via manual processes alone. By implementing AI technology, financial institutions are better able to identify gaps in customer information, with the technology automatically prompting them to perform regulatory outreach to collect the outstanding information – a far more streamlined and hands-off approach to what many banks in Singapore are currently using.
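
The gap-detection step described above — spotting missing customer information so regulatory outreach can be triggered — reduces to checking records against a required-field list. A simplified sketch; the field names below are illustrative, not a regulatory standard:

```python
# Illustrative required fields for a KYC record (not an official list)
REQUIRED_KYC_FIELDS = ["full_name", "date_of_birth", "address", "id_document"]

def kyc_gaps(record):
    """Return the required fields that are missing or empty in a customer record."""
    return [f for f in REQUIRED_KYC_FIELDS if not record.get(f)]

# Hypothetical customer record with incomplete data
customer = {"full_name": "J. Tan", "address": "", "id_document": "S1234567A"}

gaps = kyc_gaps(customer)  # fields to collect via regulatory outreach
```

The AI systems the article describes go further — mining unstructured data and flagging risk-relevant facts — but automated outreach prompts start from exactly this kind of completeness check.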

Top ten myths of technology modernization in insurance

Modernization simply means replacing the core platform with the best-in-class option. The reality: Core-platform replacements often have higher up-front investment costs than in-place IT modernization, as they require both software and hardware, experts’ time, and extensive testing. Furthermore, migrating existing policies and their implicit contracts to a new platform is often expensive—these additional costs need to be factored into any decision—and time consuming. One big reason for high modernization costs is the age and quality of the policy data and rules—poorly maintained policies are expensive to refresh and modernize to work in a new system. Product types and geographic context are also considerations. For instance, US personal property and casualty (P&C) policies are generally issued annually and thus have up-to-date policy data and rules; this makes migration efforts more straightforward. By contrast, in countries such as Austria or Germany, policies are refreshed annually to adjust premiums for inflation, but policy data, rules, and terms only change when a customer switches to a new policy—which may not happen for many years. Therefore, policy rules need to be carried over to the target system or customers need to switch to a new policy during modernization, rendering it time consuming.

Microsoft Defender ATP now rates your security configurations

Microsoft promises the data in the score card is the product of "meticulous and ongoing vulnerability discovery", which involves, for example, collecting best-practice benchmarks from vendors, security feeds, and internal research teams, and comparing collected configurations against those benchmarks. Defender ATP users will see a list of recommendations based on what the scan finds. Each entry contains the issue, such as a built-in administrator account that has not been disabled; the version of Windows 10 or Windows Server scanned; and a description of the potential risks. For this particular risk, Microsoft explains that the built-in administrator account is a favorite target for password-guessing, brute-force attacks and other techniques, generally after a security breach has already occurred. Defender ATP also provides the number of accounts exposed on the network and an impact score. Users can export a checklist of remediations in CSV format for sharing with team members and to ensure the measures are undertaken at the appropriate time. An organization's security score should improve once remediations are completed.
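To illustrate the kind of CSV remediation checklist described above, here is a small sketch. This is not the Defender ATP API; the recommendation fields and values are invented, and a real export would come from the product itself.

```python
# Illustrative only (not the actual Defender ATP export): write a list of
# remediation recommendations out as CSV for sharing with team members.
import csv
import io

# Hypothetical recommendations with issue, scanned OS, exposure, and impact.
recommendations = [
    {"issue": "Disable the built-in administrator account",
     "os": "Windows Server 2019", "exposed_accounts": 12, "impact_score": 8.2},
    {"issue": "Require BitLocker on all system drives",
     "os": "Windows 10", "exposed_accounts": 45, "impact_score": 6.5},
]

def export_checklist(recs: list) -> str:
    """Render recommendations as CSV text, one row per remediation."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["issue", "os", "exposed_accounts", "impact_score"])
    writer.writeheader()
    writer.writerows(recs)
    return buf.getvalue()
```

The resulting text can be saved as a `.csv` file and tracked alongside the team's remediation schedule.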

Working with Complex Data Models

Physical data models present an image of a data design that has been implemented, or is going to be implemented, in a database management system. A physical data model is database-specific, representing relational data objects (columns, tables, primary and foreign keys) as well as their relationships. Physical data models can also generate DDL (data definition language) statements, which are then sent to the database server. Implementing a physical data model requires a good understanding of the characteristics and performance parameters of the database system. For example, when working with a relational database, it is necessary to understand how the columns, tables, and relationships between them are organized. Regardless of the type of database (columnar, multidimensional, or another type), understanding the specifics of the DBMS is crucial to integrating the model. According to Pascal Desmarets, Founder and CEO of Hackolade: “Historically, physical Data Modeling has been generally focused on the design of single relational databases, with DDL statements as the expected artifact. Those statements tended to be fairly generic, with fairly minor differences in functionality and SQL dialects between the different vendors. ...”
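As a minimal sketch of generating DDL from a physical model, the snippet below turns a toy model description into `CREATE TABLE` statements and runs them against an in-memory SQLite database. The table and column names are invented for illustration; a real modeling tool like the ones discussed would target a specific vendor's SQL dialect.

```python
# Toy physical model -> DDL generation, executed against SQLite.
import sqlite3

# Hypothetical physical model: table name -> list of (column, type/constraint).
model = {
    "customer": [("id", "INTEGER PRIMARY KEY"),
                 ("name", "TEXT NOT NULL")],
    "orders":   [("id", "INTEGER PRIMARY KEY"),
                 ("customer_id", "INTEGER REFERENCES customer(id)"),
                 ("total", "REAL")],
}

def to_ddl(model: dict) -> list:
    """Generate one CREATE TABLE statement per table in the model."""
    stmts = []
    for table, cols in model.items():
        col_defs = ", ".join(f"{name} {ctype}" for name, ctype in cols)
        stmts.append(f"CREATE TABLE {table} ({col_defs});")
    return stmts

# Send the generated DDL to the database server (here, in-memory SQLite).
conn = sqlite3.connect(":memory:")
for stmt in to_ddl(model):
    conn.execute(stmt)
```

Vendor differences in SQL dialect (types, constraint syntax, storage options) would live in the `to_ddl` step, which is exactly where the quote above notes the historical differences were fairly minor.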

'Machine' examines Artificial Intelligence and asks, 'Are we screwed?'

These AI systems are trained on huge amounts of data, and you'll find bias when, say, there's facial recognition. If your facial recognition data set is all Caucasians, it's going to have trouble identifying people of other races. And being misidentified by facial recognition is not a good thing when it comes to law enforcement and other things like this. So we're finding, even through the course of making the film, this technology moves so fast, but we've seen a lot being done to address the problem of bias in data sets since we started. And they're finding that more diversity within these data sets actually has helped reduce bias in a lot of these algorithms, which is a positive sign. But at the end of the day, I think we're still at the point where we don't want to give these algorithms too much control. I think there need to be humans in the loop who understand ethics, and not everything in life boils down to zeros and ones, and Xs and Os. So I think it's good to have humans in the loop and also society in the loop: not just the people designing these technologies, but society as a whole should be hip to what's going on. Because if not, you're going to wake up in 20 years living in a very different world, I think.

Pandemic reveals opportunities for 5G connectivity

Because 5G technology can now be cloud orchestrated—that is, use software-defined principles to manage the interconnections and interactions among workloads on public and private cloud infrastructure—the behavior of the 5G network can be changed to accommodate specific applications for specific uses. Roese shared a dramatic example of this by describing a telehealth scenario in which suspected stroke victims could be diagnosed and receive initial treatment while en route to the hospital. This would be accomplished through the continuous collection and streaming of patient data. “In order to do that, a whole bunch of conditions had to be true,” said Roese. “You had to push the code out to an edge, so it can operate in real time. You had to execute a network slice to guarantee the bandwidth and give this a priority service.” If such allocation were done manually, it might take three hours or more to reconfigure the network. One thing that makes mobile triage possible is strength at the edge of the cellular network. That is also crucial for innovation—as well as for the average 5G user. “What that means is you’re walking around in a city and if you constantly get 100 to 200 megabits per second, the peak rates might be five to 10 gigabits per second.”

Design Patterns — Zero to Hero — Factory Pattern

Before moving into the explanation, we need a clear understanding of what a concrete class is. A class that has an implementation for all of its methods is called a concrete class; it cannot have any unimplemented methods. A concrete class can extend an abstract class or implement an interface as long as it implements all of their methods too. Simply put, every class that is not an abstract class is a concrete class. Note that, according to Head First Design Patterns, the Simple Factory is not actually considered a design pattern. Let's get started understanding the Factory Pattern varieties. The Simple Factory describes a way of instantiating classes using a single method with a conditional that, based on the method's parameters, chooses which product class to instantiate and return. Let's dive into a coding example where the Simple Factory comes into play. Imagine a scenario where we have different brands of smartphones, and you need to fetch the specification details of the respective brand, with the brand name passed as a parameter from the client code.
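The smartphone scenario above can be sketched as a Simple Factory like this. The brand names and specification strings are invented for illustration, and the example is in Python rather than the Java used in Head First Design Patterns.

```python
# Simple Factory sketch: one creation method with a conditional on the
# brand name that decides which concrete product class to instantiate.

class Smartphone:
    """Abstract product: every brand must provide its specification."""
    def spec(self) -> str:
        raise NotImplementedError

class GalaxyPhone(Smartphone):        # concrete product
    def spec(self) -> str:
        return "Galaxy: 6.2-inch display, 128 GB storage"

class PixelPhone(Smartphone):         # concrete product
    def spec(self) -> str:
        return "Pixel: 6.0-inch display, 128 GB storage"

class SmartphoneFactory:
    """Simple Factory: client code passes the brand name as a parameter."""
    @staticmethod
    def create(brand: str) -> Smartphone:
        if brand == "galaxy":
            return GalaxyPhone()
        elif brand == "pixel":
            return PixelPhone()
        raise ValueError(f"Unknown brand: {brand}")
```

Client code then calls `SmartphoneFactory.create("pixel").spec()` without ever naming a concrete class, which is what lets new brands be added in one place.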

Evolution of Voice-activated Banking

Instead of having to call up customer care representatives and wait to get their queries resolved, consumers should be able to quickly get relevant information simply by asking. The financial services industry is addressing the one-click, on-the-go behaviour of consumers by launching various innovative solutions, such as mobile wallets, which have become a highly convenient method of payment, and chatbots, which have become very popular. Banks are constantly looking to enhance customer experience by providing ways for customers to get the desired information as and when they want it. The opportunity lies in integrating all branch transactional activities with voice technology. Currently, voice assistants handle basic customer queries, such as checking account balances, making payments, paying bills and getting account-related information. The simple nature of these requests enables institutions to instantly provide the right information at the right time; however, this is unlikely to provide a competitive advantage in future. Companies that reimagine the customer journey across channels, products, and services with end-to-end integration will emerge as winners.

Fintech In Banking: New Standards For The Financial Sector

Distributed ledger technologies, widely known as blockchains, have already moved out of the shadows of public interest and are now treated as paradigm-changing technologies that turn the interaction between Fintech and banks upside down. Research by Accenture shows that 9 in 10 executives are considering the implementation of blockchain technology in their financial services. Blockchain aims at boosting mutual benefits and reducing the business risks of collaboration and mutual Fintech investment banking. Using a decentralized database, banks receive an opportunity to work together on a common solution while keeping their own data secure and opening certain pieces of data only when they want to interact and trade. It ensures complete transparency and real-time execution of payments, which significantly reduces the possibility of cyber-attacks, as the information no longer exists in a centralized database. Blockchain technology is also very helpful in KYC (Know Your Customer) compliance. In traditional banking, KYC usually causes delays to banking transactions, entails substantial duplication of effort between banks and third parties, and ends up incurring high costs.
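To make the tamper-evidence property behind distributed ledgers concrete, here is a toy sketch: each record embeds the hash of the previous one, so altering any earlier entry invalidates every later hash. This illustrates only the hash-chaining idea; a real blockchain adds consensus, digital signatures, and replication across the participating banks, and the record contents below are invented.

```python
# Toy hash-chained ledger: demonstrates tamper evidence, not a full blockchain.
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first block

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records: list) -> list:
    """Link records into a chain, each block referencing its predecessor."""
    chain, prev = [], GENESIS
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain: list) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = GENESIS
    for blk in chain:
        if blk["prev"] != prev or block_hash(blk["record"], prev) != blk["hash"]:
            return False
        prev = blk["hash"]
    return True
```

Because each bank can recompute the hashes independently, no single party can quietly rewrite a shared KYC record without the change being detected.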

Quote for the day:

"When you expect the best from people, you will often see more in them than they see in themselves." -- Mark Miller