Daily Tech Digest - July 11, 2020

Software as a Service (SaaS): A cheat sheet

Beyond reliability, and depending on the nature of your business applications, it is also vitally important to evaluate the capacity provided by your chosen ISP. Querying large databases or moving large media files will require more bandwidth than is typical for less-intense applications like email; however, even extremely large bandwidth may not be enough if there are also latency issues. There are similar reliability concerns when choosing the service provider for the SaaS applications themselves. Business organizations have to think about the longevity of their provider, their commitment to security, their willingness to customize applications, and their plans for feature upgrades. SaaS requires a business to relinquish some control in order to reap the benefits of the distribution system. Relinquishing control may also cause problems when the SaaS provider updates certain application features that the business does not want changed. Some feature upgrades will break existing use cases, especially if the business is using a customized version of the software. Some SaaS vendors have been known to eliminate features that are underused in aggregate, which causes problems for the businesses that had adopted those features.


APT Group Targets Fintech Companies

Once the targeted victim clicks on the LNK file to view one of the documents, the malware begins to load in the background and infect their device, according to the report. Once the attackers successfully infect devices and a network, the malware steals sensitive corporate data, such as customer lists, credit card information and other personally identifiable data, along with the firm's investments and trading operations data, the ESET researchers report. In the next phase of the attack, the JavaScript components deploy other malware the Evilnum operators purchased from other hackers, including code written in C# from the malware-as-a-service provider Golden Chickens, the report notes. The attackers also use Python-based tools in their toolkits, the researchers add. While the JavaScript component acts as a backdoor and handles communications with the command-and-control server, the C# code takes on other tasks, including capturing a screenshot whenever the mouse moves during a set period of time, sending system information back to the operators, and stealing cookies and credentials. Eventually, this process will kill the malware when the campaign is complete, according to the report.


Why Segmentation is More Effective Than Firewalls For Securing Industrial IoT

As we’re so accustomed to using firewalls in our everyday lives (particularly on our own private computers, tablets, and smartphones) it might seem intuitive to use a firewall as a safeguard for IIoT-connected devices as well. However, the choice isn’t quite so straightforward as it might at first seem. Internal firewalls are expensive and complex to implement. It could be that for genuinely reliable protection, you need to install a firewall at every IIoT connection point. This could mean that hundreds (perhaps even thousands) of firewalls are required. We’ve already discussed how businesses’ technology security budgets are often overstretched. Taking this into account, security spend needs to be very carefully calculated and targeted. Segmentation, on the other hand, makes it possible to keep particular types of devices siloed off in a certain segment, thereby enhancing security. It also helps to enhance visibility and simplify classification of different device types. Organisations can then create risk profiles and relevant security policies for device groups.


How data and AI will shape the post-pandemic future

The general public is becoming used to AI playing a huge role. The mystery around it is beginning to fade, and it is becoming far more accepted that AI is something that can be trusted. It does have its limitations. It's not going to turn into the Terminator and take over the world. The fact that we are seeing AI more in our day-to-day lives means people are beginning to depend on the results of AI, at least in the context of understanding the pandemic, and that drives acceptance. When you start looking at how it will enable people to get back to somewhat of a normal existence―to go to the store more often, to be able to start traveling again, and to be able to return to the office―there is that dependency that Arti mentioned around video analytics to ensure social distancing or checking people's temperatures using thermal detection. All of that will allow people to move on with their lives, and so AI will become more accepted. I think AI softens the blow of what some people might see as an erosion of civil liberties. It softens the blow in a way that says, "This is the benefit already, and this is as far as it goes." So it at least informs discussions in a way they weren't informed before.


IoT: device management and security are crucial

Operational challenges abound from the beginning of the IoT journey to its end. For example, how do you efficiently roll out hundreds of thousands or even a million devices in a timely manner? Once up and running, device firmware and IoT application software will need to be updated – possibly multiple times – during the course of the device’s life. Additionally, the device should be monitored against established baselines. This creates the environment for an early warning system that can highlight possible software bugs or security exploits. Devices also may experience an “upgrade” during their life cycles, as new capabilities may be activated and enabled over-the-air, based on needs and business cases. Ownership changes require re-assignment of control, and at the end, devices need to be decommissioned and brought to end-of-life in an efficient manner. These development and deployment challenges are prompting companies to re-examine how they allocate resources more efficiently. For example, only 15% of overall IoT systems development time is IoT application development. But a full 30% is device-management issues (provisioning, onboarding, and updating devices and systems), while 40% is taken up by developing the device stacks.
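The baseline-monitoring idea above can be sketched simply: record a device's normal telemetry, then flag readings that drift far from it. This is an illustrative stand-in; the metric, thresholds, and function names are assumptions, not from any specific IoT platform.

```python
# Toy baseline monitor for fleet telemetry, assuming each device
# reports a numeric metric (e.g. messages per hour).
from statistics import mean, stdev

def build_baseline(samples):
    """Summarize historical readings into a simple mean/stdev baseline."""
    return {"mean": mean(samples), "stdev": stdev(samples)}

def is_anomalous(reading, baseline, z_threshold=3.0):
    """Flag readings more than z_threshold standard deviations from baseline."""
    if baseline["stdev"] == 0:
        return reading != baseline["mean"]
    z = abs(reading - baseline["mean"]) / baseline["stdev"]
    return z > z_threshold

history = [100, 98, 103, 101, 99, 102, 100]   # a week of normal readings
baseline = build_baseline(history)
print(is_anomalous(101, baseline))   # within normal variation
print(is_anomalous(500, baseline))   # possible software bug or exploit
```

A real deployment would keep per-device (or per-cohort) baselines and feed flagged readings into the early-warning workflow the article describes.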


More pre-installed malware has been found in budget US smartphones

While the app does function as an over-the-air updater for security fixes and as an updater to the operating system itself, the software also installs four variants of HiddenAds, a Trojan family found on Android handsets. HiddenAds is a strain of adware that bombards users with adverts. In order to verify where the malware originated from, Malwarebytes disabled WirelessUpdate and then re-enabled the app. Within 24 hours, four adware strains were covertly installed. As the malware on the UMX and ANS differ, the team wanted to see if there were any ties linking the brands. A common thread was the use of a digital certificate used to sign the ANS Settings app under the name teleepoch. Upon further investigation, the certificate was traced back to TeleEpoch Ltd, which is registered as UMX in the United States. "We have a Settings app found on an ANS UL40 with a digital certificate signed by a company that is a registered brand of UMX," Collier says. "That's two different Settings apps with two different malware variants on two different phone manufacturers & models that appear to all tie back to TeleEpoch Ltd. ..."


Increasing Demand for RegTech to Meet Regulatory Burden

Demand has grown dramatically since the Global Financial Crisis of 2008, as businesses must comply with regulatory reforms related to Anti-Money Laundering (AML) and Know Your Customer (KYC) due diligence requirements. The cost of complying with regulations is staggering, but non-compliance costs even more due to hefty fines. Digitizing regulatory compliance helps businesses meet regulatory requirements while cutting costs. According to one study, the cost of compliance across all banks from 2014 to 2016 averaged approximately 7.0% of their noninterest expenses. RegTech startups are experiencing growth and investment as firms realize the need to capitalize on compliance efficiency, and businesses can use it for a competitive edge in the industry. There is great potential for powering the future of financial regulation by integrating RegTech, with major implications: reduced regulatory costs and improved operational efficiency. RegTech's main target has been the finance industry.


Why businesses are adopting AI to improve operations

AI has improved productivity in an array of sectors. AI-powered contact center software has allowed companies to become incredibly efficient. In a shop, a digital SKU system is far more efficient at keeping tabs on stock levels than a manual one. It can record and analyze the demand for certain articles and automatically order more. A fashion store can see when a garment is selling like hot cakes and stock up before the trend runs its course, maximizing profit on the item. For teleconferencing solutions or other software providers, one of the biggest problems is customer churn. Retention schemes try to contact as many customers as possible whose contracts are due to run out, offering discounts and other enticements to stay. But some of those customers would have stayed anyway, while others, who were more likely to leave, may not have been contacted at all. Customer services can't get in touch with every single person whose contract is due to be up. What the firm needs to understand are the factors influencing people to stay or go. An AI program can analyze the data from thousands of customers, work out the risk factors, and pull out a list of the people most likely to leave.
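The churn analysis described above can be sketched as a scoring model. Everything here is hypothetical: the features, weights, and customer records are invented for illustration, and a real system would learn the weights from historical churn data, for example via logistic regression.

```python
# Minimal sketch of churn-risk scoring: rank customers by likelihood
# of leaving so retention outreach targets the right people.
import math

# Hypothetical learned weights: sign indicates direction of risk.
WEIGHTS = {"months_since_signup": -0.05,  # longer tenure -> less likely to churn
           "support_tickets": 0.4,        # more complaints -> more likely
           "logins_last_month": -0.1}     # active users -> less likely
BIAS = 0.5

def churn_risk(customer):
    """Logistic score in [0, 1]; higher means more likely to leave."""
    s = BIAS + sum(WEIGHTS[k] * customer[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-s))

customers = [
    {"id": "a", "months_since_signup": 36, "support_tickets": 0, "logins_last_month": 20},
    {"id": "b", "months_since_signup": 3, "support_tickets": 5, "logins_last_month": 1},
]
at_risk = sorted(customers, key=churn_risk, reverse=True)
print([c["id"] for c in at_risk])  # highest-risk customers first
```

The output of a model like this is exactly the prioritized contact list the passage describes: retention teams call the top of the ranking instead of everyone whose contract is expiring.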


10 Ways AI Is Improving New Product Development

From startups to enterprises racing to get new products launched, AI and machine learning (ML) are making solid contributions to accelerating new product development. There are 15,400 job positions for DevOps and product development engineers with AI and machine learning skills today on Indeed, LinkedIn and Monster combined. Capgemini predicts the size of the connected products market will range between $519B and $685B this year, with AI and ML-enabled services revenue models becoming commonplace. Rapid advances in AI-based apps, products and services will also force the consolidation of the IoT platform market. The IoT platform providers concentrating on business challenges in vertical markets stand the best chance of surviving the coming IoT platform shakeout. As AI and ML become more ingrained in new product development, the IoT platforms and ecosystems supporting smarter, more connected products need to plan now for how they're going to keep up. Relying on technology alone, as many IoT platforms do today, isn't going to be enough to keep pace with the change that's coming.


CDO Leadership Skills That Matter

Persistence is a key trait of successful leaders—they don’t get demotivated too easily. Whereas some people retreat to their caves after failed attempts to collaborate with the organization, choosing to focus only on internal marketing or just a few pilots, I find that leaders who are persistent have a seat at the strategic table with their peers, have a strategy, and have a roadmap. They’re constantly thinking through how their capabilities could be used across the organization. They’re not easily defeated when something doesn’t go right. Persistence is important because the failure rate of data strategies and data governance teams is high; you’re building a function that you’re not consolidating under one person or one business function. You’re often using a distributed leadership and organization model, which takes hard work to set the right expectations and have ongoing communications. On a regular basis, you have to give different people the WIIFM ("what's in it for me"), the goals and objectives that apply to their particular situation, and try to drive adoption and change in a way that fits with how each team works.



Quote for the day:

"Humility is a great quality of leadership which derives respect and not just fear or hatred." -- Yousef Munayyer

Daily Tech Digest - July 10, 2020

SWOT analysis: Why you should perform one, especially during times of uncertainty

If your company is going to develop a sustainable advantage, it will need to first know where its strengths, weaknesses, opportunities, and threats exist. Without conducting a SWOT analysis, your company is flying blind and could be wasting precious resources and time on activities that propel it in the wrong direction. Conducting a SWOT analysis is particularly important during times of crisis and uncertainty. Since the COVID-19 pandemic began, many companies and industries have had to revisit their SWOT analysis as a result of internal and external factors outside of their control. As a result of the pandemic impact, industries like travel and tourism, restaurants, entertainment, and many others have been forced to devise ways to address new risks and reevaluate new opportunities. Conducting a SWOT analysis helps your leadership team gain a clear view of what your company is doing well compared with its competitors and where it needs to pull up its socks. It also helps shine a light on areas where potential opportunities exist and where risks may reside. Having a solid understanding of all of these areas identifies your current state and increases your company's visibility into how to best allocate its budget, resources, time, and effort. 


When WAFs Go Wrong

"Organizations want more from their WAF providers — and the degree of negative feedback from vendor-supplied references warns that, unless vendors adapt quickly, the WAF market is ripe for disruption," according to Sandy Carielli, principal analyst at Forrester Research, who led the firm's most recent market research on the WAF market this spring. The Forrester report shows that organizations are particularly struggling as their current WAF deployments are unable to handle a broader range of application attacks, particularly client-side attacks, API-based attacks, and bot-driven attacks. On the API (application programming interface) front, for example, an increasing number of server-side request forgery (SSRF) attacks are made possible due to how cloud architectures use metadata APIs and webhooks. "The WAF may not necessarily be deployed in-line to monitor the outbound HTTP requests made by the web application. Many SaaS companies offer some form of web hook product which makes an HTTP request on behalf of the user and cannot be easily differentiated from an SSRF attack," explained Jayant Shukla, CTO and co-founder of K2 Cyber Security.
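One common defense against the webhook-as-SSRF problem Shukla describes is to resolve the user-supplied URL and refuse private, loopback, or link-local targets (such as the 169.254.169.254 cloud metadata endpoint) before making the outbound request. A minimal sketch with illustrative function names, not from any particular WAF or SaaS product:

```python
# Validate a webhook target before fetching it: both legitimate webhooks
# and SSRF attacks are outbound HTTP requests to a user-supplied URL,
# so the destination address is what has to be checked.
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_webhook_target(url):
    """Reject URLs that resolve to private, loopback, or link-local addresses."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False
    return not (addr.is_private or addr.is_link_local or addr.is_loopback)

print(is_safe_webhook_target("http://169.254.169.254/latest/meta-data/"))  # metadata API
print(is_safe_webhook_target("http://127.0.0.1:8080/hook"))                # loopback
```

A production check would also need to guard against DNS rebinding (re-resolution between the check and the request) and redirects, which is part of why Shukla argues the WAF's usual in-line position can't see these attacks at all.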


Overcoming Data Security Challenges in a Hybrid, Multicloud World

With each step, from IaaS to PaaS to SaaS to DBaaS, organizations give up some level of control over the systems that store, manage, distribute and protect their sensitive data. This increase in trust placed in third parties also presents an increase in risk to data security. Cloud deployments work on a shared responsibility model between the cloud provider and the consumer. In the case of an IaaS model, the cloud consumer has room to implement data security measures much like what they would normally deploy on premises and exercise tighter controls. For SaaS services, cloud consumers have to rely on the visibility provided by the cloud provider which, in essence, limits their ability to exercise more granular controls. It’s important to note that regardless of the chosen architecture, it’s ultimately your organization’s responsibility to ensure appropriate data security measures are in place across environments. To learn more about how to adapt your data security, data privacy and compliance practices to the hybrid multicloud, read the “Overcoming Data Security Challenges In a Hybrid Multicloud.”


Are Today’s Banks Prepared To Deploy Tomorrow’s Technologies?

While it is impossible to determine what the “new normal” in banking will look like, it will undoubtedly be far different from the past. It is still unknown how the negative financial impact of the pandemic on consumers will impact future banking behavior. While we have seen a spike in digital transactions and in the amount of savings set aside by consumers, it is too early to develop reliable trend lines going forward. There is little doubt that the banking industry will face a stretch of economic pressure created by delayed loan payments, lower fees, narrow margins and increased risk from credit losses. While government stimulus packages may help, there will still be capital and liquidity challenges. These financial challenges create a very clear call to action for financial institutions used to doing business the way it has been done for decades. Banks and credit unions must reimagine legacy business models and the technology used to serve the marketplace. Speed of change will determine winners as much as the changes themselves. Being a “fast follower” will no longer be acceptable.


Career advice for a changing world

For those growing up as digital natives, the principle of owning your network and profile may seem obvious. Everything we do will be captured digitally somehow — in both the professional and the social milieus. What you choose to post and how you present yourself matters: It is the foundation on which to build your network. The changing nature of work, including the fact that people may switch jobs frequently or be employed under a variety of types of agreements, will require the ability to present a compelling profile of who you are, and communicate this to your peers and potential collaborators. Here’s where your platform will find its outward presentation — where you can bundle your various talents, skills, aptitudes, and interests to present to prospective employers, mentors, and others you’ll work with or for. People at all stages of their career will need to do this, and as they add new abilities through upskilling, they add to the richness of their profile. You also need to build your network both digitally and physically (when that again becomes possible). If you are looking to change jobs, you should start by looking for ways to situate yourself among people who are already doing what you aspire to do, and build your new contacts.


Microsoft Teams' new 'Together mode' aims to make video calls more engaging

On most video calls, eye contact – or the lack of it – is an ongoing problem, with people often appearing to look in the wrong direction. Together mode mimics the geometry of reflection, meaning that every participant is looking at the whole group through a big virtual mirror. “Once direct eye contact errors become hard to detect, people intuitively position themselves to look as if they are reacting to one another appropriately,” Lanier explains. Microsoft said its research has shown that as a result people tend to feel happier and more engaged in meetings. Additionally, everyone in Together mode is in a fixed position. If one person happens to appear in the fourth seat of the bottom row on their own screen, that person would appear in the fourth seat of the bottom row on everyone else’s screen. Angela Ashenden, principal analyst for workplace transformation at CCS Insight, said the combination of both features helps to make the video meetings feel more natural. She notes that if a meeting leader tells everyone to click a button on the right of the screen, you see everyone’s gaze looking in the same direction.


Open source license issues stymie enterprise contributions

"The No. 1 issue [in enterprise open source] is still licensing," said Kevin Fleming, who oversees research and development teams in the office of the CTO at Bloomberg, a global finance, media and tech company based in New York. "But it isn't the licensing discussion that everybody was having five to 10 years ago -- now, the licensing discussion is about really important projects that enterprises depend upon deciding to switch to non-open source licenses." The legal outlook for enterprises has also been further complicated by varied approaches among vendors and open source foundations to copyright agreements, and a general lack of legal precedents to guide corporate counsel on open source IP issues. While Bloomberg's Fleming, and many other enterprise open source contributors, believes new license types such as the server side public license (SSPL) and the Hippocratic License clearly fall outside the bounds of open source, in the wider community, those aren't entirely settled questions. "Open source is bigger than licenses," said Coraline Ada Ehmke, software architect at Stitch Fix, creator of the Hippocratic License and founder of the Ethical Source Working Group.


Agile Initiative Planning with Roadmaps

Plans are critical because they set expectations on the goals, the strategy and the resources you need. They justify the organisation's expenditure on the initiative. They allow you to consider the problems you are likely to incur along the way and develop ways to avoid them. Plans build a bridge between management and the development team. With a plan, you can prepare for different eventualities to improve your chance of success. With a plan, you can get the commitment and resources you need to achieve your objective. Without a plan, it's unlikely that people will give you the funds or resources you need to succeed. Over the last few years, I have developed and refined an Initiative Roadmap process that allows you to define, design and plan an initiative in weeks instead of months or years. In an Initiative Roadmap, you set your goal, strategy and direction in a high-level plan so that you can get the necessary funding and support you need to build a delivery team. When the development team starts, they evolve the plan with business stakeholders to deliver the maximum business value possible within the time and budget available.


Google open-sources Tsunami vulnerability scanner

Google said it designed Tsunami to adapt to these extremely diverse and extremely large networks from the get-go, without the need to run different scanners for each device type. Google said it did this by first splitting Tsunami into two main parts, and then adding an extendable plugin mechanism on top. The first Tsunami component is the scanner itself -- or the reconnaissance module. This component scans a company's network for open ports. It then tests each port and attempts to identify the exact protocols and services running on each, in an attempt to prevent mislabelling ports and testing devices for the wrong vulnerabilities. Google said the port fingerprinting module is based on the industry-tested nmap network mapping engine but also uses some custom code. The second component is the one that's more complex. This one runs based on the results of the first. It takes each device and its exposed ports, selects a list of vulnerabilities to test, and runs benign exploits to check if the device is vulnerable to attacks. The vulnerability verification module is also how Tsunami can be extended through plugins -- the means through which security teams can add new attack vectors and vulnerabilities to check inside their networks.
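Tsunami's two-stage design can be illustrated with a toy version: a reconnaissance pass finds open ports, then pluggable checks run only against what was found. This is a simplified sketch, not Tsunami's actual API, and the plugin shown is a made-up example; real Tsunami plugins fingerprint services and run benign exploits.

```python
# Toy two-stage scanner: stage 1 is reconnaissance (open-port discovery),
# stage 2 runs a list of plugin checks against each open port.
import socket

def find_open_ports(host, ports, timeout=0.3):
    """Stage 1: TCP connect scan over a candidate port list."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

def telnet_exposed_check(host, port):
    """Stage 2 plugin example: flag an exposed telnet service."""
    return {"vuln": "exposed-telnet", "port": port} if port == 23 else None

PLUGINS = [telnet_exposed_check]  # new checks extend the scanner here

def scan(host, ports):
    findings = []
    for port in find_open_ports(host, ports):
        for plugin in PLUGINS:
            result = plugin(host, port)
            if result:
                findings.append(result)
    return findings

print(scan("127.0.0.1", [9, 23]))  # findings (likely empty unless telnet is exposed)
```

The plugin list is the extension point: adding a new vulnerability check means appending a function, without touching the reconnaissance stage, which mirrors how Tsunami separates its scanner from its verification plugins.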


Up Close with Evilnum, the APT Group Behind the Malware

Evilnum's primary goal is to spy on its targets and steal financial data from businesses and their customers. Its attackers have previously stolen spreadsheets and documents with customer lists, investments, and trading operations; internal presentations; software licenses and credentials for trading software and platforms; browser cookies and session data; email credentials; credit card information; and proof of address and identity documents. The group has also obtained access to VPN configurations and other IT-related information. Like many threat groups, Evilnum starts with a phishing email. Messages contain a link to a ZIP file hosted in Google Drive. This archive has multiple LNK files designed to extract and execute a malicious JavaScript component while displaying a fake document. These "shortcut" files have "double extensions" to trick victims into believing they are harmless and opening them. These LNK files all do the same thing: When opened, a file searches its contents for lines with a specific marker and writes them to a JavaScript file. This malicious file is executed and then writes and opens a decoy file with the same name as the LNK file.



Quote for the day:

"Challenges are what make life interesting and overcoming them is what makes life meaningful." -- Joshua J. Marine

Daily Tech Digest - July 09, 2020

Diversity in tech: 3 stories of perseverance and success

It is easy to fall into comfortable patterns. We train for sports by developing muscle memory using repetition to engrain patterns in our brains. It takes an average of 66 days for a behavior to become a habit, and it can require 10 times the effort. Simply stated, hard work and dedication are the foundations for learning, whether learning a new language, improving your golf swing, or rethinking workforce demographics. Organizations are especially resistant to change, requiring cross-organizational commitment and a compelling business imperative. An uncompromising focus on change must cascade throughout an organization and be measured, managed, and reinforced. This resistance to change may explain, at least in part, why the underrepresentation of people of color in technology companies has shown little improvement since 2014. Ideally, the representation of blacks in technology should reflect the overall population, but it does not. According to the Census Bureau, blacks make up 13.4% of the U.S. population but account for only 5% of the workforce at technology companies, with women of color representing even less at 1%.


Pen Testing ROI: How to Communicate the Value of Security Testing

Defining the ROI of pen testing has its nuances, as there are seemingly no tangible results that come directly from the investment. When implementing a pen-testing strategy, you're actively avoiding a breach that could cost your organization money. But the cost of a breach is the most obvious data point for measuring ROI, and those estimates vary widely. My advice? Work toward maturing your security program to a point where the engagement with pen testers is focused on ensuring the effectiveness of existing controls and security touchpoints in your development life cycle — not solely to check a compliance box or single-handedly prevent a breach. Leveraging pen testing throughout the development life cycle can help identify issues in development before deployment rather than the costly discovery of vulnerabilities at a later date. Second, identify metrics, not measurements: Business decisions are often made using measurements, instead of metrics. But in most cases, driving decisions based on measurements (or raw data) can be misleading and end up with business leaders focusing time, effort, and budget on the wrong activities.


How to build a data architecture to drive innovation—today and tomorrow

To scale applications, companies often need to push well beyond the boundaries of legacy data ecosystems from large solution vendors. Many are now moving toward a highly modular data architecture that uses best-of-breed and, frequently, open-source components that can be replaced with new technologies as needed without affecting other parts of the data architecture. The utility-services company mentioned earlier is transitioning to this approach to rapidly deliver new, data-heavy digital services to millions of customers and to connect cloud-based applications at scale. For example, it offers accurate daily views on customer energy consumption and real-time analytics insights comparing individual consumption with peer groups. The company set up an independent data layer that includes both commercial databases and open-source components. Data is synced with back-end systems via a proprietary enterprise service bus, and microservices hosted in containers run business logic on the data. ... Exposing data via APIs can ensure that direct access to view and modify data is limited and secure, while simultaneously offering faster, up-to-date access to common data sets. 


Software Techniques for Lemmings

The performance of a system with thousands of threads will be far from satisfying. Threads take time to create and schedule, and their stacks consume a lot of memory unless their sizes are engineered, which won't be the case in a system that spawns them mindlessly. We have a little job to do? Let's fork a thread, call join, and let it do the work. This was popular enough before the advent of <thread> in C++11, but <thread> did nothing to temper it. I don't see <thread> as being useful for anything other than toy systems, though it could be used as a base class to which many other capabilities would then be added. Even apart from these Thread Per Whatever designs, some systems overuse threads because it's their only encapsulation mechanism. They're not very object-oriented and lack anything that resembles an application framework. So each developer creates his own little world by writing a new thread to perform a new function. The main reason for writing a new thread should be to avoid complicating the thread loop of an existing thread. Thread loops should be easy to understand, and a thread shouldn't try to handle various types of work that force it to multitask and prioritize them, effectively acting as a scheduler itself.
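The thread-loop pattern the author recommends can be sketched as a single long-lived worker draining a queue, rather than forking a thread per task. The passage discusses C++'s <thread>; Python's threading is used here purely to show the shape of the pattern.

```python
# One long-lived worker thread with a simple loop, instead of
# spawning (and paying creation/scheduling/stack costs for) a
# thread per task.
import queue
import threading

task_queue = queue.Queue()
results = []

def worker_loop():
    """The thread loop: easy to understand, one kind of work."""
    while True:
        task = task_queue.get()
        if task is None:      # sentinel value shuts the worker down
            break
        results.append(task * task)
        task_queue.task_done()

worker = threading.Thread(target=worker_loop)
worker.start()
for n in range(5):            # submit work; no per-task thread is created
    task_queue.put(n)
task_queue.put(None)          # request shutdown
worker.join()
print(results)
```

Because all tasks flow through one queue, the worker's loop stays simple, which is exactly the author's point: the thread never has to multitask or act as its own scheduler.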


Cloud Security Mistakes Which Everyone Should Avoid

The cloud can be accessed virtually by anyone possessing the proper credentials, which makes it convenient and vulnerable at the same time. Unlike physical servers that limit the number of admin users and have stricter access permissions, cloud servers can never provide that level of security. That’s why many small business owners around the world still choose web hosting services that operate on physical servers, especially since you’re able to have a whole server just for your website if you choose a dedicated hosting plan. But virtual servers are much easier to access because of their access permissions, which could sometimes be misused. Controlling access to data kept in the cloud is a tricky balancing act between giving people access to the tools they require to get the job done and protecting data from getting into the wrong hands. Efficiently managing the data requires a comprehensive policy that not only controls who can access what data and from where, but involves monitoring to determine who accesses data, when, and from where, to detect potential breaches or any inappropriate access. Therefore, it is vital to educate users on how to secure their cloud sessions, including avoiding public networks and practicing effective password management.


The Modern Hybrid App Developer

One of the most frustrating parts about building apps is the massive headache of releasing and waiting for new updates in the app stores. Because hybrid app developers build a big chunk of their app using web technology, they are able to update their app’s logic and UI in realtime any time they want, in a way that is allowed by Apple and Google because it’s not making binary changes (as long as those updates continue to follow other ToS guidelines). Using a service like Appflow, developers can set up their native Capacitor or Cordova apps to pull in realtime updates across a variety of deployment channels (or environments), and even further customize different versions of their app for different users. Teams use this to fix bugs in their production apps, run a/b tests, manage beta channels, and more. Some services, like Appflow, even support deploying directly to the Apple and Google Play stores, so teams can automate both binary and web updates. This is a major superpower that hybrid app developers have today that native developers do not!


HSBC customers targeted in new smishing scam

The text phishing, or “smishing”, campaign begins with a text message purporting to come from HSBC, informing its target that “a new payment has been made” through the HSBC app on their smartphone. Targets are informed that if they were not responsible for this payment, they should visit a website to validate their bank account. To the untrained eye, the website link – security.hsbc.confirm-systems.com – could conceivably be legitimate, but it should on no account be opened. Victims are then directed to a fake landing page and asked to input their username and password, along with a series of verification steps, on a fraudulent website that uses HSBC branding. The site will also try to extract specific account details and other personally identifiable financial information (PIFI) from its targets. Griffin Law, which works with a number of accountancy groups and financial support teams in the London area, said it had seen a clear spike in reports of the scam, with almost 50 of its customers telling it they had received the smish so far. A number of them said they did not have any HSBC apps installed on their devices, which suggests the scam is quite indiscriminate in its targeting.


Card Skimmer Found Hitting Vulnerable E-Commerce Sites

Despite the large pool of potential targets, Malwarebytes has only been able to identify a few victims. "We found over a dozen websites that range from sports organizations, health, and community associations to (oddly enough) a credit union. They have been compromised with malicious code injected into one of their existing JavaScript libraries," Segura says. Some historical evidence of other victims who have been hit in the past was uncovered as part of his research, he says, but they have since been remediated. The total number of victims is not available. The skimmer steals payment card numbers and also tries to swipe passwords, although the latter functionality is not correctly implemented and does not always work, according to Malwarebytes. Segura says the skimmer is not that different from others currently operating in how it collects and exfiltrates data. The novelty is that it was only found on ASP.NET websites. "The skimmer is embedded in an existing JavaScript library used by a victim site. There are variations on how the code is structured but overall, it performs the same action of contacting remote domains belonging to the threat actor," Segura says.
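Because the skimmer hides inside an existing JavaScript library, one common defence is to verify that the libraries a site serves have not been tampered with. The sketch below is illustrative (not a Malwarebytes tool): it compares a library's hash against a known-good value recorded at deployment time, the same idea behind Subresource Integrity:

```python
import hashlib

# Known-good SHA-256 digests recorded when each library was deployed
# (the values here are illustrative placeholders).
KNOWN_GOOD = {
    "jquery.min.js": hashlib.sha256(b"alert('original library');").hexdigest(),
}

def is_tampered(filename, served_bytes):
    """Return True if the served file no longer matches its recorded hash."""
    expected = KNOWN_GOOD.get(filename)
    actual = hashlib.sha256(served_bytes).hexdigest()
    return expected is None or actual != expected

print(is_tampered("jquery.min.js", b"alert('original library');"))    # False
print(is_tampered("jquery.min.js", b"alert('orig');/*skimmer*/"))     # True
```

Any injected skimmer code changes the digest, so the check flags the compromise even when the malicious addition is only a few bytes.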


MongoDB is subject to continual attacks when exposed to the internet

After seeing how consistently database breaches were occurring, Intruder planted honeypots to find out how these attacks happen, where the threats are coming from, and how quickly they take place. Intruder set up a number of unsecured MongoDB honeypots across the web, each filled with fake data. The network traffic was monitored for malicious activity: if password hashes were exfiltrated and seen crossing the wire, this would indicate that a database had been breached. The research shows that MongoDB is subject to continual attacks when exposed to the internet. Attacks are carried out automatically and indiscriminately, and on average an unsecured database is compromised less than 24 hours after going online. ... Attacks originated from locations all over the globe, though attackers routinely hide their true location, so there’s often no way to tell where attacks are really coming from. The fastest breach came from an attacker using the Russian ISP Skynet, and over half of the breaches originated from IP addresses owned by a Romanian VPS provider.
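The detection idea described here can be sketched simply (an illustrative toy, not Intruder's actual tooling): seed the honeypot with a unique fake credential hash, then flag a breach the moment that marker is seen crossing the wire in outbound traffic:

```python
import hashlib

# Plant a unique marker: the hash of a fake credential seeded into the honeypot.
PLANTED_HASH = hashlib.sha256(b"fake-user:fake-password").hexdigest()

def breached(outbound_packets):
    """Return True if the planted hash is seen leaving the honeypot."""
    return any(PLANTED_HASH in packet for packet in outbound_packets)

print(breached(["GET /status HTTP/1.1"]))              # False
print(breached(["POST /exfil data=" + PLANTED_HASH]))  # True
```

Because the planted value exists nowhere else, any sighting of it in network traffic is unambiguous evidence that the database contents were dumped.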


How data is fundamental to manufacturing’s digital transformation

The key to creating and deploying an effective data strategy comes down to three factors: sponsorship, a standardised platform and robust governance. Sponsorship is vital, according to Greg Hanson, particularly in larger organisations where buy-in can be more difficult to achieve. “Additionally, the successful deployment of that strategy requires engagement with the organisation as a whole, and a cultural acceptance of responsibility regarding data given GDPR and privacy laws,” he added. Helping to drive this combination of board-level sponsorship and enterprise-wide engagement are Chief Data Officers, newly-created executive roles tasked with deploying and monitoring the effectiveness of data strategies and the adoption of modern, cloud-based architectures – the foundation of many industrial digital transformation initiatives. “There are so many technologies readily available in the cloud space now that companies face the risk of ‘cloud sprawl’ which degrades the impact of their digital transformation and data management,” Hanson continued.



Quote for the day:

"Leadership occurs any time you attempt to influence the thinking, development or beliefs of somebody else." -- Dr. Ken Blanchard

Daily Tech Digest - July 08, 2020

Why Are Real IT Cyber Security Improvements So Hard to Achieve?

It’s easy to point fingers in various directions to try to explain why we have done such a poor job of improving IT security over the years. Unfortunately, most of the places at which blame is typically directed bear limited, if any, responsibility for our lack of security. It’s hard to deny that software is more complex today than it was 10 or 20 years ago. The cloud, distributed infrastructure, microservices, containers and the like have led to software environments that change faster and involve more moving pieces. It’s reasonable to argue that this added complexity has made modern environments more difficult to secure. There may be some truth to this. But, on the flipside, you have to remember that the complexity brings new security benefits, too. In theory, distributed architectures, microservices and other modern models make it easier to isolate or segment workloads in ways that should mitigate the impact of a breach. Thus, I think it’s simplistic to say that the reason IT cyber security remains so poor is that software has grown more complex, and that security strategies and tools have not kept pace. You could argue just as plausibly that modern architectures should have improved security.


Facebook is recycling heat from its data centers to warm up these homes

The tech giant stressed that the heat distribution system it has developed uses exclusively renewable energy. The data center is entirely supplied by wind power, and Fjernvarme Fyn's facility only uses pumps and coils to transfer the heat. As a result, the project is expected to reduce Odense's demand for coal by up to 25%. Although Facebook is keen to use the heat recovery system in other locations, the company didn't reveal any plans to export the technology just yet. "Our ability to do heat recovery depends on a number of factors, so we will evaluate them first," said Edelman. For example, the proximity of the data center to the community it can provide heat for will be a key criterion to consider. Improving data centers' green credentials has been a priority for technology companies as of late. Google recently showcased a new tool that can match the timing of some compute tasks in data centers to the availability of lower-carbon energy. The platform can shift non-urgent workloads to times of the day when wind or solar sources of energy are more plentiful. The search giant is aiming for "24x7 carbon-free energy" in all of its data centers, which means constantly matching facilities with sources of carbon-free power.


Understanding When to Use a Test Tool vs. a Test System

A system is a group of parts that interact in concert to form a unified whole. A system has an identifiable purpose. For example, the purpose of a school system is to educate students. The purpose of a manufacturing system is to produce one or more end products. In turn, the purpose of a testing system is to ensure that features and functions within the scope of the software's entire domain operate to specified expectations. Typically a testing system is made up of parts that test specific aspects of the software under consideration. However, unlike a testing tool, which is limited in scope, a testing system encompasses all the testing that takes place within the SDLC. Thus a testing system needs to support all aspects of software testing throughout the SDLC in terms of execution, data collection, and reporting. First and foremost, a testing system needs to be able to control testing workflows. This means that the system can execute tests according to a set of predefined events, for example, when new code is committed to a source control repository, or when a new or updated component is ready to be added to an existing application.
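The event-driven workflow control described above can be sketched as follows. This is an illustrative toy, not any particular CI product; the event names and handlers are invented:

```python
# Illustrative sketch of a testing system that executes test workflows
# in response to predefined events (commit, component ready, etc.).
HANDLERS = {}

def on(event):
    """Register a test workflow to run when the given event fires."""
    def register(fn):
        HANDLERS.setdefault(event, []).append(fn)
        return fn
    return register

@on("code_committed")
def run_unit_tests(payload):
    return f"unit tests run for commit {payload['commit']}"

@on("component_ready")
def run_integration_tests(payload):
    return f"integration tests run for {payload['component']}"

def fire(event, payload):
    """Execute every workflow registered for this event."""
    return [handler(payload) for handler in HANDLERS.get(event, [])]

print(fire("code_committed", {"commit": "abc123"}))
```

Data collection and reporting would hang off the same dispatch point: each fired workflow returns results that the system can aggregate across the SDLC.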


Wi-Fi 6E: When it’s coming and what it’s good for

There’s so much confusion around all the 666 numbers, it’ll scare you to death. You’ve got Wi-Fi 6, Wi-Fi 6E – and Wi-Fi 6 still has additional enhancements coming after that, with multi-user multiple input, multiple output (multi-user MIMO) functionalities. Then there’s the 6GHz spectrum, but that’s not where Wi-Fi 6 gets its name from: It’s the sixth generation of Wi-Fi. On top of all that, we are just getting a handle on 5G and they’re already talking about 6G – seriously, look it up – it's going to get even more confusing. ... The last time we got a boost in UNII-2 and UNII-2 Extended was 15 years ago, and smartphones hadn’t even taken off yet. Now, gaining 1.2GHz of spectrum is enormous. With Wi-Fi 6E, we’re not doubling the amount of Wi-Fi space, we're actually quadrupling the amount of usable space. That’s three, four, or five times more spectrum, depending on where you are in the world. Plus you don't have to worry about DFS [dynamic frequency selection], especially indoors. Wi-Fi 6E is not going to be faster than Wi-Fi 6 and it’s not adding enhanced technology features. The neat thing is that operating in 6GHz will require Wi-Fi 6 or above clients. So, we’re not going to have any slow clients and we’re not going to have a lot of noise.


AI Tracks Seizures In Real Time

In brain science, the current understanding of most seizures is that they occur when normal brain activity is interrupted by a strong, sudden hyper-synchronized firing of a cluster of neurons. During a seizure, if a person is hooked up to an electroencephalograph—a device known as an EEG that measures electrical output—the abnormal brain activity is presented as amplified spike-and-wave discharges. “But the seizure detection accuracy is not that good when temporal EEG signals are used,” Bomela says. The team developed a network inference technique to facilitate detection of a seizure and pinpoint its location with improved accuracy. During an EEG session, a person has electrodes attached to different spots on their head, each recording electrical activity around that spot. “We treated EEG electrodes as nodes of a network. Using the recordings (time-series data) from each node, we developed a data-driven approach to infer time-varying connections in the network or relationships between nodes,” Bomela says. Instead of looking solely at the EEG data—the peaks and strengths of individual signals—the network technique considers relationships.
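The network idea can be illustrated in simplified form (this is not the team's actual algorithm): treat each electrode as a node, compute pairwise correlations between electrode signals over a window, and connect pairs whose signals move together:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def infer_edges(channels, window, threshold=0.8):
    """Connect electrode pairs whose windowed signals correlate strongly."""
    names = list(channels)
    edges = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = pearson(channels[a][window], channels[b][window])
            if abs(r) >= threshold:
                edges.append((a, b, round(r, 2)))
    return edges

# Toy EEG-like data: Fp1 and Fp2 move together, O1 is independent.
eeg = {
    "Fp1": [0.1, 0.5, 0.9, 0.4, 0.2, 0.8],
    "Fp2": [0.2, 0.6, 1.0, 0.5, 0.3, 0.9],
    "O1":  [0.9, 0.1, 0.4, 0.9, 0.1, 0.3],
}
print(infer_edges(eeg, slice(0, 6)))  # [('Fp1', 'Fp2', 1.0)]
```

Recomputing the edges over a sliding window yields the time-varying connections the researchers describe: a sudden hyper-synchronized cluster of neurons shows up as a dense patch of edges appearing in the network.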


How to Calculate ROI on Infrastructure Automation

The equation is simple. You have a long, manual process. You figure out a way to automate it. Ta-da! What once took two hours now takes two minutes. And you save a sweet 118 minutes. If you run this lovely piece of automation very frequently, the value is multiplied. Saving 118 minutes 10 times a day is very significant. Like magic. ... Back to the value formula. In real life, there are more facets to this formula. One of the factors that affect the value you get from automation is how many people have access to it. You can automate something that can potentially run 2,000 times a day, every day; this could be a game-changer in terms of value. But if this is something that 2,000 different people need to do, there is also the question of how accessible your automation is. Getting your automation to run smoothly for other people is not always a piece of cake (“What’s your problem?! It’s in git! Yes, you just get it from there. I’ll send you the link. You don’t have a user? Get a user! You can’t run it? Of course, you can’t, you need a runtime. Just get the runtime. It’s all in the readme! Oh, wait, the version is not in the readme. Get 3.0, it only works with 3.0. Oh, and you edited the config file, right?”).
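The arithmetic above can be made concrete. The hourly cost below is an assumed figure for illustration, not from the article:

```python
# Worked version of the savings arithmetic above.
manual_minutes = 120      # the long, manual process
automated_minutes = 2     # after automation
runs_per_day = 10
hourly_cost = 60          # assumed fully loaded cost in $/hour

saved_per_run = manual_minutes - automated_minutes   # 118 minutes
saved_per_day = saved_per_run * runs_per_day         # 1180 minutes
dollars_per_day = saved_per_day / 60 * hourly_cost

print(saved_per_run, saved_per_day, round(dollars_per_day, 2))
# 118 1180 1180.0
```

Multiplying by how many people can actually run the automation (the accessibility factor the article raises) then scales, or shrinks, that daily figure.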


The most in-demand IT staff companies want to hire

Companies want people who are good communicators and who will be proactive--for example, quickly addressing a support ticket that comes in in the morning, so users don't have to wait, Wallenberg added. In terms of security hiring trends, "there have always been really brilliant people who can sell the need for security to the business,'' and that is needed now more than ever in IT, he said. "In a perfect world, it shouldn't have taken high-profile breaches of personal and identifiable information for companies to wake up and say we need to invest more money in it. So security leadership and, further down the pole, they have to sell their vision on steps they need to take to more systematically ensure systems are safe and companies are protected from threats." Because of the current climate, it is also critical that companies are prepared to handle remote onboarding of new tech team members, Wallenberg said. "Companies that adopted a cloud-first strategy years ago are in a much better position to onboard [new staff] than people who need an office network to connect,'' he said. 


An enterprise architect's guide to the data modeling process

Conceptual modeling in the process is normally based on the relationship between application components. The model assigns a set of properties for each component, which will then define the data relationships. These components can include things like organizations, people, facilities, products and application services. The definitions of these components should identify business relationships. For example, a product ships from a warehouse, and then to a retail store. An effective conceptual data model diligently traces the flow of these goods, orders and payments between the various software systems the company uses. Conceptual models are sometimes translated directly into physical database models. However, when data structures are complex, it's worth creating a logical model that sits in between. It populates the conceptual model with the specific parametric data that will, eventually, become the physical model. In the logical modeling step, create unique identifiers that define each component's property and the scope of the data fields.
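The components and relationships described above can be sketched as follows. The entity names are hypothetical examples, not a prescribed schema; each component carries a unique identifier, as the logical-modeling step requires:

```python
import uuid
from dataclasses import dataclass, field

# Hypothetical entities illustrating the conceptual model: each component
# gets a unique identifier, and a Shipment relationship traces the flow of
# goods from a warehouse to a retail store.
def new_id():
    return str(uuid.uuid4())

@dataclass
class Warehouse:
    name: str
    id: str = field(default_factory=new_id)

@dataclass
class Store:
    name: str
    id: str = field(default_factory=new_id)

@dataclass
class Shipment:
    product: str
    origin: Warehouse
    destination: Store
    id: str = field(default_factory=new_id)

wh = Warehouse("East Coast DC")
store = Store("Main Street Retail")
shipment = Shipment("widgets", wh, store)
print(shipment.origin.name, "->", shipment.destination.name)
```

Translating this into a physical database model would turn each dataclass into a table and each embedded reference into a foreign key on the component's unique identifier.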


Microsoft's ZeRO-2 Speeds up AI Training 10x

Recent trends in NLP research have seen improved accuracy from larger models trained on larger datasets. OpenAI have proposed a set of "scaling laws" showing that model accuracy has a power-law relation with model size, and recently tested this idea by creating the GPT-3 model which has 175 billion parameters. Because these models are simply too large to fit in the memory of a single GPU, training them requires a cluster of machines and model-parallel training techniques that distribute the parameters across the cluster. There are several open-source frameworks available that implement efficient model parallelism, including GPipe and NVIDIA's Megatron, but these have sub-linear speedup due to the overhead of communication between cluster nodes, and using the frameworks often requires model refactoring. ZeRO-2 reduces the memory needed for training using three strategies: reducing model state memory requirements, offloading layer activations to the CPU, and reducing memory fragmentation. 
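As a rough illustration of the three strategies, a DeepSpeed-style configuration sketch is shown below. The exact keys vary by version, so treat the field names as assumptions rather than a reference:

```python
# Illustrative, DeepSpeed-style configuration sketch (keys are assumptions).
zero2_config = {
    "train_batch_size": 512,
    "fp16": {"enabled": True},           # smaller model state in memory
    "zero_optimization": {
        "stage": 2,                      # partition optimizer state and gradients
        "cpu_offload": True,             # push what the GPU doesn't need to CPU
        "contiguous_gradients": True,    # reduce memory fragmentation
    },
}
print(zero2_config["zero_optimization"]["stage"])  # 2
```

The appeal over frameworks like GPipe or Megatron is that options of this shape are applied as configuration, rather than requiring the model code itself to be refactored for parallelism.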


The unexpected future of medicine

Along with robots, drones are being enlisted as a way of stopping the person-to-person spread of coronavirus. Deliveries made by drone rather than by truck, for example, remove the need for a human driver who may inadvertently spread the virus. A number of governments have already drafted drones in to help with distributing PPE to hospitals in need of kit: in the UK, a trial of drones taking equipment from Hampshire to the Isle of Wight was brought forward following the COVID-19 outbreak. In Ghana, drones have also been put to work collecting patient samples for coronavirus testing, bringing them from rural areas to hospitals in more populous regions. Meanwhile, in several countries, drones are also being used to drop off medicine to people in remote communities or those who are sheltering in place. Drones have also been used to disinfect outdoor markets and other areas to slow the spread of the disease. And in South Korea, drones have been drafted in to celebrate healthcare workers and spread public health messages, such as reminding people to continue wearing masks and washing their hands.



Quote for the day:

"Leaders must encourage their organizations to dance to forms of music yet to be heard." -- Warren G. Bennis

Daily Tech Digest - July 07, 2020

Taking Steps To Boost Automated Cloud Governance

Lippis says cloud providers often talk about a shared responsibility model where the users take active roles in the process. The trouble is that the feedback and communication organizations receive is not always clear. He compared cloud providers to landlords who maintain and upgrade apartment buildings with the users as the tenants. Updating the property is the landlord’s responsibility. However, some cloud providers do not always provide much information about what is being changed and upgraded, Lippis says. Such breakdowns in communication and control could throw enterprises out of compliance, he says, which might not be known until an audit is conducted. There is a need for better transparency, Lippis says, so organizations know what is happening when changes are made or events occur. This can be of particular concern when organizations adopt multicloud approaches, matching workloads to different cloud providers. Security questions may arise because each cloud provider might communicate information to users in varied ways. “It could be the same kind of event, but they’re all coded differently,” Lippis says. “The syntax is different.”


Applying the 80-20 Rule to Cybersecurity

According to Mike Gentile, president and CEO at CISOSHARE and someone who has worked as a chief information security officer for many years, a lot has changed in the security space by 2020, but two things remain the same. First, senior executives don't prioritize cybersecurity enough for security programs to be fully effective. Second, the reason for the first point is not that executives don't care (they do, and they don't want their name in the headlines after a breach) but that they lack a clear definition of security. Each organization's unique definition of security should be set forth in a security charter document, which prescribes a mission and mandate for the security program as well as governance structures and clarified roles and responsibilities. More specifically, the charter defines how and where the security organization reports and answers questions such as: Should the business have a CISO, and should the position report to IT or to the CEO? Typically, a consultant's answer would be "It depends." But don't let that end the discussion: For any one business, there is one right answer.


Talking Digital Future: Artificial Intelligence

This topic is especially cool in the healthcare domain. Think about how medicine works today. Medical practitioners go to school for many, many years, memorize a lot of information, then treat patients, get experience, and over the span of their career, become quite good at what they do. However, they are ultimately subject to the weaknesses of their own mortal existence. They can forget things; they can be absent-minded or, you know, just not connect the dots sometimes. Now, if we can equip a physician with a computer that improves memory, options and optimization, the tools and the ability to provide medical aid suddenly change. Let’s look at IBM’s AI initiative Watson combined with an oncologist treating a cancer patient, for example. Each patient is different, so the doctor wants to have as many details as possible about this type of cancer and the patient’s medical history to make the best treatment plan. An AI-augmented device produced for the doctor’s team could generate a scenario based on the data of every patient that has had this particular set of circumstances and that person’s characteristics.


How Agile Turns Risk Into Opportunity

Changing the way large numbers of people in a corporation think is a monumental undertaking. It doesn’t come easily or quickly. But what is the alternative? Firms not operating in this way have been struggling, even in normal times, and they are steadily going out of business, exactly as Nokia was forced out of the phone business despite its massive wealth and large market share. Nokia didn’t change in 2010 because of a crisis or because it wanted to: it had to change because its phone business was bankrupt, even though it had been the dominant phone firm in the whole wide world, only three years before. That kind of story is now playing out, in sector after sector all around the world. As a result, there is now huge interest, even in large corporations, to find out what’s involved and learn how to think differently. ... But today, for most people, these changes make life quicker, simpler, more convenient, and, let’s face it: generally better. And people have responded with their wallets. The firms that provide these services have earned their profits and their stratospheric valuations. They have changed our lives fundamentally.


With eCommerce on the rise, tokenization is the ticket to taming fraud

With more merchants and retailers adopting tokenization technology, Visa is scaling our credential-on-file tokenization efforts. Since our first merchant began processing card-on-file tokens in 2017, we have seen more than 13,000 merchants start transacting with Visa tokens. In addition to enhancing security, tokenization also helps reduce friction in the payment process, because customers do not have to manually update stored card information if their Visa card is lost, stolen or expires. Instead, financial institutions can automatically update expired or compromised payment credentials. This can reduce missed payments for merchants, and help consumers avoid unwanted late payment fees or charges. Looking ahead, we are unveiling Token ID, a new solution stemming from our acquisition of the Rambus Payments token services business that expands Visa’s tokenization across all global and domestic networks, as well as tokenizing beyond card use cases. In addition, we are looking for ways to centralize and simplify token management through integration with our CyberSource platform to help to secure customer payment data, improve payment conversions and ease PCI compliance implications. 
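The core mechanic of tokenization can be sketched in a few lines. This is a purely illustrative toy; real network tokenization involves coordination between the merchant, the network and the issuing bank:

```python
import secrets

# Illustrative token vault: the merchant stores only the token, while the
# vault holds the token -> card-number mapping. A stolen token is useless
# to anyone without access to the vault.
_vault = {}

def tokenize(pan):
    """Replace a card number (PAN) with a random surrogate token."""
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = pan
    return token

def detokenize(token):
    """Recover the card number; only the vault can do this."""
    return _vault.get(token)

token = tokenize("4111111111111111")
print(token.startswith("tok_"), detokenize(token) == "4111111111111111")
# True True
```

The lifecycle benefit the article describes follows from this indirection: when a card is reissued, the vault remaps the existing token to the new card number, and the merchant's stored token keeps working without the customer re-entering anything.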


Debunking the Myths about Artificial Intelligence

Organizations should not look for decades of experience in any given field of science if the entire organization is new to that field. Culture will eat those kinds of unconscious attempts. First, we need advocates to focus on people, character, and talent, not tech per se. Transformation starts at the individual level. You might respond that "speed is important," but that sense of urgency comes from FOMO, organizational isomorphism, and a hunger for speed brought on by digital disruption. When organizations see the AI show-offs by disruptors, they impatiently read them as overnight successes or failures. Only once you have built the foundation for an appropriate digital culture can you elaborate leaner, faster, better AI initiatives. Finally, among the 5W1H questions about AI, "Why" and "How" are critical rather than "What." We should not rush directly into learning the new digital technologies. Rather, we should focus on why and how those technologies took off now rather than a decade ago, even though they had existed in the literature for decades.


Remote workers aren't taking security seriously. Now that has to change

Darren Fields, VP of Networking EMEA at Citrix, told TechRepublic: "The rapid shift to working from home has created the conditions for shadow IT to become an increasingly important issue. Whilst it is understandable that employees needed to adapt quickly to new pressures and concerns, given the global pandemic, it is important that businesses tighten up on these procedures going forward in order to safeguard their organisation from external threats." Citrix isn't the only organization to have spotted this trend: a recent study from Trend Micro also found people showing a lax attitude to following their company's IT security policies, with 56% of respondents admitting to using a non-work application on a work device and a third saying they did not give much thought to whether the apps they use are approved by IT or not. Earlier research also commissioned by Citrix found that seven in 10 respondents were concerned about information security as a result of employees using shadow IT or unsanctioned software, with three in five seeing shadow IT as a significant risk to their organisation's data compliance.


Smarter spending can accelerate Covid-19 recovery and renewal

Decision makers must not fear spending unless it is done on the wrong things. Prioritise and accelerate income-generated activities, whilst carefully reassessing the risk of business activities that rely on consumer presence and human interaction, considering the safety of staff and customers. Business activities that aren’t delivering value, either as revenue or investment, should be deprioritised.  ... Openly discuss emotions and their power to obstruct recovery. When problems arise, work through diagnostics calmly, utilising the information gathered to earn revenue in the new situations. Although we can’t use past data to predict the future with certainty, we can take advantage of early indicators of revenue recovery. Actively seek out more useful data, but be wary of confirmation bias — interpreting data as a validation of preconceived ideas. ... Confront preconceptions in a challenging market. Communicate clearer business vision to overcome emotional reactions, adapting to find the right balance between positive affirmation and realistic expectations. Inform investors and suppliers of business expectations, building confidence that you’re best able to manage the risks through innovation.


How to select the right IoT database architecture

Static databases, also known as batch databases, manage data at rest. Data that users need to access resides as stored data managed by a database management system (DBMS). Users make queries and receive responses from the DBMS, which typically, but not always, uses SQL. A streaming database handles data in motion. Data constantly streams through the database, with a continuous series of posed queries, typically in a language specific to the streaming database. The streaming database's output may ultimately be stored elsewhere, such as in the cloud, and accessed via standard query mechanisms. Streaming databases are typically distributed to handle the scale and load requirements of vast volumes of data. Currently, there are a range of commercial, proprietary and open source streaming systems, including Google Cloud Dataflow, Microsoft StreamInsight, Azure Stream Analytics, IBM InfoSphere Streams and Amazon Kinesis. Open source systems are largely based around Apache and include Apache Spark Streaming provided by Databricks, Apache Flink provided by Data Artisans, Apache Kafka provided by Confluent and Apache Storm, which originated at Twitter.
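The static-versus-streaming distinction is easiest to see in miniature. The illustrative sketch below runs a continuous standing query (a windowed average) over data in motion, re-evaluated as each event arrives, rather than a one-off query over data at rest:

```python
from collections import deque

# Illustrative continuous query over data in motion: keep a sliding window
# of the most recent readings and re-evaluate the query on every event.
window = deque(maxlen=3)

def on_event(reading):
    window.append(reading)
    return round(sum(window) / len(window), 2)   # the standing query

stream = [10, 12, 50, 11, 9]
averages = [on_event(r) for r in stream]
print(averages)  # [10.0, 11.0, 24.0, 24.33, 23.33]
```

A static DBMS would answer "what was the average?" once, after the fact; the streaming model keeps the answer continuously current as data flows through, which is the property that matters for IoT workloads.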


11 Patterns to Secure Microservice Architectures

Third-party dependencies make up 80% of the code you deploy to production. Many of the libraries we use to develop software depend on other libraries. Transitive dependencies lead to a large chain of dependencies, some of which might have security vulnerabilities. You can use a scanning program on your source code repository to identify vulnerable dependencies. You should scan for vulnerabilities in your deployment pipeline, in your primary line of code, in released versions of code, and in new code contributions. ... You should use HTTPS everywhere, even for static sites. If you have an HTTP connection, change it to an HTTPS one. Make sure all aspects of your workflow—from Maven repositories to XSDs—refer to HTTPS URIs. HTTPS is simply HTTP run over Transport Layer Security (TLS), which is designed to ensure privacy and data integrity between computer applications. How HTTPS Works is an excellent site for learning more about HTTPS. ... OAuth 2.0 has provided delegated authorization since 2012. OpenID Connect added federated identity on top of OAuth 2.0 in 2014. Together, they offer a standard spec you can write code against and have confidence that it will work across IdPs.
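As a small illustration of the "HTTPS everywhere" pattern, a build step might scan configuration and workflow files for plain-HTTP references and fail before deployment. This is a hypothetical check, not one of the article's named tools:

```python
import re

def insecure_uris(text):
    """Flag plain-HTTP references so the build can fail before deployment."""
    return re.findall(r"http://[^\s\"']+", text)

config = """
repository: https://repo.example.com/maven2
schema: http://schemas.example.com/model.xsd
"""
print(insecure_uris(config))  # ['http://schemas.example.com/model.xsd']
```

The same gate-everything idea applies to the dependency-scanning pattern: run the check in the pipeline, on the main line, on releases, and on every new contribution.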



Quote for the day:

"Your greatest area of leadership often comes out of your greatest area of pain and weakness." -- Wayde Goodall

Daily Tech Digest - July 06, 2020

Benefits of RPA: RPA Best Practices for successful digital transformation

A main benefit of RPA solutions is that they reduce human error while enabling employees to feel more human by engaging in conversations and assignments that are more complex but could also be more rewarding. For instance, instead of having a contact center associate enter information while also speaking with a customer, an RPA solution can automatically collect, upload, or sync data with other systems for the associate to approve while they focus on forming an emotional connection with the customer. Another impact of RPA is that it can facilitate and streamline employee onboarding and training. An RPA tool, for instance, can pre-populate forms with the new hire’s name, address, and other key data from the resume and job application form, saving the employee time. For training, RPA can conduct and capture data from training simulations, allowing a global organization to ensure all employees receive the same information in a customized and efficient manner. RPA is not for every department and it’s certainly not a panacea for retention and engagement problems. But by thinking carefully about the benefits that it offers to employees, RPA can transform workflows—making employees’ jobs less robotic and more rewarding.
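The onboarding example can be sketched as follows. This is a toy illustration, not any specific RPA product; the field and form names are invented:

```python
# Toy illustration of the onboarding automation described above: key data
# from the parsed job application pre-populates each of the new hire's forms.
application = {"name": "Jane Doe", "address": "12 Main St", "role": "Analyst"}

FORM_FIELDS = {
    "payroll":  ["name", "address"],
    "it_setup": ["name", "role"],
}

def prefill(form_name):
    """Build a pre-populated form from the application data."""
    return {field: application[field] for field in FORM_FIELDS[form_name]}

print(prefill("payroll"))  # {'name': 'Jane Doe', 'address': '12 Main St'}
```

The value is exactly what the article claims: the same source data flows into every form once, instead of being re-keyed by the employee for each system.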


Hey Alexa. Is This My Voice Or a Recording?

The idea is to quickly detect whether a command given to a device is live or prerecorded. It's a tricky proposition, given that a recorded voice has characteristics similar to a live one. "Such attacks are known as one of the easiest to perform as it simply involves recording a victim's voice," says Hyoungshick Kim, a visiting scientist at CSIRO. "This means that not only is it easy to get away with such an attack, it's also very difficult for a victim to work out what's happened." The impact can range from using someone else's credit card details to make purchases, to controlling connected devices such as smart appliances, to accessing personal data such as home addresses and financial data, he says. The voice-spoofing problem has been tackled by other research teams, which have come up with solutions. In 2017, 49 research teams submitted research for the ASVspoof 2017 Challenge, a project aimed at developing countermeasures for automatic speaker verification spoofing. The competition produced one technology that had a low error rate compared to the others, but it was computationally expensive and complex, according to Void's research paper.


Reduce these forms of AI bias from devs and testers

Cognitive bias means that individuals think subjectively, rather than objectively, and therefore influence the design of the product they're creating. Humans filter information through their unique experience, knowledge and opinions. Development teams cannot eliminate cognitive bias in software, but they can manage it. Let's look at the biases that most frequently affect quality, and where they appear in the software development lifecycle. Use the suggested approaches to overcome cognitive biases, including AI bias, and limit their effect on software users. A person knowledgeable about a topic finds it difficult to discuss it from a neutral perspective. The more the person knows, the harder neutrality becomes. That bias manifests within software development teams when experienced or exceptional team members believe that they have the best solution. Infuse the team with new members to offset some of the bias that occurs with subject matter experts. Cognitive bias often begins in backlog refinement. Preconceived notions about application design can affect team members' critical thinking. During sprint planning, teams can fall into the planning fallacy: underestimating the actual time necessary to complete a user story.


Deploying the Best of Both Worlds: Data Orchestration for Hybrid Cloud

A different approach to bridging the worlds of on-prem data centers and the growing variety of cloud computing services is offered by a company called Alluxio. The company has been focused on solving this problem since its roots in UC Berkeley's AMPLab. Alluxio decided to bring data to compute in a different way: essentially, the technology provides an in-memory cache that sits between cloud and on-prem environments. Think of it as a new spin on data virtualization, one that leverages an array of cloud-era advances. According to Alex Ma, director of solutions engineering at Alluxio: "We provide three key innovations around data: locality, accessibility and elasticity. This combination allows you to run hybrid cloud solutions where your data still lives in your data lake." The key, he said, is that "you can burst to the cloud for scalable analytics and machine-learning workloads where the applications have seamless access to the data and can use it as if it were local--all without having to manually orchestrate the movement or copying of that data."
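The core idea — hot data served locally, cold data fetched once from a slow remote store — can be illustrated with a toy read-through cache. This is a conceptual sketch of the pattern, not Alluxio's actual API; the paths and fetch function are made up:

```python
class ReadThroughCache:
    """Toy in-memory read-through cache sitting between compute and a
    remote store -- the core idea behind data-locality layers like Alluxio.
    (Conceptual sketch only; not Alluxio's actual API.)"""

    def __init__(self, remote_fetch):
        self._fetch = remote_fetch   # callable: path -> bytes (slow, remote)
        self._cache = {}             # path -> bytes held locally in memory
        self.remote_reads = 0

    def read(self, path):
        # Hot data is served locally; cold data is fetched once, then cached.
        if path not in self._cache:
            self._cache[path] = self._fetch(path)
            self.remote_reads += 1
        return self._cache[path]

def slow_remote_fetch(path):
    """Stand-in for an on-prem data lake or cloud object store."""
    return f"contents of {path}".encode()

cache = ReadThroughCache(slow_remote_fetch)
cache.read("s3://lake/events.parquet")  # first read goes to the remote store
cache.read("s3://lake/events.parquet")  # repeat read is served from memory
print(cache.remote_reads)               # -> 1
```

The application code sees one `read` interface either way, which is the "seamless access" Ma describes: the cache layer, not the application, decides whether data is local or must be pulled from the lake.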


Redis and open source succession planning

Speaking of the intersection of open source software development and cloud services, open source luminary Tim Bray has said, “The qualities that make people great at carving high-value software out of nothingness aren’t necessarily the ones that make them good at operations.” The same can be said of maintaining open source projects. Just because you’re an amazing software developer doesn’t mean you’ll be a great software maintainer, and vice versa. Perhaps more pertinently to the Sanfilippo example, developers may be good at both, yet not be interested in both. (By all accounts Sanfilippo has been a great maintainer, though he’s the first to say he could become a bottleneck because he liked to do much of the work himself rather than relying on others.) Sanfilippo has given open source communities a great example of how to think about “career” progression within these projects, but the same principle applies within enterprises. Some developers will thrive as managers (of people or of their code), but not all. As such, we need more companies to carve out non-management tracks for their best engineers, so developers can progress their career without leaving the code they love. 


How data science delivers value in a post-pandemic world

The uptick in the need for data science across industries comes with the need for data science teams. While hiring may have slowed in the tech sector – Google scaled back its hiring during the pandemic – data science professionals are still in high demand. However, it's important to keep a close eye on how these teams continue to evolve. One position that is increasingly in demand as businesses become more data-driven is the Algorithm Translator. This person is responsible for translating business problems into data problems and, once the data answer is found, articulating it back into an actionable solution that business leaders can apply. The Algorithm Translator must first break the problem statement down into use cases, connect those use cases with the appropriate data sets, and understand any limitations of the data sources so the problem is ready to be solved with data analytics. Then, to translate the data answer into a business solution, the Algorithm Translator must stitch the insights from the individual use cases together into a digestible data story that non-technical team members can put into action.


Open source contributions face friction over company IP

Why the change? Companies that have established open source programs say the most important factor is developer recruitment. "We want to have a good reputation in the open source world overall, because we're hiring technical talent," said Bloomberg's Fleming. "When developers consider working for us, we want other people in the community to say 'They've been really contributing a lot to our community the last couple years, and their patches are always really good and they provide great feedback -- that sounds like a great idea, go get a job there.'" Companies whose developers contribute code to open source produce that code on company time, but they also benefit from the labor of all the other organizations that contribute to the codebase. Making code public also forces engineers to adhere more strictly to best practices than if it were kept under wraps, and it helps novice developers get used to seeing clean code.


How Ekans Ransomware Targets Industrial Control Systems

The Ekans ransomware begins its attack by attempting to confirm its target. It does this by resolving the domain of the targeted organization and comparing the result to a preprogrammed list of IP addresses, the researchers note. If the domain doesn't match the IP list, the ransomware aborts the attack. "If the domain/IP is not available, the routine exits," the researchers add. If the ransomware does find a match between the targeted domain and the list of approved IP addresses, Ekans then infects the domain controller on the network and runs commands to isolate the infected system by disabling the firewall, according to the report. The malware then identifies and kills running processes and deletes the shadow copies of files, which makes recovering them more difficult, Hunter and Gutierrez note. In the final stage of the attack, the malware uses RSA-based encryption to lock the target organization's data and files. It also displays a ransom note demanding an undisclosed amount in exchange for decrypting the files. If the victim fails to respond within the first 48 hours, the attackers threaten to publish the data, according to the Ekans ransom note recovered by the FortiGuard researchers.
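The target-confirmation step described above amounts to resolving a domain and checking the result against a hard-coded list of IP addresses — a pattern defenders can replicate when reasoning about such kill-switch conditions in a sandbox. A minimal, benign illustration (the IP list here is made up for the example):

```python
import socket

# Hypothetical hard-coded target list, standing in for the
# preprogrammed IP addresses the researchers describe.
APPROVED_IPS = {"203.0.113.10", "203.0.113.11"}

def is_intended_target(domain):
    """Resolve a domain and check it against a preprogrammed IP list.

    Ekans-style malware aborts when this check fails; the same logic,
    run in a sandbox, helps analysts find what conditions a sample needs
    before it will detonate.
    """
    try:
        resolved = socket.gethostbyname(domain)
    except socket.gaierror:
        return False  # "If the domain/IP is not available, the routine exits"
    return resolved in APPROVED_IPS

print(is_intended_target("localhost"))  # 127.0.0.1 is not on the list -> False
```

This gating behavior is why such samples often appear inert in generic analysis environments: unless the sandbox's DNS answers match the hard-coded list, the routine exits before any malicious stage runs.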


The best SSDs of 2020: Supersized 8TB SSDs are here, and they're amazing

If performance is paramount and price is no object, Intel’s Optane SSD 905P is the best SSD you can buy, full stop—though the 8TB Sabrent Rocket Q NVMe SSD discussed above is a strong contender if you need big capacities and big-time performance. Intel’s Optane drive doesn’t use traditional NAND technology like other SSDs; instead, it’s built around the futuristic 3D XPoint technology developed by Micron and Intel. Hit that link if you want a tech deep-dive, but in practical terms, the Optane SSD 905P absolutely plows through our storage benchmarks and carries a ridiculous 8,750TBW (terabytes written) endurance rating, compared to the roughly 200TBW offered by many NAND SSDs. If that holds true, this blazing-fast drive is basically immortal—and it looks damned good, too. But you pay for the privilege of bleeding-edge performance. Intel’s Optane SSD 905P costs $600 for a 480GB version and $1,250 for a 1.5TB model, with several additional options available in both the U.2 and PCIe add-in-card form factors. That’s significantly more expensive than even high-end NAND-based NVMe SSDs—and like those, the benefits of Intel’s SSD will be most obvious to people who move large amounts of data around regularly.


SRE: A Human Approach to Systems

Failure will happen, incidents will occur, and SLOs will be breached. These things may be difficult to face, but part of adopting SRE is acknowledging that they are the norm. Systems are made by humans, and humans are imperfect. What’s important is learning from these failures and celebrating the opportunity to grow. One way to foster this culture is to prioritize psychological safety in the workplace. The power of safety is real but often overlooked. Industry thought leaders like Gene Kim have long promoted the importance of feeling safe to fail. Kim addresses the issue of psychological insecurity in his novel, “The Unicorn Project.” The main character, Maxine, has been shunted from a highly functional team to Project Phoenix, where mistakes are punishable by firing. Kim writes: “She’s [Maxine] seen the corrosive effects that a culture of fear creates, where mistakes are routinely punished and scapegoats fired. Punishing failure and ‘shooting the messenger’ only cause people to hide their mistakes, and eventually, all desire to innovate is completely extinguished.”



Quote for the day:

"Education: the path from cocky ignorance to miserable uncertainty." -- Mark Twain