Daily Tech Digest - October 03, 2020

Years-Long ‘SilentFade’ Attack Drained Facebook Victims of $4M

“Our investigation uncovered a number of interesting techniques used to compromise people with the goal to commit ad fraud,” said Sanchit Karve and Jennifer Urgilez with Facebook, in a Thursday analysis unveiled this week at the Virus Bulletin 2020 conference. “The attackers primarily ran malicious ad campaigns, often in the form of advertising pharmaceutical pills and spam with fake celebrity endorsements.” Facebook said that SilentFade was not downloaded or installed by using Facebook or any of its products. It was instead usually bundled with potentially unwanted programs (PUPs). PUPs are software programs that a user may perceive as unwanted; they may use an implementation that can compromise privacy or weaken user security. In this case, researchers believe the malware was spread via pirated copies of popular software (such as the CorelDRAW Graphics Suite graphic design software for vector illustration and page layout). Once installed, SilentFade stole Facebook credentials and cookies from various browser credential stores, including Internet Explorer, Chromium and Firefox.


How to be great at people analytics

Most companies still face critical obstacles in the early stages of building their people analytics capabilities, preventing real progress. The majority of teams are still in the early stages of cleaning data and streamlining reporting. Interest in better data management and HR technologies has been intense, but most companies would agree that they have a long way to go. Leaders at many organizations acknowledge that what they call their “analytics” is really basic reporting with little lasting impact. For example, a majority of North American CEOs indicated in a poll that their organizations lack the ability to embed data analytics in day-to-day HR processes consistently and to use analytics’ predictive power to propel better decision making. This challenge is compounded by the crowded and fragmented landscape of HR technology, which few organizations know how to navigate. So, while the majority of people analytics teams are still taking baby steps, what does it mean to be great at people analytics? We spoke with 12 people analytics teams from some of the largest global organizations in various sectors—technology, financial services, healthcare, and consumer goods—to try to understand what teams are doing, the impact they are having, and how they are doing it.


6 Data Management Tips for Small Business Owners

You might not have the vast resources and people-power of your larger competitors, but even small e-commerce organizations can glean useful insights from data if it is presented in an engaging way. Rather than relying on raw, potentially overwhelming databases full of indecipherable figures, you should aim to generate reports which showcase pertinent trends visually. This should let you analyze information more precisely and without needing to spend hours sifting through spreadsheets. In addition, data visualization has the benefit of making it straightforward to share your findings with others, whether or not they have a background in data science and analysis. A chart or graph can express everything you need to get across in a presentation about sales projections, site performance, and customer satisfaction, without needing lengthy verbal explanations as well. While the biggest scandals involving data loss and theft tend to hit the headlines whenever they involve major organizations and internationally recognized brands, that does not mean that smaller firms are immune from scrutiny in this respect.
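
As a minimal sketch of the idea (the figures and file name below are invented for illustration), a few lines of Python and matplotlib are enough to turn a raw column of monthly sales numbers into a chart you can drop straight into a presentation:

```python
# pip install matplotlib
import matplotlib.pyplot as plt

# Hypothetical monthly sales figures pulled from a spreadsheet export.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
sales = [12500, 14100, 13300, 15800, 15100, 17400]

fig, ax = plt.subplots()
ax.bar(months, sales)
ax.set_title("Monthly sales")
ax.set_ylabel("Revenue (USD)")
fig.tight_layout()
fig.savefig("monthly_sales.png")  # ready to share with non-specialists
```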


Metasploit — A Walkthrough Of The Powerful Exploitation Framework

If you hack someone without permission, there is a high chance that you will end up in jail. So if you are planning to learn hacking with evil intentions, I am not responsible for any damage you cause. All my articles are purely educational. So, if hacking is bad, why learn it in the first place? Every device on the internet is vulnerable by default unless someone secures it. It's the job of the penetration tester to think like a hacker and attack their organization’s systems. The penetration tester then informs the organization about the vulnerabilities and advises on patching them. Penetration testing is one of the highest-paid jobs in the industry. There is always a shortage of pen-testers since the number of devices on the internet is growing exponentially. I recently wrote an article on the top ten tools you should know as a cybersecurity engineer. If you are interested in learning more about cybersecurity, check out the article here. Right. Enough pep talk. Let’s look at one of the coolest pen-testing tools in the market — Metasploit. ... Metasploit is an open-source framework written in Ruby. It is written to be an extensible framework, so that if you want to build custom features using Ruby, you can easily do that via plugins.


IoT in Manufacturing: The Success Story Nobody's Talking About

Efficient manufacturing processes rely almost entirely on predictability. Factory operators need to know how long each step in a process takes, what resources are needed, and how long the process can operate continuously before needing breaks for maintenance and other periodic tasks. That overarching need for predictability makes it difficult for operators to know how the addition of new equipment might impact output. It also makes them hesitant to make changes to existing equipment, even if they’re all but certain that the changes would be an improvement. That brings us to another vital and emerging use of IoT technology in manufacturing. Factory operators are using the myriad data streaming from their connected devices to make precise computer models of their industrial equipment. These digital twins, as they’re known, allow operators to test any proposed equipment tweaks or replacements to see the exact effect they’ll have on the output. This helps them to make seamless upgrades and changes to their processes without fear of upsetting the delicate balance that ensures predictability. If the question is whether IoT is living up to its promise and proving useful in manufacturing – the answer is a resounding yes.


Digital Transformation Can Be Risky. Here’s What You Need To Know

The business mantra “culture eats strategy for breakfast” applies differently when you’re talking about digital transformation, said Pam Hrubey, managing director in consulting services at Crowe. For example, an American-headquartered durable goods company acquired businesses across the globe. The company needed to upgrade equipment and streamline IT processes, but it chose to begin the transformation by attempting to align cultures between the parent company and the businesses abroad. Its initial process led to discontent among international workers who ended up feeling like outsiders because they were not made aware that the goal was to sync technology and processes. “To transform a business practice or to change a business model, you have to have a robust plan,” Hrubey said. “When you start with culture you often confuse people if you don’t have a plan in place, if people don’t understand what change is planned or why a change is necessary.” Companies also need to understand that a transformation affects the entire organization and might include stakeholders across departments.  “So many different people in the company need to come together to do it right,” said Czerwinski.


Data Protection Techniques Needed to Guarantee Privacy

Traditionally, a risk hierarchy existed between these two types of attributes. Direct identifiers were perceived as more “sensitive” than quasi-identifiers. In many data releases, only the former attributes were subject to some privacy protection mechanism, while the latter were released in the clear. Such releases were often followed by prompt re-identification of the supposedly ‘protected’ subjects. It soon became apparent that quasi-identifiers could be just as ‘sensitive’ as direct identifiers. With the GDPR, this notion has finally made it into law: both types of attributes are put on the same level; identifier and quasi-identifier attributes are personal data and present an equally important privacy breach risk. Nowadays, data protection laws strictly regulate personal data processing. This makes a strong case for implementing privacy protection techniques. Indeed, failure to comply exposes companies to severe penalties. Besides, implementing proper privacy protections can increase customer trust. In a world plagued by data breaches and privacy violations, people are increasingly concerned about what happens to their data. And finally, data breaches targeting personal data are costing companies money. Personal data remains the most expensive item to lose in a breach.
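
To make the quasi-identifier point concrete, here is a minimal, hypothetical sketch of generalization, one common protection technique: exact ages become ten-year bands and postal codes keep only a prefix. A real release would also need to check properties such as k-anonymity across the whole dataset, not just transform rows one at a time.

```python
def generalize(record):
    """Coarsen quasi-identifiers so individual records are harder to re-identify.

    Assumed record layout: {"age": int, "zip": str, "diagnosis": str}.
    """
    out = dict(record)
    low = (record["age"] // 10) * 10
    out["age"] = f"{low}-{low + 9}"        # exact age -> 10-year band
    out["zip"] = record["zip"][:3] + "**"  # full postal code -> prefix only
    return out

print(generalize({"age": 37, "zip": "90210", "diagnosis": "flu"}))
# {'age': '30-39', 'zip': '902**', 'diagnosis': 'flu'}
```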


How AI Is Used in Data Center Physical Security Today

"There is a critical need to make full use of the massive amounts of data being generated by video surveillance cameras and AI-based solutions are the only practical answer," Memoori managing director James McHale said in a recent report. Video surveillance cameras generate a massive amount of data, McHale told DCK, and AI is the only practical way to process it all. AI systems can also be used to analyze thermal images. "Thermal cameras have been a significant growth area this year as a direct consequence of the COVID-19 pandemic," he told us. Today, many thermal cameras are just thermal information, but customers are increasingly looking for systems with cameras that can collect both thermal and traditional images and apply neural network algorithms for processing them. But there's a general lack of understanding about how to use this technology appropriately for pandemic controls, he added. Plus, the pandemic is negatively affecting some sectors of the economy, impacting spending and changing the way that companies buy technology. "Customers will be demanding more value from their investments and will be less willing to commit to upfront capital expenditure," he said.


QR Codes: A Sneaky Security Threat

Hacking an actual QR code would require some serious skills to change around the pixelated dots in the code’s matrix. Hackers have figured out a far easier method instead. This involves embedding malicious software in QR codes (which can be generated by free tools widely available on the internet). To an average user, these codes all look the same, but a malicious QR code can direct a user to a fake website. It can also capture personal data or install malicious software on a smartphone that initiates actions like this: Add a contact listing: Hackers can add a new contact listing on the user’s phone and use it to launch a spear phishing or other personalized attack; Initiate a phone call: By triggering a call to the scammer, this type of exploit can expose the phone number to a bad actor; Text someone: In addition to sending a text message to a malicious recipient, a user’s contacts could also receive a malicious text from a scammer; Write an email: Similar to a malicious text, a hacker can draft an email and populate the recipient and subject lines. Hackers could target the user’s work email if the device lacks mobile threat protection ...
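
The point about freely available generator tools is easy to demonstrate: with the open-source qrcode package for Python, anyone can encode an arbitrary URL (the address below is just a placeholder), and the resulting image is visually indistinguishable from a legitimate code.

```python
# pip install qrcode[pil]
import qrcode

# Any destination can be encoded; the person scanning the code has no way
# to tell a legitimate link from a phishing page just by looking at it.
img = qrcode.make("https://example.com/login")
img.save("promo_code.png")
```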


Exploiting enhanced data management to create value in the ‘new normal’

The pandemic has fundamentally changed the way people view, access and retrieve data. It has also put new burdens on already stretched IT departments and electronic delivery – now that the footprint of use has extended to people’s homes. Data management upgrades can deliver significant benefits: An investment in advanced data management services offers the opportunity to automate and enhance process and workflow efficiency, eliminating errors and freeing up staff to focus on creating value elsewhere; and Machine Learning technologies offer new opportunities to make better use of your data – to implement data copy management now that digital archives have become even more important, apply proper retention strategies, as well as unearth new revenue streams and cost saving opportunities. ... These days, virtually every human on the planet is consuming data, and the pandemic has made consumption grow even faster. Every meme or news story shared and every meeting recorded needs to be stored somewhere. And the larger the army of remote workers conducting business from their home offices, the greater the data storage capacity every company will require.



Quote for the day:

"However beautiful the strategy, you should occasionally look at the results." -- Winston Churchill

Daily Tech Digest - October 02, 2020

Time to reset long-held habits for a new reality

With an extended crisis a real possibility, new habits must be adopted and embraced for the business to adapt, recover and operate successfully in the long-term. It’s important for CIOs to take the time to understand these habits, how they have formed and if they are here to stay. One of the more obvious habit changes we’ve all experienced is the shift from physical meetings, where cases were presented and decisions made in person, to virtual conferences. This has made people feel more exposed in decision making as the human interaction of reading body language has been lost. However, people have unknowingly started using data more and have shifted to making more data driven decisions. If new and initial habits are here to stay for the long-term, CIOs must embed them into the new DNA of the business. If they aren’t, however, it’s crucial to curb and manage these new habits before they become automatically ingrained and costly to reverse. This happened to a CIO I recently spoke with, who made a massive technology investment, changed vendors and even shortened office leases in the rush to shift their organisation to a remote working model. 


Getting Serious About Data and Data Science

The obvious approach to addressing these mistakes is to identify wasted resources and reallocate them to more productive uses of data. This is no small task. While there may be budget items and people assigned to support analytics, AI, architecture, monetization, and so on, there are no budgets and people assigned to waste time and money on bad data. Rather, this is hidden away in day-in, day-out work — the salesperson who corrects errors in data received from marketing, the data scientist who spends 80% of his or her time wrangling data, the finance team that spends three-quarters of its time reconciling reports, the decision maker who doesn’t believe the numbers and instructs his or her staff to validate them, and so forth. Indeed, almost all work is plagued by bad data. The secret to wasting less time and money involves changing one’s approach from the current “buyer/user beware” mentality, where everyone is left on their own to deal with bad data, to creating data correctly — at the source. This works because finding and eliminating a single root cause can prevent thousands of future errors and eliminate the need to correct them downstream. This saves time and money — lots of it! The cost of poor data is on the order of 20% of revenue, and much of that expense can be eliminated permanently.
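
One way to read "creating data correctly at the source" is as a validation gate at the point of entry, so a whole class of errors never reaches the downstream teams described above. A minimal, hypothetical sketch (the field names and rules are invented):

```python
def validate_order(order):
    """Reject bad records where they are created, not where they are consumed."""
    errors = []
    if not order.get("customer_id"):
        errors.append("missing customer_id")
    if order.get("amount", 0) <= 0:
        errors.append("amount must be positive")
    if order.get("currency") not in {"USD", "EUR", "GBP"}:
        errors.append("unknown currency")
    return errors

order = {"customer_id": "C-1001", "amount": -250.0, "currency": "USD"}
problems = validate_order(order)
if problems:
    raise ValueError(f"rejected at source: {problems}")
```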


Most Data Science Projects Fail, But Yours Doesn’t Have To

Through data science automation, companies are not only able to fail faster (which is a good thing in the case of data science), but to improve their transparency efforts, deliver minimum value pipelines (MVPs), and continuously improve through iteration. Why is failing fast a positive? While perhaps counterintuitive, failing fast can provide a significant benefit. Data science automation allows technical and business teams to test hypotheses and carry out the entire data science workflow in days. Traditionally, this process is quite lengthy — typically taking months — and is extremely costly. Automation allows failing hypotheses to be tested and eliminated faster. Rapid failure of poor projects provides savings both financially and in increased productivity. This rapid try-fail-repeat process also allows businesses to discover useful hypotheses in a more timely manner. Why is white-box modeling important? White-box models (WBMs) provide clear explanations of how they behave, how they produce predictions, and what variables influenced the model. WBMs are preferred in many enterprise use cases because of their transparent ‘inner-working’ modeling process and easily interpretable behavior.
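
As an illustration of what "easily interpretable behavior" can look like, a shallow decision tree exposes its complete rule set. The sketch below uses scikit-learn and a toy dataset, not anything from the article:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# Every prediction traces back to explicit, human-readable splits.
print(export_text(model, feature_names=list(data.feature_names)))
```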


Microsoft: Hacking Groups Shift to New Targets

Microsoft notes that, in the last two years, the company has sent out 13,000 notifications to customers who have been targeted by nation-states. The majority of these nation-state attacks originate in Russia, with Iran, China and North Korea also ranking high, according to Microsoft. The U.S. was the most frequent target of these nation-state campaigns, accounting for nearly 70% of the attacks Microsoft tracked, followed by the U.K., Canada, South Korea and Saudi Arabia. And while critical infrastructure remains a tempting target for sophisticated hacking groups backed by governments, Microsoft notes that organizations that are deemed noncritical are increasingly the focus of these campaigns. "In fact, 90% of our nation-state notifications in the past year have been to organizations that do not operate critical infrastructure," Tom Burt, corporate vice president of customer security and trust at Microsoft, writes in a blog post. "Common targets have included nongovernmental organizations, advocacy groups, human rights organizations and think tanks focused on public policy, international affairs or security. This trend may suggest nation-state actors have been targeting those involved in public policy and geopolitics, especially those who might help shape official government policies."


Why Perfect Technology Abstractions Are Sure To Fail

Everything’s an abstraction these days. How many “existential threats” are there? We need “universal” this and that, but let’s not forget that relativism – one of abstraction’s enforcers – is hovering around all the time making things better or worse, depending on the objective of the solution du jour. Take COVID-19, for example. Based upon the assumption that the US knows how to solve “enterprise” problems – the abstract principle at work – the US has done a great job. But relativism kills the abstraction: the US has roughly 4% of the world’s population and 25% of the world’s deaths. How many technology solutions sound good in the abstract, but are relatively ineffective? The Agile family is an abstract solution to an age-old problem: requirements management and the timely, cost-effective design and development of software applications. But the relative context is all-too-frequent failure. We’ve been wrestling with requirements validation for decades, which is why the field constantly invented methods, tools and techniques to manage requirements and develop applications, like rapid application development (RAD), rapid prototyping, the Unified Process (UP) and extreme programming (XP), to name a few.


.NET Framework Connection Pool Limits and the new Azure SDK for .NET

Connection pooling in the .NET Framework is controlled by the ServicePointManager class and the most important fact to remember is that the pool, by default, is limited to 2 connections to a particular endpoint (host+port pair) in non-web applications, and to unlimited connections per endpoint in ASP.NET applications that have autoConfig enabled (without autoConfig the limit is set to 10). After the maximum number of connections is reached, HTTP requests will be queued until one of the existing connections becomes available again. Imagine writing a console application that uploads files to Azure Blob Storage. To speed up the process you decided to upload using 20 parallel threads. The default connection pool limit means that even though you have 20 BlockBlobClient.UploadAsync calls running in parallel, only 2 of them would actually be uploading data and the rest would be stuck in the queue. The connection pool is centrally managed on .NET Framework. Every ServicePoint has one or more connection groups and the limit is applied to connections in a connection group.


Digital transformation: The difference between success and failure

Commenting on the survey, Ritam Gandhi, founder and director of Studio Graphene, said: "They say necessity is the mother of invention, and the pandemic is evidence of that. While COVID-19 has put unprecedented strain on businesses, it has also been key to fast-tracking digital innovation across the private sector. "The research shows that the crisis has prompted businesses to break down the cultural barriers which previously stood in the way of experimenting with new digital solutions. This accelerated digital transformation offers a positive outlook for the future -- armed with technology, businesses will now be much better-placed to adapt to any unforeseen challenges that may come their way." Digital transformation, whatever precise form it takes, is built on the internet and so, even in normal times, internet infrastructure needs to be robust. In abnormal times such as the current pandemic, with widespread remote working and increased reliance on online services generally, a resilient internet is vital. So how did it hold up in the first half of 2020?


From Cloud to Cloudlets: A New Approach to Data Processing?

Though the term “cloudlet” is still relatively new (and relatively obscure), the central concept behind it is not. Even from the earliest days of cloud computing, it was recognized that sending large amounts of data to the cloud to be processed raises bandwidth issues. Over much of the past decade, this issue has been masked by the relatively small amounts of data that devices have shared with the cloud. Now, however, the limitations of the standard cloud model are becoming all too clear. There is a growing consensus that sending the growing volume of end-device data to the cloud for processing is too resource-intensive, time-consuming, and inefficient for large, monolithic clouds to handle. Instead, say some analysts, these data are better processed locally. This processing will either need to take place in the device that is generating these data, or in a semi-local cloud that is interstitial between the device and an organization's central cloud storage. This is what is meant by a “cloudlet”: a three-tier arrangement of intelligent device, cloudlet, and cloud.


Align Your Data Architecture with the Strategic Plan

Data collected today impacts business direction and growth for tomorrow. The benefits to having and using data that align with strategic goals include the ability to make evidence-based decisions, which can provide insights on how to reduce costs and increase efficiency of other resource utilization. Data are only valuable when they correlate to a company’s working goals. That means available data should assist in making the most important decisions at the present time. Data-based decision-making also coincides with lower overall costs. Examples of data that should be considered in any data set include digital data, such as web traffic, customer relationship management (CRM) data, email marketing data, customer service data, and third-party data. ... For some data sets, there may not be a need (and therefore the associated costs) for big data processing. Collecting all data that exists, just because it is available, does not guarantee inherent value to the company. Furthermore, data from multiple sources may not be structured and may require heavy lifting on the processing side. Secondly, clearly defined data points, such as demographics, financial background and market trends, will add varying value to any organization and predict the volume of data and processing needed for meaningful optimization.


Information Quality Characteristics

A personal experience involved the development of an initial data warehouse for global financial information. The initial effort was to build a new source of global information that would be more available and would allow senior management to monitor the current month’s progress toward budget goals for gross revenue and other profit and loss (P&L) items. The effort was to build the information from the source systems that feed the process used to develop the P&L statements. To deliver information that would be believable to the senior executives, a stated goal was to match the published P&L information. After a great deal of effort, the initial goal was changed to deliver the capability for gross revenue. This change was necessitated because there was no consistent source data for the other P&L items. Even the new goal proved elusive as the definition for gross revenue varied among the over 75 corporate subsidiaries. Initial attempts to aggregate sales for a subsidiary that matched reported amounts proved to be extremely challenging. The team had to develop a different process to aggregate sales for each subsidiary. Unfortunately, that process was not always successful in matching the published revenue amounts.



Quote for the day:

"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik

Daily Tech Digest - October 01, 2020

Levelling the playing field: 3 tips for women on breaking into tech

Do you worry over work decisions? Do you negatively compare your work to others? Chances are you’ve experienced imposter syndrome. And you’re far from alone — 90% of women in the UK experience it too. As Kim Diep from Trainline mentioned at Code Fest: “No matter what level you are in, in your tech career, I think everyone has some moments of self-doubt where they feel like they’re not good enough.” When you feel insecure, it’s easy to bottle those feelings up and keep your head down. To combat this, step out of your comfort zone and face these insecurities head-on. Remember, you were hired because of skills, talent and experience — not by luck! You don’t have to dive straight into delivering your next company all-hands. However, trying something as simple as active participation in meetings can help boost confidence. ... Whether you’re looking to transition into a tech-based career or have worked in the industry for years, mentors are an invaluable source of wisdom, experience and relationships. Look to your managers for advice — that’s what they are there for. Join webinars or virtual events, ask questions and don’t be afraid to drop someone you admire a friendly LinkedIn note to see if they’d be up for sharing any tips.


Why Every DevOps Team Needs A FinOps Lead

FinOps is the operating model for the cloud. FinOps enables a shift — a combination of systems, best practices, and culture — to increase an organization’s ability to understand cloud costs and make tradeoffs. In the same way that DevOps revolutionized development by breaking down silos and increasing agility, FinOps increases the business value of cloud by bringing together technology, business, and finance professionals with a new set of processes. Simply put, FinOps applies the same principles of DevOps to financial and operational management of cloud assets and infrastructure. Ideally, this means managing those assets through code rather than human interventions. To do this effectively, a FinOps practitioner must understand the patterns of both customer usage and product requirements, and map those correctly to maximize value while continuing to optimize for customer experience. ... When we started our FinOps project, all we had to work with were flat data files that lacked key information. With these flat files, we had no easy means of attributing dollar values to specific projects or research deployments. Needless to say, this was a nightmare.
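
Much of "managing those assets through code" comes down to attributing every line of spend to an owner, which is exactly what flat files without project tags prevent. A small sketch with entirely hypothetical billing rows and tag names:

```python
from collections import defaultdict

# Hypothetical billing-export rows: (resource_id, project_tag, cost_usd)
rows = [
    ("i-0a1", "research-alpha", 412.50),
    ("i-0b2", "research-alpha", 127.10),
    ("i-0c3", "platform", 980.00),
    ("i-0d4", None, 55.25),  # untagged spend cannot be attributed to anyone
]

spend = defaultdict(float)
for _, project, cost in rows:
    spend[project or "unattributed"] += cost

for project, total in sorted(spend.items(), key=lambda kv: -kv[1]):
    print(f"{project:15s} ${total:,.2f}")
```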


Three Reasons AI-Powered Platforms Fail

First and foremost, businesses must have a clear idea of exactly what they want to replace with machines. If you shoot for the moon before understanding gravity, you're not going to get very far. When it comes to building AI-powered platforms, you have to build up to solving the big-picture problem by first automating lots of small functions and tasks. Often, businesses automate the wrong things and end up creating technology that is unable to deliver on its promise. Start by studying the industry to understand the most mundane, time-consuming, human-intensive or manual processes of a task or function; focus on areas like repetitive tasks, data entry, common requests, etc. This is where your automation work should begin. It is paramount that the foundational elements of an AI-powered platform are consistently operating with 100% accuracy before moving on to building the next layer of automation. ... It's a given you need to hire strong data scientists and technologists experienced in AI, machine learning and natural language processing, and many businesses are following this protocol: Job postings for AI-related roles grew 14% year over year prior to the Covid-19 outbreak in early March 2020.


Rethinking risk and compliance for the Age of AI

At its core, risk management refers to a company’s ability to identify, monitor and mitigate potential risks, while compliance processes are meant to ensure that it operates within legal, internal and ethical boundaries. These are information-intensive activities – they require collecting, recording and especially processing a significant amount of data and as such are particularly suited for deep learning, the dominant paradigm in AI. Indeed, this statistical technique for classifying patterns – using neural networks with multiple layers – can be effectively leveraged for improving analytical capabilities in risk management and compliance. ... early experience shows that AI can create new types of risks for businesses. In hiring and credit, AI may amplify historical bias against female and minority background applicants, while in healthcare it may lead to opaque decisions because of its black box problem, to name just a few. These risks are amplified by the inherent complexity of deep learning models, which may contain hundreds of millions of parameters. This encourages companies to procure third-party vendors’ solutions whose inner workings they know little about.


An introduction to web application firewalls for Linux sysadmins

Much like "normal" firewalls, a WAF is expected to block certain types of traffic. To do this, you have to provide the WAF with a list of what to block. As a result, early WAF products are very similar to other products such as anti-virus software, IDS/IPS products, and others. This is what is known as signature-based detection. Signatures typically identify a specific characteristic of an HTTP packet that you want to allow or deny. ... Signatures work pretty well but require a lot of maintenance to ensure that false positives are kept to a minimum. Additionally, writing signatures is often more of an art form rather than a straightforward programming task. And signature writing can be quite complicated as well. You're often trying to match a general attack pattern without also matching legitimate traffic. To be blunt, this can be pretty nerve-racking. ... In the brave new world of dynamic rulesets, WAFs use more intelligent approaches to identifying good and bad traffic. One of the "easier" methods employed is to put the WAF in "learning" mode so it can monitor the traffic flowing to and from the protected web server. The objective here is to "train" the WAF to identify what good traffic looks like. 
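
A signature is, at its simplest, a pattern matched against parts of an HTTP request. The toy sketch below uses deliberately simplistic patterns (nothing like a production ruleset) to show both the mechanism and why false positives are such a headache: patterns broad enough to catch attack variants will eventually match innocent traffic.

```python
import re

# Illustrative signatures only; real WAF rulesets are far larger and
# carefully tuned to balance detection against false positives.
SIGNATURES = {
    "xss-script-tag": re.compile(r"<\s*script\b", re.IGNORECASE),
    "sqli-union-select": re.compile(r"\bunion\s+select\b", re.IGNORECASE),
}

def inspect(request_body: str):
    """Return the names of any signatures the request body matches."""
    return [name for name, pat in SIGNATURES.items() if pat.search(request_body)]

print(inspect("q=<script>alert(1)</script>"))               # ['xss-script-tag']
print(inspect("best union select picks for fantasy week"))  # false positive!
```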


Cryptojacking: The Unseen Threat

The reason cryptojacking has become more prolific is threefold: It doesn't require elevated permissions, it is platform agnostic, and it rarely sets off antivirus triggers. In addition, the code is often small enough to insert surreptitiously into open source libraries and dependencies that other platforms rely on. It can also be configured to throttle based on the device, as well as use a flavor of encrypted DNS, in order not to arouse suspicions. Cryptojacking can also be built for almost any context and in various languages such as JavaScript, Go, Ruby, Shell, Python, PowerShell, etc. As long as the malware can run local commands, it can utilize CPU processing power and start mining cryptocurrency. In addition to entire systems, cryptominers can thrive in small workhorse environments, such as Docker containers, Kubernetes clusters, and mobile devices, or leverage misconfigured cloud instances and overpermissioned accounts. The possibilities are endless. ... In addition to the huge number of targets, corporate data breaches are heavily underreported because laws vary by jurisdiction on when a company is required to report a breach.
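
Because miners burn CPU, one crude detection heuristic is to look for processes that hold high CPU usage over several samples. The sketch below uses the psutil package with arbitrary thresholds; as noted above, a miner that throttles itself will slip under exactly this kind of check, so treat it as a starting point rather than a defense.

```python
# pip install psutil
import time
from collections import Counter

import psutil

SAMPLES, THRESHOLD = 5, 80.0  # arbitrary values, for illustration only
hits = Counter()

for _ in range(SAMPLES):
    for proc in psutil.process_iter(["pid", "name", "cpu_percent"]):
        if (proc.info["cpu_percent"] or 0.0) > THRESHOLD:
            hits[(proc.info["pid"], proc.info["name"])] += 1
    time.sleep(1)

for (pid, name), count in hits.items():
    if count == SAMPLES:
        print(f"{name} (pid {pid}) stayed above {THRESHOLD}% CPU in every sample")
```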


Speeding up HTTPS and HTTP/3 negotiation with... DNS

The fundamental problem comes from the fact that negotiation of HTTP-related parameters (such as whether HTTPS or HTTP/3 can be used) is done through HTTP itself (either via a redirect, HSTS and/or Alt-Svc headers). This leads to a chicken-and-egg problem where the client needs to use the most basic HTTP configuration that has the best chance of succeeding for the initial request. In most cases this means using plaintext HTTP/1.1. Only after it learns of those parameters can it change its configuration for the following requests. But before the browser can even attempt to connect to the website, it first needs to resolve the website’s domain to an IP address via DNS. This presents an opportunity: what if additional information required to establish a connection could be provided, in addition to IP addresses, with DNS? That’s what we’re excited to be announcing today: Cloudflare has rolled out initial support for HTTPS records to our edge network. Cloudflare’s DNS servers will now automatically generate HTTPS records on the fly to advertise whether a particular zone supports HTTP/3 and/or HTTP/2, based on whether those features are enabled on the zone. The new proposal, currently under discussion at the Internet Engineering Task Force (IETF), defines a family of DNS resource record types (“SVCB”) that can be used to negotiate parameters for a variety of application protocols.
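
For readers who want to see one of these records, the sketch below queries the HTTPS resource record with dnspython. It assumes a dnspython release that already understands the new HTTPS/SVCB types, and the domain is simply an example of a Cloudflare-served zone.

```python
# pip install dnspython  (needs a version with HTTPS/SVCB record support)
import dns.resolver

answers = dns.resolver.resolve("cloudflare.com", "HTTPS")
for rr in answers:
    # A typical answer advertises the ALPN protocols the zone supports
    # (for example h3 and h2), letting clients skip the HTTP-based negotiation.
    print(rr)
```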


Microsoft Issues Updated Patching Directions for 'Zerologon'

Microsoft issued a four-step plan to protect a user's environment and prevent outages: Update domain controllers with a patch released Aug. 11 or later; Find devices that are making vulnerable connections by monitoring event logs; Address noncompliant devices making vulnerable connections; and Enable enforcement mode to address CVE-2020-1472 in your environment. Microsoft issued the first phase of the patch on Aug. 11 to partially mitigate the vulnerability. It plans to issue a second patch Feb. 9, 2021, which will handle the enforcement phase of the update. "The [domain controllers] will now be in enforcement mode regardless of the enforcement mode registry key," Microsoft says. "This requires all Windows and non-Windows devices to use secure [Remote Procedure Call] with Netlogon secure channel or explicitly allow the account by adding an exception for the non-compliant device." ... "An elevation of privilege vulnerability exists when an attacker establishes a vulnerable Netlogon secure channel connection to a domain controller, using the Netlogon Remote Protocol (MS-NRPC). An attacker who successfully exploited the vulnerability could run a specially crafted application on a device on the network," Microsoft says.


War of the AI algorithms: the next evolution of cyber attacks

Over the years, hackers have consistently reinforced the old adage: ‘where there’s a will there’s a way’. Defenders have inputted new rules into their firewalls or developed new detection signatures based on attacks they have seen, and hackers have constantly reoriented their attack methodologies to evade them, leaving organisations playing catch-up and scrambling for a plan B in the face of an attack. A paradigm shift came in 2017 when the destructive ransomware ‘worms’ WannaCry and NotPetya caught the security world unaware, bypassing traditional tools like firewalls to cripple thousands of organisations across 150 countries, including a number of NHS agencies. A critical response to the onset of increasingly sophisticated and novel attacks has been AI-powered defences, a development driven by the philosophy that information about yesterday’s attacks cannot predict tomorrow’s threats. In recent years, thousands of organisations have embraced AI to understand what is ‘normal’ for their digital environment and identify behaviour that is anomalous and potentially threatening. Many have even entrusted machine algorithms to autonomously interrupt fast-moving attacks. This active, defensive use of AI has changed the role of security teams fundamentally, freeing up humans to focus on higher level tasks.


The biggest cyber threats organizations deal with today

“Ransomware criminals are intimately familiar with systems management concepts and the struggles IT departments face. Attack patterns demonstrate that cybercriminals know when there will be change freezes, such as holidays, that will impact an organization’s ability to make changes (such as patching) to harden their networks,” Microsoft explained. “They’re aware of when there are business needs that will make businesses more willing to pay ransoms than take downtime, such as during billing cycles in the health, finance, and legal industries. Targeting networks where critical work was needed during the COVID-19 pandemic, and also specifically attacking remote access devices during a time when unprecedented numbers of people were working remotely, are examples of this level of knowledge.” Some of them have even shortened their in-network dwell time before deploying the ransomware, going from initial entry to ransoming the entire network in less than 45 minutes. Gerrit Lansing, Field CTO, Stealthbits, commented that the speed at which a targeted ransomware attack can happen is really determined by one thing: how quickly an adversary can compromise administrative privileges in Microsoft Active Directory.



Quote for the day:

"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford

Daily Tech Digest - September 30, 2020

Zerologon Attacks Against Microsoft DCs Snowball in a Week

“This flaw allows attackers to impersonate any computer, including the domain controller itself and gain access to domain admin credentials,” added Cisco Talos, in a writeup on Monday. “The vulnerability stems from a flaw in a cryptographic authentication scheme used by the Netlogon Remote Protocol which — among other things — can be used to update computer passwords by forging an authentication token for specific Netlogon functionality.” ... Microsoft’s patch process for Zerologon is a phased, two-part rollout. The initial patch for the vulnerability was issued as part of the computing giant’s August 11 Patch Tuesday security updates, which addresses the security issue in Active Directory domains and trusts, as well as Windows devices. However, to fully mitigate the security issue for third-party devices, users will need to not only update their domain controllers, but also enable “enforcement mode.” They should also monitor event logs to find out which devices are making vulnerable connections and address non-compliant devices, according to Microsoft. “Starting February 2021, enforcement mode will be enabled on all Windows Domain Controllers and will block vulnerable connections from non-compliant devices,” it said.


Programming languages: Java founder James Gosling reveals more on Java and Android

Object-oriented programming was also an important concept for Java, according to Gosling. "One of the things you get out of object-oriented programming is a strict methodology about what are the interfaces between things and being really clear about how parts relate to each other." This helps address situations where a developer tries to "sneak around the side" and breaks code for another user. He admits he upset some people by preventing developers from using backdoors. It was a "social engineering" thing, but he says people discovered that restriction made a difference when building large, complex pieces of software with lots of contributors across multiple organizations. It gave these teams clarity about how that stuff gets structured and "saves your life". He offered a brief criticism of former Android boss Andy Rubin's handling of Java in the development of Android. Gosling in 2011 had a brief stint at Google following Oracle's acquisition of Sun. Oracle's lawsuit against Google over its use of Java APIs is still not fully settled after a decade of court hearings. "I'm happy that [Google] did it," Gosling said, referring to its use of Java in Android. "Java had been running on cell phones for quite a few years and it worked really, really well. ..."


Prepare Your Infrastructure and Organization for DevOps With Infrastructure-as-Code

To understand infrastructure as code better, let’s look at what happened when cars became ubiquitous here in the US. Before cars, the railroad system ruled it all. Trains running on extremely well-defined, regimented schedules carried passengers and goods, connected people and places using the mesh of railroads that crisscrossed the country. Cars democratized transport, allowing us to use our own vehicles on schedules convenient to us. To support this, a rich ecosystem of gas stations, coffee shops, restaurants and rest areas cropped up everywhere as a support system. Most importantly, the investment in the US road system paved the way (pun intended) for a network of freeways, highways and city roads that now carry a staggering 4 trillion passenger-miles of traffic each year, compared to a meager 37 billion passenger-miles carried by railroads. We are in the midst of a similar revolution in application architectures. Applications are evolving from the railroad mode (monolithic architectures deployed and managed in centralized, regimented ways, following a waterfall model of project management), to the road system mode (micro-services architectures with highly interconnected components, deployed and managed by small teams following DevOps practices).
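
To ground the analogy: with infrastructure as code, the "roads" themselves are declared in version-controlled source and applied by a tool. A minimal, hypothetical sketch using Pulumi's Python SDK (the resource name is made up, and a real project would also need Pulumi configuration and cloud credentials):

```python
# pip install pulumi pulumi-aws
import pulumi
from pulumi_aws import s3

# Declaring the bucket in code means the change is reviewed, versioned and
# reproducible, rather than clicked together by hand in a web console.
logs_bucket = s3.Bucket("app-logs", acl="private")

pulumi.export("bucket_name", logs_bucket.id)
```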


The lifecycle of a eureka moment in cybersecurity

The cybersecurity industry is saturated with features passing themselves off as platforms. While the accumulated value of a solution’s features may be high, its core value must resonate with customers above all else. More pitches than I wish to count have left me scratching my head over a proposed solution’s ultimate purpose. Product pitches must lead with and focus on the solution’s core value proposition, and this proposition must be able to hold its own and sell itself. Consider a browser security plugin with extensive features that include XSS mitigation, malicious website blocking, employee activity logging and download inspections. This product proposition may be built on many nice-to-have features, but, without a strong core feature, it doesn’t add up to a strong product that customers will be willing to buy. Add-on features, should they need to be discussed, ought to be mentioned as secondary or additional points of value. Solutions must be scalable in order to reach as many customers as possible and avoid price hikes with reduced margins. Moreover, it’s critical to factor in the maintenance cost and “tech debt” of solutions that are environment-dependent on account of integrations with other tools or difficult deployments.


Why data security has never been more important for healthcare organisations

The first step is to adopt a ‘zero-trust approach’, meaning that every single access request by a user should require their identity to be appropriately verified. Of course, to avoid users having to enter their username/password over and over again, this approach should be risk-weighted so that less important access requires less interventionist verification, for instance, using contextual signals like the location of the user or device characteristics. There is no longer a trade-off to be made between security and convenience – access to data and systems can be easy, simple and safe. This approach allows an organisation to always answer yes to: “Am I appropriately sure this person is who they say they are?” It is a philosophy which should be applied to internal and external users: a crucial fact given healthcare data’s risk profile. The second step for healthcare organisations is to consider eliminating the standard username/password authentication method and embrace modern, intelligent authentication. This delivers a combination of real-time context-based authentication and authorisation that seamlessly provide the appropriate level of friction based on the actions being taken by a service user.


Do You Need a Chief Data Scientist?

The specific role that a Chief Data Scientist plays depends on how the organization is applying data science, and where it falls on the build-versus-buy spectrum. Here, it’s important to differentiate between an organization that is creating a for-sale product or service that includes machine learning as a core feature and one that is looking to use machine learning or data science capabilities for a product or service that’s used internally. Anodot, which creates and sells software that uses machine learning models to analyze time-series data, is a good example of an organization building an external product with machine learning as a core feature. Cohen leads a team of data scientists in building all of the machine learning capabilities that are available in the Anodot product. On the other hand, there are organizations that are using machine learning capabilities to create a product that is used internally, or for data science services. In these types of organizations, the Chief Data Scientist, with her deep experience, is best equipped to answer these tough questions, Cohen says. “I think companies should build it themselves if they’re going to sell it, or if it’s a mission critical application,” Cohen says. “But it has to be mission critical. Otherwise, why bother?”


Should you upgrade tape drives to the latest standard?

There are three reasons that could justify upgrading your tape drive. The first would be if you have a task that uses large amounts of tape on a regular basis and upgrading to a faster tape drive would increase the speed of that process. For example, it might make sense for a movie producer using cameras that produce petabytes of data a day who wants to create multiple copies and send them to several post-production companies. Copying 1PB to tape takes 22 hours at LTO-7 speeds, and LTO-9 would roughly halve that time. (The three companies behind the standard have not advertised the speed part of the spec yet, but it should be somewhere around 1200-1400 MB/s.) If the difference between 22 and 11 hours changes your business, then by all means upgrade to LTO-9. Second, LTO-9 offers a 50% capacity increase over LTO-8 and a 200% capacity increase over LTO-7. If you are currently paying by the tape for shipping your tapes or storing them in a vault, a financial argument could be made for upgrading to LTO-9 and copying all of your existing tapes to newer, bigger tapes. You might be able to significantly reduce those monthly costs if you’re using LTO-8 tapes and reduce them even more if you’re using LTO-7.
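
For context on those figures, moving 1PB in 22 hours implies an aggregate rate far beyond a single drive, so the scenario assumes a library with many drives writing in parallel. A quick back-of-the-envelope check (decimal units, load and rewind overheads ignored):

```python
def required_throughput_gb_s(capacity_pb, hours):
    """Aggregate tape throughput needed to move capacity_pb petabytes in `hours` hours."""
    return capacity_pb * 1e6 / (hours * 3600)  # GB/s

print(f"{required_throughput_gb_s(1, 22):.1f} GB/s")  # ~12.6 GB/s for 1 PB in 22 hours
print(f"{required_throughput_gb_s(1, 11):.1f} GB/s")  # ~25.3 GB/s if LTO-9 halves the time
```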


Archive as a service: What you need to know

Before the advent of cloud service providers, magnetic tapes primarily stored archive data in environmentally clean and physically secure facilities, such as those still offered by companies like Iron Mountain. As time progressed, organizations also stored archived data on rotating hard drives, fiber optic storage and solid-state disks. Of great importance to IT managers is the cost for data storage, and the good news is that advances in storage technology -- especially as provided by cloud-based data archiving companies, as well as colocation-based archiving providers -- have helped reduce the cost for archival storage. ... Your organization should establish ground rules in its use of archive as a service for what gets stored, where storage occurs, how data is stored, the duration of storage and special data requirements such as deduplication and formatting. Perform the necessary due diligence to ensure that you can securely transmit your data to the archive location. Also, make sure the archiving provider can encrypt the data in transit and at rest, and ensure the storage location is fully secure and can minimize unauthorized access to archived data. You must carefully research key parameters -- data transmission media, data security capabilities, data integrity and data protection resources -- for all potential third-party vendors.


Three Steps To Manage Third-party Risk In Times Of Disruption

After a risk assessment has been carried out, organisations must ensure that a risk strategy is built into all service-level agreements and constantly monitor their third-party partners for new risks that may arise, including further down the supply chain. This includes monitoring the third-party’s performance metrics and internal control environment and collecting any relevant supporting documentation on an ongoing basis. In doing so, such information can inform risk strategy across the business and help companies identify issues before they arise. By monitoring these relationships on an ongoing basis, IT teams have wider visibility into the risk landscape and can minimise the likelihood of issues down the line. ... If a large number of third parties are used by the company, it can be hard for IT teams to keep track. Third-party relationships are often managed in silos across different areas of the business, each of which may have a unique way of identifying and managing them. This makes it increasingly difficult for management teams to get an accurate overview of third-party risk and performance across the business. 


Java is changing in a responsible manner

The world around us is changing. You know, the first thing that got me excited about Java was applets. We did not even know that Java would thrive on the server side; that came much later. But today we are in a very different world. Back then, we did not have big data, we didn’t have smart devices, we didn’t have functions as a service, and we didn’t have microservices. If Java didn’t adapt to the new world, it would have gone extinct. I started with Java fairly early on, and it’s absolutely phenomenal and refreshing to know that I am now programming with the next generation of programmers. The desires and needs and expectations of the next generation are not the same as those of my generation. Java has to cater to the next generation of programmers. This is a perfect storm for the language: On one hand, Java is popular today. On the other hand, Java must stay relevant to the changing business environment, changing technology, and changing user base. And we are going to make this possible. After 25 years, Java is not the same Java. It’s a different Java, and that’s what excites me about it.



Quote for the day:

"Enthusiasm is the greatest asset in the world. It beats money, power and influence." -- Henry Chester

Daily Tech Digest - September 29, 2020

The rise of remote work can be unexpectedly liberating

Employees could become increasingly mercenary, no longer swayed by the strong social bonds and physical-world perks of the office of the past. For their part, employers could increasingly view their staffs as little more than interchangeable work units. As a manager, no matter how objective I think I may be, I would probably find it easier to fire an employee with whom I had little personal connection. That difficult conversation would be reduced to a few minutes on a screen, with no chance of running into the person later in the coffee room. All of this may sound dismal, but this change in employee psychology and loyalty may come with an unexpected liberation, encouraging workers to look beyond the workplace to build friendships and identity. In our previous office lives, some of us had access to free food, coffee rooms or other on-site perks. We might have enjoyed them, but they also helped keep us in the office for long hours. Likewise, the presence of co-workers and bosses made us more compliant, less likely to take a proper lunch hour or make the effort to attend a child’s school event. With our offices gone, our days have now opened up. Why not make that doctor’s appointment for 4 p.m.? Why not pick the kids up at day care rather than find a babysitter?


Hardware security: Emerging attacks and protection mechanisms

Every hardware device has firmware – a tempting attack vector for many hackers. And though the industry has been making advancements in firmware security solutions, many organizations are still challenged by it and don’t know how to adequately protect their systems and data, she says. She advises IT security specialists to be aware of firmware’s importance as an asset to their organization’s threat model, to make sure that the firmware on company devices is consistently updated, and to set up automated security validation tools that can scan for configuration anomalies within their platform and evaluate security-sensitive bits within their firmware. “Additionally, Confidential Computing has emerged as a key strategy for helping to secure data in use,” she noted. “It uses hardware memory protections to better isolate sensitive data payloads. This represents a fundamental shift in how computation is done at the hardware level and will change how vendors can structure their application programs.” Finally, the COVID-19 pandemic has somewhat disrupted the hardware supply chain and has brought to the fore another challenge.


Still not dead: The mainframe hangs on, sustained by Linux and hybrid cloud

Others say technologies such as machine learning and artificial intelligence will also drive future mainframe development. “Data insights help drive actionable and profitable results—but the pool of data is growing at astronomical rates. That’s where AI can make a difference, especially when it’s on a mainframe. Consider the amount of data that resides on a mainframe for an organization in the banking, manufacturing, healthcare, or insurance sectors. You’d never be able to make sense of it all without AI,” said Deloitte’s Cobb. As an example, Cobb said core banking operations can do more than simply execute large volumes of transactions. “Banks need deep insights about customer needs, preferences, and intentions to compete effectively, along with speed and agility in sharing and acting on those insights. That’s easier said than done when data is constantly changing. Now if you can analyze data directly on the mainframe, you can get near real-time insights and action. That makes the mainframe an important participant in the AI/ML revolution,” Cobb said. The mainframe environment isn’t without challenges going forward.


How AI can transform finance departments to help Covid-19 recovery

The modern world has made company spending less centralised than ever before, with employees spending money across so many expense categories and using more payment methods than ever before. This growth in the volume of financial data leads to an increase in the risk of fraud and noncompliance. This is a risk few businesses can take, especially when cash flow needs to be conserved. A study by the Association of Certified Fraud Examiners (ACFE) found that the average organisation loses 5% of its annual revenue to internal fraud. During an economic downturn, this is simply unsustainable. Much of this is accidental, with employees often mistakenly duplicating expense claims or invoices. Businesses are only able to audit around 10% of expense reports manually, so much potential fraud goes undetected. AI provides a solution to this problem, enabling the auditing of every single spend report. It can predict patterns and detect any anomalies that appear in financial data. Covid-19 has made it more important than ever that businesses are identifying any fraudulent activity and preventing it. Invoice fraud is one example that has seen an increase during the pandemic. 
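
As a small sketch of the anomaly-detection idea, an isolation forest from scikit-learn can flag expense claims that look unlike the rest. The amounts and the contamination rate below are invented for illustration; a real audit model would use far richer features than the claim value alone.

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical expense-claim amounts; the last two are suspiciously large.
amounts = np.array([[23.5], [41.0], [18.2], [37.9], [29.4], [33.0], [950.0], [1200.0]])

model = IsolationForest(contamination=0.25, random_state=0).fit(amounts)
flags = model.predict(amounts)  # -1 = anomaly, 1 = looks normal

for value, flag in zip(amounts.ravel(), flags):
    print(f"{value:8.2f}  {'REVIEW' if flag == -1 else 'ok'}")
```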


Universal Health Services' IT Network Crippled

According to a post on Reddit by an individual who claims to work at a UHS facility in the Southeastern U.S., on Sunday at approximately 2 a.m., systems in the facility's emergency department "just began shutting down." The individual says: "I was sitting at my computer charting when all of this started. It was surreal and definitely seemed to propagate over the network. All machines in my department are Dell Win10 boxes." Anti-virus programs were disabled by the attack, and hard drives "just lit up with activity," the individual writes. "After one minute or so of this, the computers logged out and shutdown. When you try to power back on the computers they automatically just shut down. We have no access to anything computer based including old labs, EKGs, or radiology studies. We have no access to our PACS radiology system." Media outlet Bleeping Computer reports that an UHS insider says that during the incident, files were being renamed to include the .ryk extension. This extension is used by the Ryuk ransomware. Likewise, citing "people familiar with the incident," the Wall Street Journal reports that the attack did indeed involve ransomware.


The Shared Irresponsibility Model in the Cloud Is Putting You at Risk

The Shared Responsibility Model is pretty well understood now to mean: "If you configure, architect, or code it, you own the responsibility for doing that properly." While the relationship between the customer and the cloud is well understood, our experience working with software teams indicates the organization and architectural security responsibilities within organizations are not. And that is where the Shared Irresponsibility Model comes into play. When something goes wrong in the cloud — some form of security issue or incident — corporate management inevitably will come looking for the most senior person in the IT organization to blame. The IT organization and development teams might not have gone line by line through the various cloud providers' Shared Responsibility Models to fully understand what is and isn't their responsibility. Developers are focused on developing and getting code running, typically with high rates of change. With the cloud, pushing code into production doesn't have many hurdles. The cloud provider is not responsible for an organization's own compliance, and, by default, it typically will not alert on misconfigurations that could introduce risk, either.
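
To make that last point concrete, here is a small, hedged example of the kind of misconfiguration check a cloud provider will not run for you by default: a boto3 script that flags S3 buckets with no public access block. It assumes AWS credentials are already configured and is only a sketch, not a complete audit.

```python
# Sketch: list S3 buckets that lack a public access block configuration.
# Assumes AWS credentials are available; illustrative, not an exhaustive audit.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)
        settings = config["PublicAccessBlockConfiguration"]
        if not all(settings.values()):
            print(f"{name}: public access block only partially enabled")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured")
        else:
            raise
```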


Identity theft explained: Why businesses make tempting targets

Identity theft is most often associated with the act of stealing an individual's identity. But as Mitt Romney once famously said, "corporations are people, my friend," and businesses have all the sorts of "personal" data — tax ID numbers and bank accounts, for instance — that individuals have, which can be stolen and abused. We're not talking about security breaches or employees misusing corporate assets here; we're talking about an identity thief pretending to be someone within a company who has the authority to make financial transactions, just like they might pretend to be another individual. In fact, a business may be an even more tempting target for an identity thief than an individual because businesses have high credit limits, substantial bank accounts, and make big payments to vendors on a regular basis. The consequences can be dire, particularly for small businesses where the founder's or owner's finances are deeply entangled with the company's. Before we move on, we should take note of a couple of ways that even the theft of individuals' identities can affect businesses. For instance, one of the most pernicious effects of identity theft is just how much time victims have to spend calling credit agencies and financial institutions to resolve the issue; a recent study found that victims can take up to 175 hours to set everything straight.


Using Nginx to Customize Control of Your Hosted App

Nginx is an open-source web server that is a world leader in load balancing and traffic proxying. It comes with a plethora of plugins and capabilities that can customize an application’s behavior using a lightweight and easy-to-understand package. According to Netcraft and W3Techs, Nginx serves approximately 31-36% of active websites, putting it neck and neck with Apache as the world’s preferred web server. This means that not only is it well-respected, trusted, performant enough for a large portion of production systems, and compatible with just about any architecture, but it also has a loyal following of engineers and developers supporting the project. These are key factors in considering the longevity of your application, how portable it can be, and where it can be hosted. Let's look at a situation where you might need Nginx. In our example, you've created an app and deployed it on a Platform as a Service (PaaS)—in our case, Heroku. With PaaS, your life is easier, as decisions about the infrastructure, monitoring, and supportability have already been made for you, guaranteeing a clean environment for you to run your applications with ease.


The future of retail isn’t what it used to be

Appointment-based shopping is one key area of immediate opportunity. Initially seen in luxury and higher-end stores, appointment-based shopping balances safety, capacity, and personalized service. It can also serve two needs at once. For example, Best Buy uses appointments for more guided shopping with an advisor. For clothing retailers, appointment-based shopping can help customers schedule dressing room visits with the specific items they want to try. With the right digital capabilities, consumers can shop online, select items in various sizes, and schedule a time and room to visit a retailer to experience a personalized trial and fitting. Making the in-store shopping experience better should include planograms and the ability to look up assortments and stock in a store. Assortment differences from store to store mean that shoppers may go into a store looking for a product that a particular location does not stock. Home Depot and Target both do well in indicating whether a product is in stock and where it’s located within the store. Contactless shopping is another area worth further focus. Self-checkout in retail has been available and increasing its footprint for some time.


Microsoft: Some ransomware attacks take less than 45 minutes

Per Microsoft, the most targeted accounts in BEC scams were those of C-suite executives and accounting and payroll employees. But Microsoft also says that phishing isn't the only way into these accounts. Hackers are also starting to adopt password reuse and password spray attacks against legacy email protocols such as IMAP and SMTP. These attacks have been particularly popular in recent months because they allow attackers to bypass multi-factor authentication (MFA) solutions, since logins via IMAP and SMTP do not support MFA. Furthermore, Microsoft says it's also seeing cybercrime groups that are increasingly abusing public cloud-based services to store artifacts used in their attacks, rather than using their own servers. Further, groups are also changing domains and servers much faster nowadays, primarily to avoid detection and remain under the radar. But, by far, the most disruptive cybercrime threat of the past year has been ransomware gangs. Microsoft said that ransomware infections had been the most common reason behind the company's incident response (IR) engagements from October 2019 through July 2020.



Quote for the day:

"Leadership is unlocking people's potential to become better." -- Bill Bradley

Daily Tech Digest - September 28, 2020

5 ways agile devops teams can support IT service desks

Devops teams should specifically tailor planning, release, and deployment communications or collaborations to their audiences. For service desk and customer support teams, communications should focus on how the release impacts end-users. Devops teams should also anticipate the impact of changes on end-users and educate support teams. When an application’s user experience or workflow changes significantly, bringing in support teams early to review, understand, and experience the changes themselves can help them update support processes. ... Let’s consider two scenarios. One devops team monitors their multicloud environments and knows when servers, storage, networks, and containers experience issues. They’ve centralized application logs but have not configured reports or alerts from them, nor have they set up any application monitors. More often than not, when an incident or issue impacts end-users, it’s the service desk and support teams who escalate the issue to IT ops, SREs (site reliability engineers), or the devops team. That’s not a good situation, but neither is the other extreme, when IT operational teams configure too many system and application alerts.
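
As a rough sketch of the middle ground between no alerts and too many, the snippet below raises a single alert when the error rate in a centralized application log crosses a threshold; the log format, file name, and threshold are assumptions for illustration.

```python
# Hypothetical sketch: alert once when errors spike in a centralized log,
# rather than alerting on every individual event.
from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 25                       # assumed tolerable errors per window

def monitor(lines):
    """Scan log lines and yield an alert whenever the error rate spikes."""
    recent_errors = deque()
    for line in lines:
        now = datetime.utcnow()
        if " ERROR " in line:
            recent_errors.append(now)
        while recent_errors and now - recent_errors[0] > WINDOW:
            recent_errors.popleft()
        if len(recent_errors) > THRESHOLD:
            yield f"ALERT: {len(recent_errors)} errors in the last 5 minutes"

if __name__ == "__main__":
    with open("app.log") as log:      # placeholder for a centralized log stream
        for alert in monitor(log):
            print(alert)
```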


Safeguarding Schools Against RDP-Based Ransomware

Most school districts now acknowledge that things will not be back to normal this fall, and they are planning hybrid learning solutions for the school year. Hackers are delighted with this development since distance learning is often implemented using Microsoft's Remote Desktop Protocol (RDP), one of the prime targets for cybercriminals aiming for quick gains. Their primary tactic: install ransomware that locks up data until ransoms are paid. Recently, in June 2020, the University of California San Francisco School of Medicine paid a ransom of over $1 million to regain access to important scientific data. While a K-12 school or school district may not have data worth millions, cybercriminals know that schools often lack the resources large corporations deploy to guard against cyberattacks, which makes them prime targets. One specific attack vector the FBI has warned about is Ryuk ransomware, which is deployed via RDP endpoints, specifically those used by students, parents, and teachers in the K-12 environment. Ryuk uses a sophisticated type of data encryption that targets backup files. Once an end user's machine has been infected, the ransomware can propagate to the school's servers, where it can cause havoc.
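
As a hedged illustration of the exposure described above (not advice from the article), a district's IT staff could check which hosts on their own network answer on RDP's default port 3389; the address range here is a placeholder, and scanning should only ever target networks you are authorized to test.

```python
# Illustrative sketch: find hosts in a placeholder subnet that accept
# connections on TCP 3389, the default RDP port. Scan only networks you own.
import socket

def rdp_open(host, timeout=1.0):
    """Return True if the host accepts a TCP connection on port 3389."""
    try:
        with socket.create_connection((host, 3389), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for last_octet in range(1, 255):
        host = f"10.0.0.{last_octet}"          # hypothetical school subnet
        if rdp_open(host):
            print(f"{host} exposes RDP on port 3389")
```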


Arm swimming in a sea of uncertainty that could sink its business model

"The risk with Arm going forward is Arm works because I can source Arm IP, and I know that Arm will not compete with me. Some of Arm's other customers might compete with me, but my supplier will not compete with me because they do not sell chips," he said. "We're moving to a scenario now where there's a potential that if I'm sourcing IP from a company that will compete with me for product -- the selling of chips -- that's obviously going to cause concern for quite a few companies that may also raise antitrust or anti-competitive issues in terms of closing the deal as well." And this is before the situation with Arm China enters the equation. Arm China is a joint venture -- the style of arrangement many western companies enter into to do business in the Middle Kingdom -- and in July, Arm sought to fire the CEO of that venture, Allen Wu, for running another company that invested in Chinese Arm customers on the side. That would normally be a pretty straight forward case of conflict of interest, except Wu has Arm China's registration documents and company seal and he has not given them up, Bloomberg reported in July. Arm China also posted a public letter signed by 176 of its employees imploring Beijing to protect it from the UK parent company.


Why You Should Stop Saving Photos From iMessage, WhatsApp And Android Messages

Check Point’s POC attack was that an image would be messaged to a victim over a popular platform (iMessage, Android Messages or WhatsApp), and the content of the image would tempt the victim to save the photo to their device. It’s easily done—most of us do it all the time, even if just to share the image on a different platform, rather than forward the message we have received. Check Point’s Ekram Ahmed told me that this should serve as a warning. “Think twice before you save photos onto your device,” he told me, “as they can be a Trojan horse for hackers to invade your phone. We demonstrated this with Instagram, but the vulnerability can likely be found in other applications.” That’s almost certainly the case—the issue was with the deployment of an open-source image parsing capability buried within the Instagram app. And that third-party software library is widely installed in countless other apps. ... The issue comes when you save that to the album on your phone’s internal storage or an external disk. We saw this last year, with WhatsApp and Telegram exposed to an Android vulnerability where images were saved to an external disk. That said, earlier this year, Google’s Project Zero team warned that the image handling by messengers themselves on iOS could be defeated when an unusual file type was handled.


Why Data Intensity Matters in Today’s World

Data intensity won’t happen overnight. It’s a journey that brings together the right technology, best practices, and infrastructure foundation. The first step is to start with proven available technologies. Open Source offerings may tempt us with the latest technical bells and whistles, but they aren’t always the solution that aligns best with our business objectives. One reason that IT projects fail so often is that people choose the wrong technology. As you evaluate the tooling you will use with your data, consider whether you need some of the scale and complexity that comes with these technologies. Not every company is a Facebook or a Google. Choose the technology that lines up best to your own use case and your platform, not merely the flavor of the month. Don’t be afraid to purchase the technology and tools you need, rather than build them yourself. Maximizing data literacy is another key step toward data intensity. It starts with establishing a common way to talk about data, using a baseline set of knowledge, such as SQL. Understanding the data is more important than understanding the technology behind it. Even the best solution won’t do you any good if you can’t bring it into production.


GCA releases new version of the GCA Cybersecurity Toolkit for SMBs

The GCA toolkit provides small businesses a way to address these risks with free tools and resources that they can implement themselves. For government and industry, the toolkit is a valuable resource that can be provided to help secure their supply chain and vendors. “Helping small businesses address cybersecurity challenges requires that we meet them where they are, with resources designed to match their resources and expertise. We worked with partners and stakeholders to develop the GCA Cybersecurity Toolkit for Small Business more than a year ago and since that time have evolved the toolkit to be even easier to use, either all at once or a step at a time,” said Philip Reitinger, GCA’s President and CEO. “This revision of the toolkit is a significant step forward on this front, and we are pleased to share it to further assist small businesses in reducing cyber risk.” Since its initial launch there have been more than 105,000 visits to the toolkit. Key to the success of the toolkit has been partnerships with organizations such as Mastercard, ICTswitzerland, and the Swiss Academy of Engineering Sciences (SATW), the latter two of which resulted in the German translation of the toolkit and make an important contribution to the implementation of the National strategy for Switzerland’s protection against cyber risks (NCS).


7 low-code platforms developers should know

Low-code platforms are far more open and extensible today, and most have APIs and other ways to extend and integrate with the platform. They provide different capabilities around the software development lifecycle from planning applications through deployment and monitoring, and many also interface with automated testing and devops platforms. Low-code platforms have different hosting options, including proprietary managed clouds, public cloud hosting options, and data center deployments. Some low-code platforms are code generators, while others generate models. Some are more SaaS-like and do not expose their configurations. Low-code platforms also serve different development paradigms. Some target developers and enable rapid development, integration, and automation. Others target both software development professionals and citizen developers with tools to collaborate and rapidly develop applications.  I selected the seven platforms profiled here because many have been delivering low-code solutions for over a decade, growing their customer bases, adding capabilities, and offering expanded integration, hosting, and extensibility options. Many are featured in Forrester, Gartner, and other analyst reports on low-code platforms for developers and citizen development.


9 Tips to Prepare for the Future of Cloud & Network Security

Discussions of cloud security are often complicated because different people have different ideas of what constitutes cloud computing and what their personal roles and interests are, Riley said. It's incumbent on organizations to focus their attention on aspects of cloud security they can control: identity permissions, data configuration, and sometimes application code. Most cloud security issues that organizations face fall under these three areas. "The volume of cloud usage is increasing, the sophistication is increasing, the complexity is increasing, [and] the challenge is learning how to better utilize the public cloud," Riley said. A growing dependence on the cloud will also force businesses to rethink the way they approach network security, said Lawrence Orans, research vice president at Gartner, in a session on the subject. The future of network security is in the cloud, and security teams must keep up. The changes related to cloud adoption extend to the security operations center, which analysts anticipate will take a different form as more businesses depend on the cloud, adopt cloud security tools, and support fully remote teams. These shifts will demand a change in thinking for security operations teams.


How Centralized Log Management Can Save Your Company

Dropping all logs into a SIEM spikes costs, so oftentimes only a portion is collected, which creates fragmented or incomplete pictures and impacts security monitoring and incident response. CLMs lift the burden of having to hire staff and provide training and support for SIEMs. CLMs also reduce the costs organizations would incur with their SIEM providers, as well as the risk of endangering the SIEM infrastructure by storing unmanaged logs. Fragmented data collection can become unified data collection with a data highway. Organizations can now filter unruly data and deliver only what each team needs. This helps overcome the age-old strategy of letting separate teams have their own sources of data, which could instead be directed to the appropriate team via your data highway. The data highway lets you collect once and use it many times, where it’s needed. ... One example of superfluous information is the timed mark that many applications write to their logs to show they are online. Unless a security auditor will need to see this, there is no reason why an organization should be paying to store it in their SIEM. Administrators are even able to filter out all extraneous text and add parsing for specific events.
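
A minimal sketch of that kind of filtering is shown below, using the classic syslog "-- MARK --" heartbeat as the example of extraneous text dropped before logs are forwarded; the log format and the forward() destination are assumptions, not a particular vendor's pipeline.

```python
# Rough sketch: drop heartbeat "MARK" lines before forwarding logs onward,
# so the organization is not paying the SIEM to store them.
import re

HEARTBEAT = re.compile(r"--\s*MARK\s*--")      # classic syslog timed mark

def forward(line):
    # Placeholder for shipping the line to the SIEM or data highway.
    print("forwarded:", line.rstrip())

def filter_and_forward(lines):
    kept = dropped = 0
    for line in lines:
        if HEARTBEAT.search(line):
            dropped += 1                        # extraneous: do not store
            continue
        forward(line)
        kept += 1
    return kept, dropped

if __name__ == "__main__":
    with open("central.log") as log:            # placeholder input
        kept, dropped = filter_and_forward(log)
        print(f"kept {kept} lines, dropped {dropped} heartbeat marks")
```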


Applying Chaos Engineering in Healthcare: Getting Started with Sensitive Workloads

With critical systems, it can be a good idea to first run experiments in your dev/test type environments to minimize both actual and perceived risk. As you learn new things from these early experiments, you can explain to stakeholders that production is a larger and more complex environment which would further benefit from this practice. Equally, before introducing something like this in production, you want to be confident that you have a safe approach, one that allows you to be surprised by new findings without introducing additional risk. As a next step, consider running chaos experiments in a new production environment before it is handling live traffic by generating synthetic workloads. You get the benefit of starting to test some of the boundaries of the system in its production configuration, and it is easy for other stakeholders to understand how this will be applied and that it will not introduce added risks to customers, since live traffic isn’t being handled yet. To start introducing more realistic workloads than you can get from synthetic traffic, a next step may be to leverage your existing production traffic.
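
To give the synthetic-workload step a concrete shape, here is a minimal, hypothetical sketch that drives a steady trickle of requests at a not-yet-live environment while a chaos experiment runs; the endpoint, rate, and duration are placeholders rather than a specific tool's configuration.

```python
# Minimal sketch: generate steady synthetic traffic against a new production
# environment that is not yet serving live users, and report success/failure.
import time
import urllib.request

ENDPOINT = "https://newprod.example.com/healthz"   # hypothetical target
REQUESTS_PER_SECOND = 2
DURATION_SECONDS = 60

def run_synthetic_load():
    ok = failed = 0
    deadline = time.time() + DURATION_SECONDS
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(ENDPOINT, timeout=5) as resp:
                if resp.status == 200:
                    ok += 1
                else:
                    failed += 1
        except OSError:                            # covers URLError and timeouts
            failed += 1
        time.sleep(1 / REQUESTS_PER_SECOND)
    print(f"synthetic load finished: {ok} ok, {failed} failed")

if __name__ == "__main__":
    run_synthetic_load()
```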



Quote for the day:

"Challenges in life always seek leaders and leaders seek challenges." -- Wayde Goodall