Daily Tech Digest - September 20, 2019

Digitalization: Welcome to the City 4.0

Applied to cities, digitalization can not only improve efficiency by minimizing wasted time and resources but also improve a city’s productivity, secure growth, and drive economic activity. The Finnish capital of Helsinki is currently in the process of proving this. An early adopter of smart city technology and modeling, it launched the Helsinki 3D+ project to create a three-dimensional representation of the city using reality capture technology provided by the software company Bentley Systems for geocoordination, evaluation of options, modeling, and visualization. The project’s aim is to improve the city’s internal services and processes and provide data for further smart city development. Upon completion, Helsinki’s 3-D city model will be shared as open data to encourage commercial and academic research and development. Thanks to the available data and analytics, the city will be able to drive its green agenda in a way that is much more focused on sustainable consumption of natural resources and a healthy environment.



How to decommission a data center

"They need to know what they have. That’s the most basic. What equipment do you have? What apps live on what device? And what data lives on each device?” says Ralph Schwarzbach, who worked as a security and decommissioning expert with Verisign and Symantec before retiring. All that information should be in a configuration management database (CMDB), which serves as a repository for configuration data pertaining to physical and virtual IT assets. A CMDB “is a popular tool, but having the tool and processes in place to maintain data accuracy are two distinct things," Schwarzbach says. A CMDB is a necessity for asset inventory, but “any good CMDB is only as good as the data you put in it,” says Al DeRose, a senior IT director responsible for infrastructure design, implementation and management at a large media firm. “If your asset management department is very good at entering data, your CMDB is great. [In] my experience, smaller companies will do a better job of assets. Larger companies, because of the breadth of their space, aren’t so good at knowing what their assets are, but they are getting better.”


The Problem With “Cloud Native”

The problem is thinking about and creating a common understanding around a change that big. Here the industry does itself no favors. For years, many people thought cloud technology was somehow part of the atmosphere itself. In reality, few things are so very physical: Big public cloud computing vendors like Amazon Web Services, Microsoft Azure, and Google Cloud each operate globe-spanning systems, with millions of computer servers connected by hundreds of thousands of miles of fiber-optic cable. Most people now know the basics of cloud computing, but understanding it remains a problem. Take a current popular term, “cloud native.” Information technologists use it to describe strategies, people, teams, and companies that “get” the cloud and use it for maximum utility. Others use it to describe an approach to building, deploying, and managing things in a cloud computing environment. Usage differs from person to person. Whether it’s referring to people or software, “cloud native” is shorthand for operating with the fullest power of the cloud.


Why You Need a Cyber Hygiene Program

Well-known campaigns and breaches either begin with or are accelerated by breakdowns in the most mundane areas of security and system management. Unpatched systems, misconfigured protections, overprivileged accounts and pervasively interconnected internal networks all make the initial intrusion easier and make the lateral spread of an attack almost inevitable. I use the phrase “cyber hygiene” to describe the simple but overlooked security housekeeping that ensures visibility across the organization’s estate, that highlights latent vulnerability in unpatched systems and that encourages periodic review of network topologies and account or role permissions. These are not complex security tasks like threat hunting or forensic root cause analysis; they are simple, administrative functions that can provide value far in excess of more expensive and intrusive later-stage security investments. ... The execution of most cyber hygiene tasks falls squarely on the shoulders of the IT, network and support teams.
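As an illustration of how mundane these checks are, here is a small sketch that flags unpatched hosts and privileged accounts overdue for review from inventory data. The data shapes, role names, and cutoff dates are assumptions made for the example, not a prescribed hygiene standard.

```typescript
// Illustrative "cyber hygiene" report: flag unpatched hosts and overprivileged
// accounts overdue for review. Data shapes and thresholds are hypothetical.

interface Host { name: string; lastPatched: string; }
interface Account { user: string; roles: string[]; lastReviewed: string; }

const hosts: Host[] = [
  { name: "web-01", lastPatched: "2019-03-10" },
  { name: "web-02", lastPatched: "2019-09-01" },
];
const accounts: Account[] = [
  { user: "svc-report", roles: ["domain-admin"], lastReviewed: "2018-01-15" },
];

const patchCutoff = new Date("2019-06-01").getTime();
const unpatched = hosts.filter(h => new Date(h.lastPatched).getTime() < patchCutoff);

const reviewCutoff = new Date("2019-01-01").getTime();
const overdueReviews = accounts.filter(
  a => a.roles.includes("domain-admin") && new Date(a.lastReviewed).getTime() < reviewCutoff
);

console.log("Hosts overdue for patching:", unpatched.map(h => h.name));
console.log("Privileged accounts overdue for review:", overdueReviews.map(a => a.user));
```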


A Beginner's Guide to Microsegmentation

Security experts overwhelmingly agree that visibility issues are the biggest obstacles standing in the way of successful microsegmentation deployments. The more granularly segments are broken down, the better the IT organization needs to understand exactly how data flows and how systems, applications, and services communicate with one another. "You not only need to know what flows are going through your route gateways, but you also need to see down to the individual host, whether physical or virtualized," says Jarrod Stenberg, director and chief information security architect at Entrust Datacard. "You must have the infrastructure and tooling in place to get this information, or your implementation is likely to fail." This is why any successful microsegmentation effort needs to start with a thorough discovery and mapping process. As part of that, organizations should either dig up or develop thorough documentation of their applications, says Stenberg, who explains that documentation will be needed to support all future microsegmentation policy decisions and to ensure the app keeps working the way it is supposed to.
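As a rough sketch of what the discovery and mapping step produces, the snippet below aggregates observed connections into a per-host flow map that later policy decisions could draw on. The flow records are made-up sample data, not output from any particular capture tool.

```typescript
// Illustrative discovery/mapping step: group observed connections by source host
// so each host's outbound dependencies are visible. Sample data is hypothetical.

interface Flow { srcHost: string; dstHost: string; dstPort: number; protocol: "tcp" | "udp"; }

const observedFlows: Flow[] = [
  { srcHost: "web-01", dstHost: "app-01", dstPort: 8443, protocol: "tcp" },
  { srcHost: "app-01", dstHost: "db-01",  dstPort: 5432, protocol: "tcp" },
  { srcHost: "web-01", dstHost: "db-01",  dstPort: 5432, protocol: "tcp" }, // unexpected path worth reviewing
];

const flowMap = new Map<string, Flow[]>();
for (const f of observedFlows) {
  flowMap.set(f.srcHost, [...(flowMap.get(f.srcHost) ?? []), f]);
}

for (const [src, flows] of flowMap) {
  console.log(`${src} talks to: ${flows.map(f => `${f.dstHost}:${f.dstPort}/${f.protocol}`).join(", ")}`);
}
```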


Cryptomining Botnet Smominru Returns With a Vengeance

Smominru uses a number of methods to compromise devices. For example, in addition to exploiting the EternalBlue vulnerability found in certain versions of Windows, it uses brute-force attacks against MS-SQL, Remote Desktop Protocol and Telnet, according to the Guardicore report. Once the botnet compromises a system, a PowerShell script named blueps.txt is downloaded onto the machine to run a number of operations, including downloading and executing three binary files - a worm downloader, a Trojan and a Master Boot Record (MBR) rootkit, Guardicore researchers found. Malicious payloads move through the network via the worm module. The PcShare open-source Trojan has a number of jobs, including acting as the command-and-control channel, capturing screenshots and stealing information, and most likely downloading a Monero cryptominer, the report notes. The group behind the botnet uses almost 20 scripts and binary payloads in its attacks. It also uses various backdoors in different parts of the attack, the researchers report, including newly created users, scheduled tasks, Windows Management Instrumentation objects and services that run when the system boots.


How to prevent lingering software quality issues


To build in quality, Gruver advocates that IT undertake systematic approaches to software testing. In manufacturing, building in quality entails designing a process that helps improve the final product, while in IT that approach is about producing a higher-quality application. Yet software quality and usability issues are, in many ways, harder to diagnose than problems in physical goods manufacturing. "In manufacturing, we can watch a product coming together and see if there's going to be interference between different parts," Gruver writes in the book. "In software, it's hard to see quality issues. The primary way that we start to see the product quality in software is with testing. Even then, it is difficult to find the source of the problem." Gruver recommends that software teams put together a repeatable deployment pipeline, which enables them to have a "stable quality signal" that informs the relevant parties as to whether the amount of variation in performance and quality between software builds is acceptable.
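One way to picture a "stable quality signal" is a simple gate in the deployment pipeline that compares a new build's metrics against the previous build and fails when the variation exceeds agreed thresholds. The metric names and thresholds below are illustrative assumptions, not taken from Gruver's book.

```typescript
// Hypothetical pipeline quality gate: fail the stage when variation between
// builds exceeds agreed thresholds. Metrics and limits are illustrative only.

interface BuildMetrics { passRate: number; p95LatencyMs: number; }

function qualitySignal(prev: BuildMetrics, current: BuildMetrics): { ok: boolean; reasons: string[] } {
  const reasons: string[] = [];
  if (current.passRate < prev.passRate - 0.02) {
    reasons.push(`test pass rate dropped from ${prev.passRate} to ${current.passRate}`);
  }
  if (current.p95LatencyMs > prev.p95LatencyMs * 1.1) {
    reasons.push(`p95 latency regressed from ${prev.p95LatencyMs}ms to ${current.p95LatencyMs}ms`);
  }
  return { ok: reasons.length === 0, reasons };
}

const result = qualitySignal(
  { passRate: 0.99, p95LatencyMs: 180 },
  { passRate: 0.95, p95LatencyMs: 210 },
);
if (!result.ok) {
  console.error("Quality gate failed:", result.reasons.join("; "));
  process.exit(1); // fail the pipeline stage so the variation is investigated
}
```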


The arrival of 'multicloud 2.0'

What’s helpful about the federated Kubernetes approach is that this architecture makes it easy to deal with multiple clusters running on multiple clouds. It does this using two major building blocks. First is the capability of syncing resources across clusters. As you may expect, this is the core challenge for those deploying multicloud Kubernetes. Mechanisms within Kubernetes can automatically sync deployments across multiple clusters running on many public clouds. Second is intercluster discovery. This means the capability of automatically configuring DNS servers and load balancers with backends supporting all clusters running across many public clouds. The benefits of leveraging multicloud/federated Kubernetes include high availability, considering you can replicate active/active clusters across multiple public clouds. Thus, if one has an outage, the other can pick up the processing without missing a beat. You also avoid that dreaded provider lock-in, because Kubernetes is the abstraction layer that removes you from the complexities and native details of each public cloud provider.
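As a simplified stand-in for the resource-syncing building block, the sketch below applies the same manifest to several clusters, one per cloud, by targeting different kubectl contexts. Real federated Kubernetes automates this; the context names and manifest path here are assumptions made for illustration.

```typescript
// Simplified cross-cluster sync: apply one manifest to multiple clusters by
// switching kubectl contexts. Context names and the manifest path are made up;
// federation tooling automates this same idea.

import { execFile } from "child_process";
import { promisify } from "util";

const run = promisify(execFile);

const clusterContexts = ["aws-us-east-1", "azure-westeurope", "gcp-europe-west1"];

async function syncManifest(manifestPath: string): Promise<void> {
  for (const context of clusterContexts) {
    // kubectl's --context flag targets a specific cluster from the local kubeconfig.
    const { stdout } = await run("kubectl", ["--context", context, "apply", "-f", manifestPath]);
    console.log(`[${context}] ${stdout.trim()}`);
  }
}

syncManifest("deployment.yaml").catch(err => {
  console.error("Sync failed:", err);
  process.exit(1);
});
```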


Microservices With Node.js: Scalable, Superior, and Secure Apps

Node.js is designed to make building highly scalable apps easier through its non-blocking I/O and event-driven model, which makes it suitable for data-centric and real-time apps. Node.js is highly suitable for real-time collaboration tools, streaming and networking apps, and data-intensive applications. Microservices, on the other hand, make it easy for developers to create smaller services that are scalable, independent, loosely coupled, and very suitable for complex, large enterprise applications. The nature and goals of these two concepts align at the core, making them well suited to each other. Used together, they can power highly scalable applications and handle thousands of concurrent requests without slowing down the system. Microservices and Node.js have given rise to cultures like DevOps, where frequent, faster deliveries are of more value than the traditional long development cycle. Microservices are also closely associated with container orchestration; put another way, microservices are typically managed by a container platform, offering a modern way to design, develop, and deploy software.
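For a sense of why the two fit together, here is a minimal sketch of a single Node.js microservice: one small, independently deployable service whose event-driven, non-blocking HTTP handling serves many concurrent requests on a single thread. The endpoint, port, and response shape are arbitrary examples.

```typescript
// Minimal Node.js microservice sketch: the event loop handles each request as an
// event, so no thread blocks while waiting on I/O. Endpoint and port are arbitrary.

import { createServer } from "http";

const server = createServer((req, res) => {
  if (req.url === "/health") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
    return;
  }
  res.writeHead(404);
  res.end();
});

// A service this small stays independently deployable; a container platform can
// scale it out simply by running more instances behind a load balancer.
server.listen(3000, () => console.log("service listening on :3000"));
```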


Supply Chain Attacks: Hackers Hit IT Providers

Symantec says the group has hit at least 11 organizations, mostly in Saudi Arabia, and appears to have gained admin-level access to at least two organizations as part of its efforts to parlay hacks of IT providers into the ability to hack their many customers. In those two networks, it notes, attackers had managed to infect several hundred PCs with malware called Backdoor.Syskit. "This is an unusually large number of computers to be compromised in a targeted attack," Symantec's security researchers say in a report. "It is possible that the attackers were forced to infect many machines before finding those that were of most interest to them." Backdoor.Syskit is a Trojan, written in Delphi and .NET, that's designed to phone home to a command-and-control server and give attackers remote access to the infected system so they can push and execute additional malware on the endpoint, according to Symantec. The security firm first rolled out an anti-virus signature for the malware on Aug. 21. Symantec says attackers have in some cases also used PowerShell backdoors - also known as a living off the land attack, since it's tough to spot attackers' use of legitimate tools.



Quote for the day:


"A culture of discipline is not a principle of business; it is a principle of greatness." -- Jim Collins


Daily Tech Digest - September 19, 2019

Space internet service closer to becoming reality

Interestingly, though, a SpaceX filing made with the U.S. Federal Communications Commission (FCC) at the end of August seeks to modify its original FCC application because of results it discovered in its initial satellite deployment. SpaceX is now asking for permission to “re-space” previously authorized, yet unlaunched satellites. The company says it can optimize its constellation better by spreading the satellites out more. “This adjustment will accelerate coverage to southern states and U.S. territories, potentially expediting coverage to the southern continental United States by the end of the next hurricane season and reaching other U.S. territories by the following hurricane season,” the document says. Satellite internet is used extensively in disaster recovery. Should SpaceX's request be approved, it will speed up service deployment for the continental U.S. because fewer satellites will be needed. Because we are currently in a hurricane season (Atlantic basin hurricane seasons last from June 1 to Nov. 30 each year), one can assume the company is talking about services at the end of 2020 and the end of 2021, respectively.



Windows Defender malware scans are failing after a few seconds

The issue has been widely reported over the past two days on the Microsoft tech support forums, Reddit, and tech support sites like AskWoody, DeskModder, BornCity, and Bleeping Computer. The bug impacts Windows Defender version 4.18.1908.7 and later, released earlier this week. The bug was introduced while Microsoft tried to fix another bug introduced with the July 2019 Patch Tuesday. Per reports, the original bug broke "sfc /scannow," a command that is part of the Windows System File Checker utility and lets Windows users scan and fix corrupted files. After the July Patch Tuesday, this utility started flagging some of Windows Defender's internal modules as corrupted, resulting in incorrect error messages that fooled admins into believing there was something wrong with their Windows Defender installation and its updates. Microsoft announced a fix for the System File Checker bug in August, but the actual patch was delayed. When the fix arrived earlier this week, it didn't yield the expected results.


What does upstream and downstream development even mean?


If the flow of data goes toward the original source, that flow is upstream. If the flow of data goes away from the original source, that flow is downstream. ... The idea that either upstream or downstream could be superior depends on the commit. Say, for example, the developer of Application B makes a change to the application that adds a new feature unique to B. If this feature has no bearing on Application A, but does have a use in Application D, the only logical flow is downstream. If, on the other hand, the developer of Application D submits a change that would affect all other applications, then the flow should be upstream to the source (otherwise, the change wouldn't make it to applications B or C). ... An upstream flow of data has one major benefit (besides all forks gaining access to the commit). Let's say you're the developer of Application B and you've made a change to the core of the software. If you send that change downstream, you and the developer of D will benefit. However, when the developer of Application A makes a different change to the core of the software, and that change is sent downstream, it could overwrite the commit in Application B.
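The toy model below, built on simplified assumptions, shows the difference in reach: a change sent only downstream touches a project's own forks, while a change accepted upstream into the source propagates to every fork. The application names mirror the A through D example above; the propagation rules are illustrative only.

```typescript
// Toy model of upstream vs. downstream flow between an original project and its
// forks. Names and propagation rules are simplified for illustration only.

interface Project { name: string; commits: string[]; forks: Project[]; }

const appD: Project = { name: "Application D", commits: [], forks: [] };
const appB: Project = { name: "Application B", commits: [], forks: [appD] }; // D is forked from B
const appC: Project = { name: "Application C", commits: [], forks: [] };
const appA: Project = { name: "Application A", commits: [], forks: [appB, appC] }; // original source

// Downstream: a change flows away from where it was made, reaching only that project's forks.
function sendDownstream(project: Project, commit: string): void {
  project.commits.push(commit);
  project.forks.forEach(fork => sendDownstream(fork, commit));
}

// Upstream: a change is accepted into the original source first, then flows down to every fork.
function sendUpstream(source: Project, commit: string): void {
  sendDownstream(source, commit);
}

sendDownstream(appB, "feature unique to B");          // reaches B and D only
sendUpstream(appA, "core change accepted upstream");  // reaches A, B, C and D

console.log("A:", appA.commits, "C:", appC.commits, "D:", appD.commits);
```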


Soft Skills: Controlling your career

Projecting positivity is also a soft skill. The reality is that a busy IT department will achieve a lot, and there is much to focus on. Of the technical people I know, most are passionate about what they do. Passion drives excellence, but it also has a dark side that we see manifest in various IT "religious wars". It narrows the focus, closes the mind and prevents us from acknowledging any evidence that contradicts our beliefs. Passion is also a big turn-off for senior executives, who tend to prefer calmness. It is difficult to get the balance right between passion and dispassion. The best advice I have been given is that it is OK to hold strong opinions but important to hold them loosely. By all means be passionate and use it to drive you to put forward the best possible case for your chosen subject, but accept that others will have equally passionate views and either, or both, of you may be wrong. If you are not passionate, then you won't put forward convincing arguments or test hypotheses with sufficient rigour.


Creating ASP.NET Core Application with Docker Support

A Docker container packages the operating system, source code, environment variables (if any) and dependent components needed to run the software. So, if anyone wants to run your software, they can simply take the container and get started, without putting in the effort to set up a machine to make things work. ... You must often have heard developers saying: "It is working fine on my machine, but I don't know what is missing on your machine," or asking why the same software is not working on your machine. Such discussions usually pop up during the testing phase and, in my personal experience, it sometimes takes hours to identify that small missed-out dependency. Here, Docker comes to the rescue. With containerization, each and every dependency is packed into the container, and containers are available for both Linux and Windows. Hence, everyone using the software will have the same environment. Basically, the concept of Docker has all but eliminated the problem of mismatched environments. Isn't it amazing?


Why businesses would rather lose revenue than data


A big reason for cybersecurity issues is the lack of IT talent in SMBs, the report found. Half of businesses said they provide only a one-time security awareness IT training to staff. To address the skills gap, a third of companies (33%) said they currently outsource some of their IT activities, and another 40% said they plan to do so. Regardless, SMBs need a plan. "With regards to addressing security concerns, it's important to have several layers of security so that there's no way an outside 'silver bullet' can penetrate a system," Claudio said. "Making sure staff are aware of potential security threats, like phishing scams, is also crucial as they will usually be your first line of defense. Patch management and vulnerability assessment are also mission critical." ... "To support business continuity, it's important to have a great backup and disaster recovery program, including off-site data copy in the event of an emergency," Claudio noted. "Again, making sure you have access to the right IT resources and skill sets by utilizing a trusted outsourced service provider is essential."


Oracle goes all in on cloud automation

Talk to the cloud: Oracle rolls out more conversational interfaces at OpenWorld 2019
“Digital assistants and conversational UI are going to transform the way we interact with these applications, and just make things a lot easier to deal with,” Miranda says. They will also enable supply chain managers to check on delivery status, track deviations and report incidents, Oracle’s goal being to enable root-cause analysis of supply chain problems via the chat interface. In HR, Oracle HCM Cloud will chat with employees about onboarding and accessing their performance evaluations, while sales staff will be able to configure quotes using voice commands, Oracle says. Oracle and Amazon are famously combative, but Oracle is starting to adopt the same terminology Amazon uses for its Alexa virtual assistant, referring to extended dialogs to accomplish a goal as “conversations” and tasks that its digital assistants can help with as “skills.” R. “Ray” Wang, founder and principal analyst at Constellation Research, says Oracle’s effort to weave AI into all its apps is paying off. ... “It’s the long-term performance improvement of feedback loops. The next best actions are more than rudimentary. Think of the Digital Assistants plus Intelligent Document Recognition, and predictive planning as all tools to help drive more automation and augmented decisions in enterprise apps.”


Strengthen Distributed Teams with Social Conversations

"Cognitive trust is based on the confidence you feel in another person’s accomplishments, skills, and reliability while affective trust, arises from feelings of emotional closeness, empathy, or friendship." In your team, trust might be developed and sustained between individuals in different ways. Some of you will be looking out for how much others fulfill their offer of help, whether they deliver their work on time, and if their work is of high quality. Meanwhile, others will be looking for a more personal or social connection, looking for things they have in common with others—which is easier to find out during real-time conversations. Getting to know each other well requires having a mental image of the person, hearing their voice, seeing their facial expressions, and online meetings can help us achieve this. In this article, I suggest two ways to use meetings to strengthen your team relationships—incorporate social conversations into your scheduled meetings and hold online meetings for the specific purpose of reconnecting as colleagues.


DevSecOps veterans share security strategy, lessons learned


Once DevOps and IT security teams are aligned, the most important groundwork for improved DevOps security is to gather accurate data on IT assets and the IT environment, and give IT teams access to relevant data in context, practitioners said. "What you really want from [DevSecOps] models is to avoid making assumptions and to test those assumptions, because assumptions lead to vulnerability," Vehent said, recalling an incident at Mozilla where an assumption about SSL certificate expiration dates brought down Mozilla's add-ons service at launch. ... Once a strategy is in place, it's time to evaluate tools for security automation and visibility. Context is key in security monitoring, said Erkang Zheng, chief information security officer at LifeOmic Security, a healthcare software company, which also markets its internally developed security visibility tools as JupiterOne. "Attackers think in graphs, defenders think in lists, and that's how attackers win," Zheng said during a presentation. "Stop thinking in lists and tables, and start thinking in entities and relationships."
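To make the "entities and relationships" point concrete, here is a small sketch that models assets as graph nodes and edges and then asks a reachability question a flat list cannot answer directly. The entity names and relationship types are made up for illustration; this is not JupiterOne's actual data model.

```typescript
// Illustrative "think in graphs" sketch: model assets as entities and relationships,
// then ask what is reachable from the internet. Names are hypothetical.

type Entity = string;
interface Relationship { from: Entity; to: Entity; kind: string; }

const relationships: Relationship[] = [
  { from: "internet", to: "load-balancer", kind: "connects_to" },
  { from: "load-balancer", to: "web-app", kind: "routes_to" },
  { from: "web-app", to: "customer-db", kind: "reads_from" },
];

// Build an adjacency map from the relationships.
const edges = new Map<Entity, Entity[]>();
for (const r of relationships) {
  edges.set(r.from, [...(edges.get(r.from) ?? []), r.to]);
}

// Depth-first walk: everything transitively reachable from a starting entity.
function reachableFrom(start: Entity): Entity[] {
  const seen = new Set<Entity>();
  const stack: Entity[] = [start];
  while (stack.length > 0) {
    const node = stack.pop()!;
    for (const next of edges.get(node) ?? []) {
      if (!seen.has(next)) { seen.add(next); stack.push(next); }
    }
  }
  return [...seen];
}

console.log("Reachable from the internet:", reachableFrom("internet"));
```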


Cisco spreads ACI to Microsoft Azure, multicloud and SD-WAN environments

Key new pieces of ACI Anywhere include the ability to integrate Microsoft Azure clouds and a cloud-only implementation of ACI. Cisco has been working closely with Microsoft, and while previewing the Azure cloud support earlier this year it also added Azure Kubernetes Service (AKS) to the managed services that natively integrate with the Cisco Container Platform. With the Azure cloud extension, the service uses the Cisco Cloud APIC, which runs natively in the Azure public cloud to provide automated connectivity, policy translation and enhanced visibility of workloads in the public cloud, Cisco said. With the new Azure extensions, customers can tap into cloud workloads through ACI integrations with Azure technologies like Azure Monitor, Azure Resource Health and Azure Resource Manager to fine-tune their network operations for speed, flexibility and cost, Cisco stated. As part of the Azure package, the Cisco Cloud Services Router (CSR) 1000V brings connectivity between on-premises and Azure cloud environments.




Quote for the day:

"The leadership team is the most important asset of the company and can be its worst liability" -- Med Jones


Daily Tech Digest - September 18, 2019

The Seven Patterns Of AI

From autonomous vehicles, predictive analytics applications, facial recognition, to chatbots, virtual assistants, cognitive automation, and fraud detection, the use cases for AI are many. However, regardless of the application of AI, there is commonality to all these applications. Those who have implemented hundreds or even thousands of AI projects realize that, despite all this diversity in application, AI use cases fall into one or more of seven common patterns. The seven patterns are: hyperpersonalization, autonomous systems, predictive analytics and decision support, conversational/human interactions, patterns and anomalies, recognition systems, and goal-driven systems. Any customized approach to AI is going to require its own programming and pattern, but no matter what combination these trends are used in, they all follow their own pretty standard set of rules. ... While these might seem like discrete patterns that are implemented individually in typical AI projects, in reality, we have seen organizations combine one or more of these seven patterns to realize their goals. Thinking of AI projects in terms of these patterns will help companies better approach, plan, and execute them. In fact, emerging methodologies are focusing on the use of these seven patterns as a way to expedite AI project planning.



Aliro aims to make quantum computers usable by traditional programmers


Stages of quantum computing are generally divided by milestone. Quantum supremacy—the threshold at which quantum computers are theorized to be capable of solving problems that traditional computers would not (practically) be able to solve—is likely decades away. Quantum volume, a metric that "enables the comparison of hardware with widely different performance characteristics and quantifies the complexity of algorithms that can be run," according to IBM, has gained acceptance from NIST and analyst firm Gartner as a useful measure. Aliro proposes the idea of "quantum value" as the point at which organizations using high-performance computing today can achieve results from using quantum computers to accelerate their workloads. "We're dealing with enterprises that want to get business value from these machines... We're not ready for many levels of abstraction above the quantum hardware, but we're ready for a little bit. When you get down to the equivalent of the machine language, these things are very, very different, and it's not just what kind of qubits they are. It's noise characteristics, it's connectivity," Ricotta said. "Rigetti and IBM Q machines both use superconducting Josephson junctions and have around the same number—approximately the same order of magnitude—of qubits, but they are connected in different ways ..."


New hacking group targets IT companies in first stage of supply chain attacks


In two of the attacks, researchers found that hundreds of computers were compromised with malware, indicating that the attackers were simply infecting all the machines they could throughout the organisations in order to find key targets. The most recently recorded activity from Tortoiseshell was in July 2019, with attacks by the group identified by a unique custom payload: Backdoor.Syskit. This malware is built in both Delphi and .NET programming languages and secretly opens an initial backdoor onto compromised computers, allowing attackers to collect information including the IP address, the operating system version and the computer name. Syskit can also download and execute additional tools and commands, and Tortoiseshell attacks also deploy several publicly available tools as information stealers to gather data on user activity. While it remains uncertain how the malware is delivered, researchers suggest that it could potentially be distributed via a compromised web server, because in one instance the first indication of malware on the network was a compromised web shell – something that can provide an easy way into a targeted network.


How Ransomware Criminals Turn Friends into Enemies

As someone whose job it is to learn as much as possible about the online criminal ecosystem, I often spot trends before they make mainstream headlines. This type of attack was high on my list of attacks likely to increase. Supply chain attacks aren't new. They've been increasing in frequency, however, and gaining more attention. While there are many types of supply chain attacks, this particular type — compromising a service provider to gain access to its customers — is becoming more popular among skilled ransomware crews. ... Managing IT can be hard, especially for small and midsize businesses lacking the necessary resources. It probably seemed like a great idea for these small dental practices to outsource IT to Digital Dental Record. They're not alone. The managed services industry is growing extremely fast with businesses struggling to manage the technology required to run a modern establishment. With attacks on MSPs on the rise, MSPs need to step up their security game, regardless of the kind of specialized services they provide.


AI in cyber security: a necessity or too early to introduce?

Dr Leila Powell, lead security data scientist from Panaseer, agrees that “the key challenge for most security teams right now is getting hold of the data they need in order to get even a basic level of visibility on the fundamentals of how their security program is performing and how they measure up against regulatory frameworks like GDPR. This is not a trivial task! “With access to security relevant data controlled by multiple stakeholders from IT to MSSPs and tool vendors there can be a lot of red tape on top of the technical challenges of bringing together multiple siloed data sources. Then there’s data cleaning, standardisation, correlation and understanding — which often require a detailed knowledge of the idiosyncrasies of all the unique datasets. “As it stands, once all that work has gone in to data collection, the benefits of applying simple statistics cannot be underestimated. These provide plenty of new insights for teams to work through — most won’t even have the resources to deal with all of these, let alone additional alerting from ML solutions.


2019 Digital operations study for energy

Looking ahead to the next five years, the picture improves somewhat and offers more hope for the utilities sector. For instance, of the EMEA utilities surveyed by Strategy&, 5 percent said they had already implemented AI applications and another 9 percent said they had piloted such programs. That compares with 20 percent and 6 percent, respectively, for chemicals companies. But through 2024, including planned technologies, AI adoption in the utilities sector may increase by another 15 percent, according to the survey, and that would be on par with chemicals companies and just below oil and gas AI implementation. ... Many utilities make the mistake of trying to implement too many ambitious digital strategies at the same time and end up spreading their financial and staff resources, as well as their capabilities, too thin. A better approach is to define the three to five critical digitization efforts that are strategically essential to defending and expanding competitive advantage among startups and established power companies.


Microsoft brings IBM iron to Azure for on-premises migrations

Under the deal, Microsoft will take Power S922 servers from IBM and deploy them in an undisclosed Azure region. These machines can run the PowerVM hypervisor, which supports legacy IBM operating systems as well as Linux. "Migrating to the cloud by first replacing older technologies is time consuming and risky," said Brad Schick, CEO of Skytap, in a statement. "Skytap’s goal has always been to provide businesses with a path to get these systems into the cloud with little change and less risk. Working with Microsoft, we will bring Skytap’s native support for a wide range of legacy applications to Microsoft Azure, including those dependent on IBM i, AIX, and Linux on Power. This will give businesses the ability to extend the life of traditional systems and increase their value by modernizing with Azure services." As Power-based applications are modernized, Skytap will then bring in DevOps CI/CD toolchains to accelerate software delivery. After moving to Skytap on Azure, customers will be able to integrate Azure DevOps, in addition to CI/CD toolchains for Power, such as Eradani and UrbanCode.


Prepare for cloud security and shared responsibility


IT infrastructure teams typically control the platform from the ground up and through the OS layer. Admins work with security teams to ensure platforms are hardened and adhere to compliance needs. After the platform is built, infrastructure and security teams turn it over to the dev or application owners for final installations and deployments. Application owners still work with an infrastructure team to ensure security and compliance measures are maintained through the deployment process. Ideally, the platform gets a final verification from the security team. The same parties will still be involved and maintain that level of ownership and responsibility even if an organization uses automation. But this process gets upended when a cloud provider gets involved. AWS manages the hypervisor, hardware and, in some cases, the OS. This means the deployment process starts in the middle of the traditional application lifecycle rather than at the beginning. Admins have to find a way to contribute in an ecosystem where the infrastructure is run by another party.


Digital dexterity: What it is, why your organization needs it, and how CIOs can lead the charge


If you're not sure what digital dexterity is, you aren't alone. Craig Roth, Gartner Research vice president, explained it as "the ability and ambition to use technology for better business outcomes." That definition can still seem a bit fuzzy if you aren't sure where ability and ambition come into the successful use of tech in business, but digging down just a bit helps make the whole thing more understandable. Helen Poitevin, vice president and analyst at Gartner, expands the definition of digital dexterity by adding that it's less about tech skills and more about "a specific set of mindsets, beliefs and behaviors." ... So, where does the CIO fit into all of this? They're basically the cornerstone of the entire concept, said Daniel Sanchez Reina, senior director and analyst at Gartner. "The CIO will play a key role in supporting desired behaviors and changing the processes, procedures, policies and management practices that shape how work gets done to encourage desired behaviors." It can be tough to transform an entire organization from one that resists, or at the very least grudgingly accepts, new technology. CIOs have a tough road ahead of them, but that doesn't mean it's impossible.


New ransomware strain uses ‘overkill’ encryption to lock down your PC


FortiGuard Labs says that 2048- and 4096-bit keys are generally more than adequate to encrypt and secure messages, and so the use of an 8192-bit key is "overkill and inefficient for its purpose." "Using the longer key size adds a large overhead due to significantly longer key generation and encryption times [...] RSA-8192 can only encrypt 1024 bytes at a time, even less if we consider the reserved size for padding," the researchers note. "Since the configuration's size will surely be more than that due to the fact that it contains the encoded private key, the malware cuts the information into chunks of 1000 (0x3e8) bytes and performs multiple operations of the RSA-8192 until the entire information is encrypted." The heavy use of encryption means that it is "not practically possible" to decrypt a compromised system, according to the cybersecurity firm. This is unfortunate, as decryption programs offered by cybersecurity firms can sometimes be the only way to recover files lost to ransomware infections without paying up.
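The arithmetic behind the "1024 bytes at a time" limit is easy to check: an 8192-bit modulus is 1024 bytes, and standard padding reserves part of that. The padding overheads below assume the common PKCS#1 schemes; the report does not say which padding the malware actually uses.

```typescript
// Worked numbers behind RSA-8192's per-operation limit. Padding overheads assume
// standard PKCS#1 schemes; the malware's actual padding choice is not stated.

const keyBits = 8192;
const modulusBytes = keyBits / 8;                        // 1024 bytes: hard ceiling per RSA operation

const pkcs1v15Max = modulusBytes - 11;                   // PKCS#1 v1.5 reserves at least 11 padding bytes -> 1013
const sha256Len = 32;
const oaepSha256Max = modulusBytes - 2 * sha256Len - 2;  // OAEP with SHA-256 reserves 66 bytes -> 958

console.log(`Modulus size: ${modulusBytes} bytes`);
console.log(`Max plaintext per operation (PKCS#1 v1.5): ${pkcs1v15Max} bytes`);
console.log(`Max plaintext per operation (OAEP/SHA-256): ${oaepSha256Max} bytes`);
// Chunking at 1000 (0x3e8) bytes, as the report describes, fits within the PKCS#1 v1.5 limit.
```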



Quote for the day:


"Don't measure yourself by what you have accomplished. But by what you should have accomplished with your ability." -- John Wooden