Is Your Cybersecurity Workforce Ready To Win Against Cybercriminals?
A trained staff is a critical business asset when it comes to handling information security projects. Whether your company is undertaking a simple privileged access management (PAM) project or implementing a complex continuous adaptive risk and trust assessment (CARTA)-based strategy, success depends on employee competency. Now that you have a training plan, implement it by assigning specific information security certifications or training modules to each employee, and measure the effectiveness and quality of execution against your business goals. ... The ultimate goal is to foster a cybersecurity culture across the organization. This is a tough task because it involves the human aspects of cybersecurity. Be prepared for resistance, and plan efforts to address employee concerns in an understanding and open manner. Empathy will get you to your goals faster than issuing strict directives and hoping employees will follow. Make cybersecurity practices a routine part of your business processes as well as your strategic planning. This 360-degree approach will become your best defense against information security risks.
For enterprise developers attempting to meet the highly specialized needs of a vertical and the expectations of tech-savvy users, low-code platforms are a way to handle the scalability, data management, architecture and security concerns that hold back internal bespoke software projects. To be worth the money, a low-code platform must be flexible enough to build almost any app securely, even if it's only for internal users, said AbbVie's Cattapan. Low-code examples at the company range from a shipment management app that tracks chemicals around its labs and manufacturing campus, to a reporting app related to drug approval rules in more than 200 countries. To serve these purposes, a low-code platform has to scale in diverse situations: "We might have a really large dataset ... and we want the app server next to the data, but we also want the option to have it up in the cloud," Cattapan said.
While many of the security issues at Equifax in 2017 have been discussed in lawsuits, investigations and news media reports, the new indictments offer some additional details of what happened starting in May of that year. After exploiting the vulnerability in Apache Struts, the hackers allegedly accessed Equifax's online dispute portal to gain a foothold within the corporate network and steal more credentials, according to the indictment. After that, the four hackers spent several weeks mapping the network and running queries to understand which databases they could access and which ones held the personal data and intellectual property they were seeking, the indictment says. The hackers ran about 9,000 queries within the network over the course of several months, it adds. "Once they accessed files of interest, the conspirators then stored the stolen information in temporary output files, compressed and divided the files, and ultimately were able to download and exfiltrate the data from Equifax's network to computers outside the United States," prosecutors say.
Though many may imagine servers as rows of tall, boxy machines, in recent years servers have gone mobile, enabling edge computing on the road. Vehicle servers are a boon to law enforcement officers, who can avoid spending precious time on tasks such as manually keying in a license plate number to check suspicious vehicles. Police cruisers equipped with servers such as NEXCOM's MVS series of vehicle servers powered by Intel® Core and Intel Atom processors can quickly decode images of cars taken by a cruiser's rooftop camera, identify license plates, and determine whether they're listed in a database of vehicles of interest to law enforcement. ... Machines can see what humans miss. Imagine a failing motor on a factory floor begins to vibrate more quickly. That initial, negligible acceleration won't be noticeable to workers. But an electronic vibration sensor detects it, triggering analysis by predictive maintenance software. The software notifies personnel, who address the problem before it leads to a costly equipment breakdown. Edge computing helps manufacturers make the most efficient use of predictive maintenance technology.
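The predictive maintenance loop described above, a reading drifting past its normal baseline and triggering an alert before the breakdown, is simple to sketch. Here is a minimal Python illustration; the window size, alert ratio, and the idea of tracking a per-motor baseline are illustrative assumptions, not any vendor's API:

```python
import statistics
from collections import deque

BASELINE_WINDOW = 500   # recent readings that define "normal" vibration
ALERT_RATIO = 1.5       # alert at 150% or more of the baseline level

baseline = deque(maxlen=BASELINE_WINDOW)

def check_vibration(rms_reading: float) -> bool:
    """Return True when a reading should trigger a maintenance alert."""
    if len(baseline) == BASELINE_WINDOW:
        normal = statistics.mean(baseline)
        if normal > 0 and rms_reading / normal >= ALERT_RATIO:
            return True  # anomalous: deliberately excluded from the baseline
    baseline.append(rms_reading)  # only normal readings update the baseline
    return False
```

Running a check like this on an edge server next to the sensor is what makes the pattern efficient: raw vibration samples never cross the network, only the rare alert does.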
IBM highlights new approach to infuse knowledge into NLP models
There have been two schools of thought, or "camps," since the beginning of AI: one has focused on neural networks/deep learning, which have been highly effective and successful in the past several years, said David Cox, director of the MIT-IBM Watson AI Lab. Neural networks and deep learning need data and additional compute power to thrive, and the mass digitization of data has driven what Cox called "the neural networks/deep learning revolution." Symbolic AI is the other camp, and it takes the point of view that there are things you know about the world around you based on reason, he said. However, "all the excitement in the last six years about AI has been about deep learning and neural networks," Cox said. Now, "there's a growing idea that just as neural networks needed something like data and compute for a resurgence, symbolic AI needed something," and the researchers theorized that maybe what it needs is neural networks, he said. There was a sense among researchers that the two camps could complement each other, each one's strengths offsetting the other's weaknesses, Cox said.
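A toy sketch of the hybrid Cox describes: a neural module handles perception, and explicit symbolic rules reason over its output. Both pieces here are hypothetical stand-ins, not IBM's actual approach:

```python
def neural_perceive(image) -> dict:
    """Stand-in for a trained network: maps an input to concept confidences."""
    return {"cube": 0.92, "sphere": 0.88, "red": 0.95}

# Symbolic layer: rules the system "knows" by reason, not from training data.
RULES = [
    ("stackable", lambda facts: "cube" in facts),    # cubes can be stacked
    ("can_roll",  lambda facts: "sphere" in facts),  # spheres can roll
]

def reason(image, threshold: float = 0.8) -> list:
    # Neural step: keep only confidently perceived concepts as "facts".
    facts = {c for c, p in neural_perceive(image).items() if p >= threshold}
    # Symbolic step: derive conclusions no one trained the network to output.
    return [conclusion for conclusion, rule in RULES if rule(facts)]

print(reason(image=None))  # -> ['stackable', 'can_roll']
```

The division of labor is the point: the network supplies the facts that are hard to hand-code, and the rules supply the reasoning that is hard to learn from data.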
The Kongo Problem: Building a Scalable IoT Application with Apache Kafka
Kafka is a distributed stream processing system that enables distributed producers to send messages to distributed consumers via a Kafka cluster. Simply put, it's a way of delivering messages where you want them to go. Kafka is particularly advantageous because it offers high throughput, low latency, powerful horizontal scalability, and the high reliability necessary in production environments. It also enables zero data loss, and brings the advantages of being open source and a well-supported Apache project. At the same time, Kafka allows the use of heterogeneous data sources and sinks – a key feature for IoT applications, which can leverage Kafka to combine heterogeneous sources into a single system. To achieve high throughput, low latency and horizontal scalability, Kafka was designed around a "dumb" broker and "smart" consumers. This results in different trade-offs in functionality and performance compared to other messaging technologies such as RabbitMQ and Pulsar.
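A minimal producer/consumer pair, using the third-party kafka-python client, shows the "dumb broker, smart consumer" split in practice: the broker just appends messages to a log, while each consumer tracks its own offset. The topic name, key, and broker address below are assumptions for illustration:

```python
from kafka import KafkaProducer, KafkaConsumer  # pip install kafka-python

# Producer side: append messages to the broker's log and move on.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("sensor-readings", key=b"truck-42", value=b'{"temp_c": 4.1}')
producer.flush()

# Consumer side: the "smart" half. It joins a group, tracks its own
# offset, and decides where in the log to start reading.
consumer = KafkaConsumer(
    "sensor-readings",
    bootstrap_servers="localhost:9092",
    group_id="cold-chain-monitor",
    auto_offset_reset="earliest",
)
for message in consumer:
    print(message.key, message.value, message.offset)
```

Because consumers manage their own position in the log, any number of heterogeneous consumers can read the same stream independently, which is what makes fanning many IoT sources into a single system cheap.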
Deep Instinct nabs $43M for a deep-learning cybersecurity solution that can suss an attack before it happens
“Deep Instinct is the first and currently the only company to apply end-to-end deep learning to cybersecurity,” he said in an interview. In his view, this provides a more advanced form of threat protection than the traditional machine learning solutions common in the market, which rely on feature extraction performed by humans; that leaves them limited by the knowledge and experience of the security expert, and able to analyze only a very small part of the available data (less than 2%, he says). “Therefore, traditional machine learning-based solutions and other forms of AI have low detection rates of new, unseen malware and generate high false-positive rates.” There’s been a growing body of research that supports this idea, although we’ve not seen many deep learning cybersecurity solutions emerge as a result (not yet, anyway). He adds that deep learning is the only AI-based autonomous system that can “learn from any raw data, as it’s not limited by an expert’s technological knowledge.” In other words, it’s not based just on what a human feeds into the algorithm, but on huge swathes of big data, sourced from servers, mobile devices and other endpoints, that are fed in and read automatically by the system.
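The "learns from raw data" claim corresponds to published architectures such as MalConv, which classify a file by convolving directly over its bytes rather than over expert-chosen features. A schematic PyTorch sketch in that spirit; it is emphatically not Deep Instinct's proprietary model:

```python
import torch
import torch.nn as nn

class RawByteClassifier(nn.Module):
    """Schematic end-to-end detector: learns straight from file bytes,
    with no human-defined feature extraction (in the spirit of MalConv)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(256, 8)                       # one vector per byte value
        self.conv = nn.Conv1d(8, 64, kernel_size=16, stride=8)  # scan windows of bytes
        self.head = nn.Linear(64, 1)

    def forward(self, byte_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(byte_ids).transpose(1, 2)  # (batch, channels, length)
        x = torch.relu(self.conv(x))
        x = x.max(dim=2).values                   # strongest response anywhere in the file
        return torch.sigmoid(self.head(x))        # P(malicious)

model = RawByteClassifier()
fake_file = torch.randint(0, 256, (1, 4096))  # a file's first 4 KB, as raw bytes
print(model(fake_file))                       # ~0.5 before any training
```

The contrast with traditional pipelines is in what is absent: no imports table, no string statistics, no expert-chosen indicators. The features are whatever the convolution learns from the bytes themselves.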
What Differentiates AI Leaders, According To A Founder Of Globant
Given that AI is so laden with ambiguity, companies often lack clarity in determining what AI can do for them and how to build roadmaps that will empower them to implement the technology most effectively. What’s more, half of organizations don’t have a clear definition of how employees and AI will most productively work together. To succeed, organizations must work to define the role of AI in their workplace and the ideal relationship between AI and employees. Armed with this knowledge, organizations will be primed to adopt the AI solution most appropriate for their business and customer needs. Recognizing that companies face an uphill battle to understand how AI can help them realize their organizational objectives, Globant has embraced a unique organizational structure called “Agile Pods.” Pods are multidimensional teams composed of members from Globant’s various Studios; they serve as customer-facing service delivery teams and help ensure that solutions are built and implemented with a customer-first mindset.
Rethinking change control in software engineering
Programmers who make mistakes with their conditional feature flags could accidentally deploy a change to production when it is supposed to stay dark, and they might not be able to roll it back -- not easily, at least. The key to using feature flags is to place them where they make sense and to make disciplined decisions about the risk they create (see the sketch below). A key issue in change control in software engineering is figuring out whom change control affects and how. If nearly everyone is affected by a change -- a likely scenario for teams contributing to a single mobile app deployment -- there tends to be heavy regression testing, triage meetings, go/no-go meetings and documentation. This bureaucratic process often adds cost and delay, and it can be difficult to see where exactly the process provides value. One way to cut away barrier-inducing change control processes is to isolate the impact of changes.
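A minimal sketch of a fail-closed flag check; the flag store (environment variables), the flag name, and the pricing functions are all hypothetical:

```python
import os

def flag_enabled(name: str, default: bool = False) -> bool:
    """Read a feature flag, failing CLOSED: a missing or malformed
    value keeps the guarded change dark."""
    raw = os.environ.get(f"FLAG_{name}")
    if raw is None:
        return default  # absent flag -> safe default, never "on"
    return raw.strip().lower() in ("1", "true", "on")

def legacy_pricing(cart):   # the proven path, live by default
    return sum(cart)

def new_pricing(cart):      # the dark change behind the flag
    return round(sum(cart) * 0.95, 2)

def checkout(cart):
    if flag_enabled("NEW_PRICING"):  # hypothetical flag name
        return new_pricing(cart)
    return legacy_pricing(cart)

print(checkout([10.0, 5.0]))  # 15.0 until FLAG_NEW_PRICING=true is set
```

Defaulting to off means an operator mistake or an unreachable flag store leaves the old path live rather than exposing the unfinished one, which is exactly the "stays dark until explicitly enabled" property, and the rollback, that the passage is after.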
5 biggest mistakes developers can make in a job interview
Interviews can be nerve-racking, but developers must avoid letting that apprehension take over their thought processes, said Tomás Pueyo, vice president of growth at Course Hero. "The biggest mistake I see when interviewing tech candidates is jumping to solutions before understanding the problem," Pueyo said. "Candidates are eager to answer questions, so they believe the faster they come up with a solution, the cleverer they will sound. But this is not what our job is about." "In tech, we deal with massive amounts of data, solving problems that are frequently unclear. A key marker of wisdom is taking a step back, gathering all the available information, understanding it, and only then jumping to solutions," Pueyo added. While interviews do focus on questioning the interviewee, the candidate should also have their own questions prepared, Hill said. "As a hiring manager, I expect the candidate to come with their own questions. That's how I know that they're enthusiastic about the company, and that they're eager to learn and improve," Hill noted.
Quote for the day:
"Leadership is about carrying on when everyone else has given up" -- Gordon Tredgold