Mastering Top 5 Skills For Tech Support Jobs with Upspir

Master the top 5 essential skills for a successful tech support career with UPSPIR. Enhance your technical foundation, communication, troubleshooting, time management, and exploration abilities to excel in this rapidly growing field. Start your journey today and unlock rewarding opportunities in the tech industry!

Technical Skills Alone are Not Enough in the IT Industry! 

Are you an aspiring IT professional? Wondering what soft skills you need to succeed in the industry? Check out our blog post on the importance of soft skills for aspiring IT professionals. From communication and problem-solving to adaptability and teamwork, we list the top soft skills you need to succeed in the IT industry. Don’t miss out – read our blog post now!

Vim Essentials: Tips and Tricks for Boosting Productivity

Do you spend a lot of time working in a command-line environment, editing and managing files using text editors? If so, then you know that having a good command over these tools can significantly increase your productivity and add value to your work. That’s where Vim comes in. If you’re new to Vim or just looking to brush up your skills, then come take a dive with me as we explore how Vim can benefit you and make your work more productive and enjoyable.

How to Prepare for a Technical Support Job Interview: Tips and Tricks

As you embark on your journey to land your dream technical job, it is important to note that technical interviews can be a bit tricky to navigate. However, with proper preparation and an understanding of the skills required for the job, you can ace any technical interview. In this article, we will outline how to prepare for a technical job interview and provide you with some tips to help you excel.

The Benefits of a Career in Technical Support

A technical support job may not be the first role that comes to mind when considering a move into the tech industry, but it is a great career option. Here we make the case for technical support jobs and explain why they are the perfect entry point into the tech industry. If you are interested in starting your career in tech, you’re in the right place.

10 Must-Have Soft Skills for a Techie

Building a career in tech certainly requires strong technical skills, but soft skills are an equally (if not more) important deciding factor in landing your dream tech job. To increase your chances of selection and build a successful career, you need to equip yourself with these soft skills. Here we look at the top ten soft skills you must possess for a thriving career as a techie.

Load Average in Linux Servers – Confusion Solved

With Linux distributions acting as servers for the majority of web applications and for Software as a Service, Infrastructure as a Service, and similar platforms, managing load on these installations is one of the most important tasks for system admins, SREs, and technical support teams.

How is Load measured?

In Linux, load is measured in two dimensions:

  • CPU Load: a measure of CPU over/under-utilization.
  • Load Average: the average system load over a period of time.

Let's look at these in detail.

The load average can be found by running the top command or the uptime command, as shown below.
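For illustration, the output looks roughly like this (the hostnames, uptimes, and numbers below are hypothetical):

# uptime
 10:14:32 up 12 days,  3:42,  2 users,  load average: 0.42, 0.58, 0.65

# top -bn1 | head -1
top - 10:14:40 up 12 days,  3:42,  2 users,  load average: 0.42, 0.58, 0.65

The three numbers at the end of each line are the load averages discussed below.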

There are many misconceptions and doubts about how to interpret the load average shown in Linux:

  • What is the load average shown in the top command?
  • What do these values represent?
  • When will it be high, and when low?
  • When should it be considered critical?
  • In which scenarios can it increase?

In this blog we will answer all of these questions.

What are the three values shown in the output above?

The three numbers represent averages over progressively longer periods of time (one-, five-, and fifteen-minute averages), and lower numbers are better. Higher numbers indicate a problem or an overloaded machine.

Before getting into what makes a good or bad value and the reasons that can affect these values, let us first understand them on a machine with a single-core processor.

The traffic analogy

A single-core CPU is like a single lane of traffic. Imagine you are a bridge operator … sometimes your bridge is so busy there are cars lined up to cross. You want to let folks know how traffic is moving on your bridge. A decent metric would be how many cars are waiting at a particular time. If no cars are waiting, incoming drivers know they can drive across right away. If cars are backed up, drivers know they’re in for delays.

So, Bridge Operator, what numbering system are you going to use? How about:

  • 0.00 means there’s no traffic on the bridge at all. In fact, between 0.00 and 1.00 means there’s no hold up, and an arriving car will just go right on.
  • 1.00 means the bridge is exactly at capacity. All is still good, but if traffic gets a little heavier, things are going to slow down.
  • over 1.00 means there’s a holdup. How much? Well, 2.00 means that there are two lanes worth of cars total — one lane’s worth on the bridge, and one lane’s worth waiting. 3.00 means there are three lanes worth total — one lane’s worth on the bridge, and two lanes’ worth waiting. Etc.

Like the bridge operator, you’d like your cars/processes to never be waiting. So, your CPU load should ideally stay below 1.00. Also, like the bridge operator, you are still ok if you get some temporary spikes above 1.00 … but when you’re consistently above 1.00, you need to worry.

So you’re saying the ideal load is 1.00?

Well, not exactly. The problem with a load of 1.00 is that you have no headroom. In practice, many sysadmins draw the line at 0.70: if the load average stays above that, it is time to investigate before things get worse.

But nowadays we have multi-core and multi-processor systems.

Got a quad-processor system? It’s still healthy with a load of 3.00.

On a multi-processor system, the load is relative to the number of processor cores available. The “100% utilization” mark is 1.00 on a single-core system, 2.00 on a dual-core, 4.00 on a quad-core, etc.

If we go back to the bridge analogy, the “1.00” really means “one lane’s worth of traffic”. On a one-lane bridge, that means it’s filled up. On a two-lane bridge, a load of 1.00 means it’s at 50% capacity — only one lane is full, so there’s another whole lane that can be filled.

Same with CPUs: a load of 1.00 is 100% CPU utilization on a single-core box. On a dual-core box, a load of 2.00 is 100% CPU utilization.

Which leads us to two new Rules of Thumb:

  • The “number of cores = max load” Rule of Thumb: on a multicore system, your load should not exceed the number of cores available.
  • The “cores is cores” Rule of Thumb: How the cores are spread out over CPUs doesn’t matter. Two quad-cores == four dual-cores == eight single-cores. It’s all eight cores for these purposes.
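A quick way to apply these rules is to compare the load average against the number of cores reported by nproc (the output below is illustrative):

# nproc
8

# uptime
 10:20:01 up 12 days,  3:48,  2 users,  load average: 5.20, 4.80, 4.10

Here a load of around 5 on 8 cores is still within capacity; the same load on a 4-core machine would mean sustained queuing.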

What if the load goes beyond the number of cores?

But what do we conclude if the load goes beyond the number of cores? Are we short on CPU cores?

Not exactly. We need to analyse the top command output further to come to a conclusion.

Impact of User Processes and System Processes

In the top output, the %Cpu(s) line shows the CPU consumed by

  • user processes (us) and
  • system processes (sy).

If these values together are around 99-100%, it means there is a crunch of CPU cores on your system or some process is consuming too much CPU. In that case, either add cores or optimize the application that is consuming the CPU.
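As an illustration (with hypothetical numbers), a CPU-bound system might show a line like this in top:

%Cpu(s): 92.3 us,  6.8 sy,  0.0 ni,  0.4 id,  0.2 wa,  0.0 hi,  0.3 si,  0.0 st

Here us + sy is close to 100%, so the load average climbs because runnable processes are queuing for CPU time.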

Now let's take another scenario:

Impact of disk read/write speed

In the top output, the wa value shows the amount of time the CPU spends waiting on input/output (I/O). If this value goes above, say, 80%, the load average on the server will also increase. It means either

  1. your disk read/write speed is slow, or
  2. your application is reading/writing more than the system can handle.

In this case, either diagnose your disk read/write speed or check why your application is reading/writing so much.
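For example (again with hypothetical numbers), an I/O-bound system might show:

%Cpu(s):  5.1 us,  3.2 sy,  0.0 ni, 10.4 id, 80.9 wa,  0.0 hi,  0.4 si,  0.0 st

Processes blocked on disk I/O still count towards the load average, which is why the load climbs even though us and sy are low.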

Now let's take one more scenario:

Impact of softirqs

If the si value in the %Cpu(s) line goes beyond a certain limit, it means softirqs are consuming CPU. This can be due to network or disk interrupts: either they are not getting enough CPU, or there is some misconfiguration in your Ethernet ports because of which the interrupt handlers cannot keep up with packets being received or transmitted. These types of problems occur more often in VM environments than on physical machines.
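A hypothetical top snippet for this case, along with a quick way to see which softirqs are busy, might look like this:

%Cpu(s): 10.2 us,  8.1 sy,  0.0 ni, 40.3 id,  0.5 wa,  0.0 hi, 40.9 si,  0.0 st

# watch -n1 cat /proc/softirqs

Watching /proc/softirqs shows the per-CPU counters (NET_RX, NET_TX, and so on) and which of them are growing fastest.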

Now, let's take one last scenario:

Impact of CPU Steal Time

This scenario is relevant when you are running in a virtual machine environment.

If %st (steal time) stays above a certain limit, say more than 50%, it means you are getting only half the CPU time from the host machine and someone else is consuming your CPU time, as you may be on shared CPU infrastructure. In this case too, the load can increase.
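A hypothetical example of what that looks like in top on an oversubscribed VM:

%Cpu(s): 20.4 us,  5.3 sy,  0.0 ni, 22.1 id,  0.2 wa,  0.0 hi,  0.5 si, 51.5 st

Here more than half of the CPU time is being stolen by the hypervisor, so even a modest workload produces a high load average.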

Summary

I hope this covers the major scenarios in which the load average can increase. Understanding the different parameters in the output of the top command can help you understand the performance of your Linux server better.

Stay tuned for more on further analysis: how to check disk read/write speed, how to find which application is consuming the most CPU, and how to check which interrupts are causing problems.

References: https://scoutapm.com/blog/understanding-load-averages

See Original Posts at hello-worlds

and at medium


Author

Sahil Aggarwal
Senior Software Engineer 3/Tech Lead at Ameyo

I am a tech professional who has been working in the IT industry for 10 years. I have worked on many technologies like Java, databases, Linux, networking, security, and a lot more. I generally believe in taking an outside-in approach to any new product I am looking at. This approach has helped me a lot in debugging issues in new products, and in other systems, throughout my journey. I like to write tech blogs and I have my own personal blog sites; I also write on Medium and DZone. I am very adaptable, switching between being an Individual Contributor and a Leader/Manager. Apart from technology, I like eating fast food and junk food. I also like to listen to and write shayari.

Uses of Tshark/Wireshark for beginners

Most of the time when we connect to the internet, we don’t think about the network protocols which work behind that make it all possible. Right now, while you are reading this article, many packets are being exchanged by your computer and traveling across the internet.

To understand these protocols, you need a tool that can capture and help you analyze these packets. Wireshark is a popular open source graphical user interface (GUI) tool for analyzing packets. However, it also provides a powerful command-line utility called TShark for people who prefer to work on the Linux command line.

Check your installation

First, ensure the required packages are installed:

# rpm -qa | grep -i wireshark

If the Wireshark package is installed, check whether the TShark utility is installed and, if so, which version:

# tshark -v

If you are logged in as a regular, non-root user, you need sudo rights to use the TShark utility. Root users can skip sudo and directly run the tshark command.

Useful tshark commands

  1. Display all tshark options available on your machine
    • # sudo tshark -h
  2. Capture network traffic with tshark by providing an interface
    • # sudo tshark -i <interface>
  3. Capture network packets and write them to the file traffic-capture.pcap
    • By using the -w option, the user can easily write all the output of tshark into a single file in pcap format.
    • # tshark -i <interface> -w <file-name>.pcap
  4. Read captured packets with tshark by providing an input pcap file
    • By using the -r option with tshark, the user can easily read a saved pcap file.
    • # tshark -r <file-name>.pcap
  5. Capture packets into a .pcap file for a particular duration
    • If the user wants to capture network traffic from the live network for a specific period of time, just use the -a option. The command below captures traffic for a particular duration.
    • # tshark -i <interface> -a duration:<time>
  6. Capture a specific number of packets
    • The tshark tool gives the user the flexibility to capture a specific number of packets.
    • # tshark -c <number> -i <interface>
  7. Capture only packets from a specific source or destination IP
    • This is one of the commands most used by security researchers and network engineers. If you want to filter traffic based on a specific IP, use the -f option.
    • # tshark -i <interface> -f "host <IP>"
  8. Capture only packets of a specific protocol
    • The example below shows how you can filter for a specific protocol while capturing with tshark.
    • # tshark -i <interface> -f "<protocol>"
    • Note: <protocol> may be tcp, udp, icmp, etc.
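Putting a few of these options together, here is a hypothetical end-to-end example (the interface name eth0 and the file name are assumptions; adjust them for your system):

# sudo tshark -i eth0 -c 100 -f "tcp" -w tcp-capture.pcap
# sudo tshark -r tcp-capture.pcap

The first command captures 100 TCP packets on eth0 and writes them to tcp-capture.pcap; the second reads the saved capture back and displays it.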

Conclusion

This short tutorial has equipped you to start using tshark for analyzing network traffic. You can combine different options in the same command to filter results more specifically to your requirements.


Author

Pravin Tewari
Senior Manager, Application and Cloud Support

Pravin is a visionary professional with over 11 years of experience in Technical Support, Cloud Infrastructure Management, and Customer Experience. He has hands-on experience in working across the lifecycle of project delivery and deployment, solution consulting, and support. He has deep experience in managing cloud deployments and implementing DevOps tools for automation to provide better uptime. Pravin has successfully led large product & cloud support teams, and coached & mentored a high-performing team that delivers high-quality service to customers.

Practical grep commands examples useful in real-world debugging

While troubleshooting any issue, log analysis is the most important step. Log files usually capture an enormous amount of information, and reading through them becomes a difficult and time-consuming task. In our daily debugging we need to analyze the log files of various products.

Analyzing logs to isolate and resolve issues can be complex and requires special debugging skills, gained through experience (or by god's grace). During debugging we might need to extract some data from the log files, or play with a log file in ways that cannot be done by just reading it line by line; we need special commands to reduce the overall effort and extract the specific information we seek.

There are many commands in Linux used for debugging, like grep, awk, sed, wc, taskset, ps, sort, uniq, cut, xargs, etc.

In this blog we will see some examples of the practical usage of grep, useful in real-world debugging in Linux. The examples in this blog are quite basic, but they are very useful in real life and a beginner should read them to enhance their debugging skills.

Grep (global search for a regular expression and print) is a Linux command that searches a file for a given pattern and displays the lines that match the pattern. The pattern is also referred to as a regular expression.

Let’s Go to the Practical Part.

Let's say we have a file "file1.log", which has the following lines.

[root@localhost playground]# cat file1.log
hello
i am sahil
i am software engineer
Sahil is a software engineer
sahil is a software engineer

Search for lines that contain a particular word

[root@localhost playground]# grep 'sahil' file1.log
i am sahil
sahil is a software engineer

Count the number of lines matching a particular word in a file

grep -c 'sahil' file1.log
2

Another way:

grep 'sahil' file1.log | wc -l
2

Search for all lines that contain a word (case-insensitive)

[root@localhost playground]# grep -i 'sahil' file1.log
i am sahil
Sahil is a software engineer
sahil is a software engineer

Search for lines in which either of two words is present in a file

[root@localhost playground]# grep -E 'sahil|software' file1.log
i am sahil
i am software engineer
Sahil is a software engineer
sahil is a software engineer

Search lines in which two words are present

[root@localhost playground]# grep 'sahil' file1.log | grep 'software'
sahil is a software engineer

Search lines excluding some word

[root@localhost playground]# grep -v 'sahil' file1.log
hello
i am software engineer
Sahil is a software engineer

Exclude words case insensitively

[root@localhost playground]# grep -iv 'sahil' file1.log
hello
i am software engineer

Search the lines that start with a string

[root@localhost playground]# grep '^sahil' file1.log
sahil is a software engineer

Search the lines that end with a string

grep 'engineer$' file1.log
i am software engineer
Sahil is a software engineer
sahil is a software engineer

Getting n lines after each match

[root@localhost playground]# grep 'hello' file1.log
hello

[root@localhost playground]# grep -A 1 'hello' file1.log
hello
i am sahil

[root@localhost playground]# grep -A 2 'hello' file1.log
hello
i am sahil
i am software engineer

Getting n lines before each match

[root@localhost playground]# grep 'i am sahil' file1.log
i am sahil

[root@localhost playground]# grep -B 1 'i am sahil' file1.log
hello
i am sahil

[root@localhost playground]# grep -B 2 'i am sahil' file1.log
hello
i am sahil

In the second case, even though we asked for two lines before the match, only one preceding line is printed because there is only one line before our pattern in the file.

Get n lines after and m lines before every match

[root@localhost playground]# grep -A 2 -B 1 'i am sahil' file1.log
hello
i am sahil
i am software engineer
Sahil is a software engineer

Search for a word in more than one file in the current directory

For this purpose, we will assume we also have a second file, "file2.log", in the same directory.

[root@localhost playground]# cat file2.log
hello
i am sahil
i am tech blogger
Sahil is a tech blogger
sahil is a tech blogger

Grep can be used to search in more than one file or within a directory

[root@localhost playground]# grep 'sahil' file1.log file2.log
file1.log:i am sahil
file1.log:sahil is a software engineer
file2.log:i am sahil
file2.log:sahil is a tech blogger

Grep for a word in all files in the current directory

[root@localhost playground]# grep 'sahil' *
file1.log:i am sahil
file1.log:sahil is a software engineer
file2.log:i am sahil
file2.log:sahil is a tech blogger

Check how many lines matched in each file

[root@localhost playground]# grep -c 'sahil' *
file1.log:2
file2.log:2
file.log:0

Note: the above output shows that we have a third file in the directory, "file.log", but it has no lines containing the word "sahil".

Grep using regular expression

Regular expressions are patterns used to match character combinations in strings

Suppose the content of the file is as follows:

[root@localhost playground]# cat file3.log
time taken by api is 1211 ms
time taken by api is 2000 ms
time taken by api is 3000 ms
time taken by api is 4000 ms
time taken by api is 50000 ms
time taken by api is 123 ms
time taken by api is 213 ms
time taken by api is 456 ms
time taken by api is 1000 ms

Now suppose we want to grep all the lines in which the time taken by an API is 1 second (1000 ms) or more; that means the value should have at least 4 digits. The grep command for this is as follows:

[root@localhost playground]# grep -P '[0-9]{4} ms' file3.log
time taken by api is 1211 ms
time taken by api is 2000 ms
time taken by api is 3000 ms
time taken by api is 4000 ms
time taken by api is 50000 ms
time taken by api is 1000 ms

If we want to match a 5-digit number:

[root@localhost playground]# grep -P '[0-9]{5} ms' file3.log
time taken by api is 50000 ms

Recursively search in a directory and its subdirectories

[root@localhost playground]# grep -R 'sahil' .
./dir1/file.log:i am sahil
./dir1/file.log:sahil is a software engineer
./file1.log:i am sahil
./file1.log:sahil is a software engineer
./file2.log:i am sahil
./file2.log:sahil is a tech blogger

All of the above are basic use cases of grep. You can mix grep's command options to handle more complex cases, and you can also chain different grep commands together using the pipe operator, as shown in the example below.
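For instance, to count, across the current directory and its subdirectories, the lines that mention "sahil" (case-insensitively) and also mention "engineer", the commands above can be chained together (a sketch; the exact count depends on your files):

[root@localhost playground]# grep -Ri 'sahil' . | grep -i 'engineer' | wc -l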

In future blogs we will explain some more complex use cases and show examples of how to achieve them using Linux commands, which can ease log debugging.

Stay Tuned . . .

See Original Posts at hello-worlds

and at medium

and at dzone


Author

Sahil Aggarwal
Senior Software Engineer 3/Tech Lead at Ameyo

I am a tech professional who has been working in the IT industry for 10 years. I have worked on many technologies like Java, databases, Linux, networking, security, and a lot more. I generally believe in taking an outside-in approach to any new product I am looking at. This approach has helped me a lot in debugging issues in new products, and in other systems, throughout my journey. I like to write tech blogs and I have my own personal blog sites; I also write on Medium and DZone. I am very adaptable, switching between being an Individual Contributor and a Leader/Manager. Apart from technology, I like eating fast food and junk food. I also like to listen to and write shayari.