For example, let’s say a friend of yours wants to use a service such as Remote Desktop to remotely control your computer at your home from their home. Remote Desktop Connection is a service built into Microsoft Windows that enables you to connect to another computer running Windows. Once you’re connected to the remote computer, you can use that computer’s programs and files just as if you were sitting in front of it.
Your friend is ready to connect to your computer, so he starts up Remote Desktop Connection and sends the request to your public IP address with a specific port number. A port is a logical connection that programs and services use to exchange information. Port numbers uniquely identify the programs and services running on a computer.
Remote Desktop uses port 3389, so a request with that port number will make its way through the Internet to your router. Once it arrives, your router needs to know where to forward the request for port 3389. Without any port forwarding configured, your friend will not be able to connect to your computer, because your router does not know what to do with the request.
This is where port forwarding comes in. By setting up port forwarding, we’re telling our router to forward any request with a specific port number (3389 in our case) to a specific computer. Usually this is done by logging in to the router’s configuration page, which you reach by typing the router’s internal IP address into a web browser. If you’re not sure what your router’s internal (private) IP address is, open a command prompt on a Windows computer, type ‘ipconfig’ and press Enter. In the output, the ‘Default Gateway’ is the internal IP address of your router.
It’s also important to know your computer’s IP address, which is also shown in the output of the ‘ipconfig’ command (typically as ‘IPv4 Address’). We’re going to enter that IP address in the port forwarding configuration page.
The configuration page will look different depending on which router you’re using, but whatever the router, you need to go to its port forwarding section. There you’ll typically enter a name for the application (‘Remote Desktop’ in our case). Then we forward the Remote Desktop port to the computer: we type in the port number, which happens to be 3389, and enter the IP address of the computer that we want our friend to access. Once we’re done, when our friend sends a request with port 3389, the router knows where to forward it.
In a network, the router is contacted with an IP address along with a port number. The router looks at the port number and forwards the request to the internal IP address that the port has been configured for. Ports are always associated with an IP address, and they are identified by a unique number. Whether you see the port number or not, one is always there, because the port tells the receiving system what the traffic is for: ports are associated not only with an IP address but also with an application or process, such as FTP, web pages, email and so on.
Port numbers range from 0 to 65535, but of those, a few are most common and used on a daily basis. For example, port 80 is used for web pages. Ports 20 and 21 are used for the File Transfer Protocol (FTP), and port 443 is used for bringing up secure web pages. In fact, there is a privileged category of ports called the well-known ports, which ranges from 0 to 1023.
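As a rough illustration of the mapping between well-known ports and services, Python’s standard library can query the operating system’s services database. This is just a sketch; the exact entries depend on your system (for example, /etc/services on Linux):

```python
import socket

# Look up the conventional TCP port for a few well-known services
# in the system services database.
for service in ("http", "ftp", "https"):
    port = socket.getservbyname(service, "tcp")
    print(f"{service} -> port {port}")
```

On a typical system this prints port 80 for http, 21 for ftp, and 443 for https, matching the well-known assignments above.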
Syslog is standardized in RFC 3164, and because it’s standardized, almost every device that you plug into a network these days supports syslog functionality. The content that’s being sent from the devices, however, is not standardized. The content coming from a firewall will look very different from the content coming from a server; those types of systems have their own definitions of the logs they send. Usually you’ll configure your syslog consolidation tool to understand and interpret the data properly, whether it’s coming from a firewall, a Windows server, or a Linux server.
Syslog uses UDP port 514 for message transport. This means that delivery of a message is not guaranteed, but since there’s a lot of syslog data being sent and received, using TCP for everything would add a ton of overhead. So keep in mind that a message could get lost and you would not get a warning about it.
Within syslog there are eight severity levels, and the idea is that you can flag different entries in your syslog based on how important they are. Depending on the system you’re using, it may use numbers or specific words, or it may make up its own words, so it depends. In general, the levels defined in RFC 3164 start from level 0 (zero) and they are: 0 Emergency, 1 Alert, 2 Critical, 3 Error, 4 Warning, 5 Notice, 6 Informational, and 7 Debug.
This is the kind of syslog data you can retrieve and collect in a central location, a collecting server. When you set it up, start simple. Don’t send syslog from every system at every level to one server; if you do, you won’t be able to determine what you actually care about. You’ll probably want levels 4 through 0, or maybe even just 3 through 0. Anything below that you probably won’t care about, because it can produce a huge amount of data to process. So pick and choose carefully.
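To make the severity levels concrete, here is a minimal sketch of sending a syslog message over UDP port 514 from Python. An RFC 3164 packet starts with a PRI value computed as facility × 8 + severity; the host, port and facility defaults below are assumptions for illustration:

```python
import socket

# RFC 3164 severity levels, 0 (most urgent) through 7 (least urgent).
SEVERITY = {"emerg": 0, "alert": 1, "crit": 2, "err": 3,
            "warning": 4, "notice": 5, "info": 6, "debug": 7}

def send_syslog(message, severity="info", facility=1,
                host="127.0.0.1", port=514):
    """Send a minimal RFC 3164-style message over UDP.

    The PRI field is facility * 8 + severity, wrapped in angle brackets."""
    pri = facility * 8 + SEVERITY[severity]
    packet = f"<{pri}>{message}".encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(packet, (host, port))
    return pri

# A user-level (facility 1) warning gets PRI 1 * 8 + 4 = 12.
```

Because this is UDP, the send succeeds even if no collector is listening, which is exactly the "message could get lost without warning" behavior described above.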
The key is to find a way to centralize all of these logs into a single database, or a single consolidated view. This gives you a number of benefits, one of which is a centralized data store for all of your logs. If you ever need to gather or access any information, or to run any queries on your log, you know you’ve got it all in one place and it’s archived and backed up.
Another capability is that everything can be correlated: you can view an entry in an authentication log that correlates to a flow of traffic through a firewall, which also correlates to somebody logging in and using an application on a server. Another nice capability is that, now that all of this information is in one place, you can create log reports, such as long-term trends. You can start to see changes throughout your network, things that you would never be able to see unless you had all of that data in one place.
This syslog consolidation server is going to need a lot of disk space, since you’ll be collecting logs from all of the different devices on the network. The more disk space you have, the further back in time you’ll be able to go and see exactly what was going on a month ago, three months ago, six months ago, or perhaps even longer. Generally this server will also have a lot of memory and CPU power, because you’ll usually connect to it to run reports and query log information as quickly as possible. Queries will go much faster if you have a lot of memory and CPU that you can dedicate to them and to the management of the log.
Syslog consolidation tools are more than just a gathering point. They generally have some advanced software associated with them that allows you to produce reports, create graphs, easily query the data, and generate alerts (for example, send out an email saying that something went down).
There are a bunch of monitoring systems out there that can handle this, like Kiwi Syslog, Zenoss, Nagios, and others.
When you write an email using an email client such as Microsoft Outlook and hit the Send button, the email travels from your computer to your email server using the SMTP protocol. That server is known as the SMTP server, and its address is what’s configured in your email client. For example, if you’re using Gmail, the SMTP server address would be smtp.gmail.com.
Your SMTP server then sends the message to the recipient’s email server, also using SMTP. The email will stay on the recipient’s email server until the recipient logs into their email account and downloads it using the POP3 or IMAP protocol.
SMTP uses the TCP protocol, which is a connection-oriented protocol. This means that it guarantees the delivery of the email. This is assuming that the destination email address is correct and still exists. If, for some reason, the email that you sent does not reach its destination (maybe you misspelled the email address or the email address no longer exists), you’ll get that familiar mail delivery error in your mailbox informing you that the email you sent failed.
Like the POP3 and IMAP protocols, SMTP is configured in your email client, in the outgoing server settings. This is also known as the SMTP server setting, and it tells your email client where to send the email.
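As a sketch, here is what sending a message over SMTP looks like with Python’s standard smtplib. The addresses, server name and credentials below are placeholders, and the actual send is commented out because it needs a real account:

```python
import smtplib
from email.message import EmailMessage

# Build the message (all addresses here are placeholders).
msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Hello"
msg.set_content("Sent via SMTP.")

# To actually send it, connect to your provider's SMTP server,
# e.g. smtp.gmail.com on port 587 with STARTTLS:
# with smtplib.SMTP("smtp.gmail.com", 587) as server:
#     server.starttls()
#     server.login("sender@example.com", "app-password")
#     server.send_message(msg)
```

The commented block mirrors the outgoing-server setting described above: the client hands the finished message to the configured SMTP server, which relays it on.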
POP3 stands for Post Office Protocol 3. POP3 is the simpler of the two protocols because the only thing that POP3 does is download the email from a mail server to your device. It only downloads what’s in your inbox folder, which is where your email is.
It doesn’t download any other folders or their contents, like your sent items, your drafts, or your deleted emails. It also doesn’t do any kind of synchronization. For example, let’s say that you have two computers configured to retrieve the same email account. By default, when you’re using POP3, the email is deleted from the mail server once it’s downloaded to a device; no copy is kept on the server. This means that when a new email arrives at the mail server, if one computer checks the server before the other, the first computer will receive the email and the others will not, because the email has already been downloaded and no copy remains on the server. However, most email clients have a setting you can check to leave a copy on the server. If you enable it, all of your devices can retrieve the email.
IMAP stands for Internet Message Access Protocol. IMAP is also used for retrieving email, but in a different way. IMAP allows you to view your email that’s on the server from multiple devices. The email is kept on the server and it caches local copies of the email on to all of your devices. It also synchronizes all of your folders and everything that’s in them. It syncs your inbox, sent items, deleted items, drafts and any custom folders that you may have created.
When you view your email on your computer, tablet, or smartphone, your email would be exactly the same because everything is synchronized. For example, if you have two computers configured for access to the same email account using IMAP, all the email and folders would be exactly the same between these two computers. If you delete an email on one computer, the email will be deleted on the mail server and then be deleted on the other computer also.
If any new emails come in, the email first goes to the mail server, and then, as the configured computers sync with the mail server, the new email appears on all of them. If you create a custom folder with a custom name, IMAP will add the folder and all of its contents to the server and sync them to any other configured computer or device, like a tablet or smartphone.
In short, POP3 only downloads the contents of your Inbox folder and doesn’t do any email or folder syncing, while IMAP syncs everything with all of your devices. Both POP3 and IMAP are configured in your email client in the incoming server settings. Which protocol to use depends on your situation.
| Protocol | Advantages | Disadvantages |
|----------|------------|---------------|
| POP3 | POP3 is good if you’re only going to retrieve your email from one device. Since the email is downloaded to your device, you can view it even without an internet connection; you only need a connection when receiving new email or sending email. POP3 also saves storage space on the mail server, because emails are deleted from the server once they are downloaded. | Since the emails are removed from the server and downloaded to your device, you need a plan to back up your emails in case your device crashes or is lost. Your device also has a higher chance of being infected with viruses, since the emails are fully downloaded. |
| IMAP | IMAP is good when you’re going to retrieve your email from multiple devices. All emails are stored on the mail server, so whether you access your email using an email client or webmail, you’ll see everything, including your sent items, drafts, deleted items, and any custom folders. All email and folders are synchronized, so every device sees exactly the same thing. | You generally cannot view your emails without an Internet connection, because IMAP only caches local copies of the email on your device rather than downloading it. However, some email clients give you an option to have IMAP download the emails to your device instead of just caching them. |
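For reference, Python’s standard library exposes the conventional ports for both protocols, and the commented-out calls below sketch what retrieval looks like with each; the server name and credentials are placeholders:

```python
import poplib
import imaplib

# Conventional ports, as defined by the standard library:
print(poplib.POP3_PORT, poplib.POP3_SSL_PORT)      # 110, and 995 over TLS
print(imaplib.IMAP4_PORT, imaplib.IMAP4_SSL_PORT)  # 143, and 993 over TLS

# POP3 simply downloads what is in the inbox:
# conn = poplib.POP3_SSL("mail.example.com")
# conn.user("user@example.com")
# conn.pass_("password")
# count, size = conn.stat()   # number and total size of waiting messages
# conn.quit()

# IMAP keeps mail on the server and exposes folders to select:
# imap = imaplib.IMAP4_SSL("mail.example.com")
# imap.login("user@example.com", "password")
# imap.select("INBOX")        # could also be "Sent", "Drafts", ...
# imap.logout()
```

The contrast in the commented code mirrors the table: POP3 has no notion of folders beyond the inbox, while IMAP’s `select` call works on any folder kept on the server.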
Supercomputers are great at number crunching to complete extremely complex tasks like weather forecasting, medical research and cryptanalysis. With mainframes the focus is more on throughput and reliability. What exactly does that mean? Compared to a supercomputer, mainframes have a lot more inputs and outputs (I/O) because they have to process tons of smaller, simpler transactions extremely quickly.
Even though there is a misconception that mainframes are relics of an old computing era, even today most banks use mainframes to process the millions of card swipes and account transfers that occur daily. The dominant player in the mainframe industry has long been IBM.
Building a mainframe isn’t just a matter of installing a ton of processors in a box, plugging in lots of Ethernet cables and calling it a day. Mainframes use special CPUs, many of which are physically much larger than even big desktop chips. They also have additional processors called SAPs (System Assist Processors) that do almost nothing but move data around as quickly as possible, like traffic controllers.
On a modern mainframe, like IBM’s top-end models, each individual I/O card (of which there can be 160) has its own processing unit, up to two per channel on the dual-channel cards, meaning you could have over 600 processor cores just for I/O, and that’s not even counting the SAPs.
The reason mainframes are designed to support this much I/O is to ensure that they stay reliable. Many of the subsystems inside a mainframe have redundancies built in. This means they can be deployed in situations where no downtime is acceptable, such as credit-card companies and retailers, as well as airline ticketing systems. In fact, a common mainframe operating system, IBM’s proprietary z/TPF, was originally developed as transaction processing software for airlines.
This high level of redundancy means that it’s common for mainframes to be built in such a way that an administrator can slide out one of the drawers that houses components and simply start swapping them out. Whatever that drawer was working on is automatically transferred to the rest of the mainframe, making it easy to make hardware changes without any downtime. High-end mainframes can run literally thousands of virtual servers, so taking down the mainframe could mean a lot of trouble for anyone running services on it.
Mainframes and their operating systems can cost hundreds of thousands, if not millions, of dollars. They also aren’t designed to run games or for high-end floating-point performance, which is important for rendering graphics. Even so, mainframes are still in the background, powering lots of things you do every day.
When choosing your PC’s second layer of protection, you will need to:
1. Search for all-inclusive protection. Even if it’s your second antivirus software, it’s best to get one that offers more than just virus protection. It should also offer protection against ransomware and cyber crime.
2. Look for real-time automation. The best antivirus software is software that can do the job without having to be prompted. That means updating automatically to get the latest improvements and scanning in real time to detect and contain threats.
3. Find the extras. A lot of antivirus software comes bundled with additional tools, like password wallets and parental control options. Linsey Knerl from HP recommends getting an anti-virus with a VPN so that you can access a secure network both at home and when on the move. Look for one that has these tools included, so you can maximize the software’s use and hopefully end up paying less in the long run.
4. Read the reviews. Reviews can guide you in making your choice. So, leverage them as best you can. Chances are they’ll contain all the information you’ll need to shortlist your choices to the best two or three.
5. Check for compatibility. Finally, before you make a choice, take note that some antivirus software has system requirements. So, check first if your computer meets the requirements of your desired antivirus.
Given the above considerations, here are three antivirus software to consider:
If you’re looking for something free, Panda Antivirus may be appealing to you, as it is great for users on a tight budget. Not only is it free, but it also offers an excellent set of cybersecurity features such as real-time internet security, USB protection, and a free PC recovery system.
Free like Panda, Avast is considered one of the best antivirus options for users who frequently visit sites that may harbor malware. It also features an expansive antivirus suite, which means it can offer comprehensive protection for your PC. Avast is even programmed to suspend non-critical functions and dismiss pop-ups, so that you can game or work without worrying about lag or annoying ads.
Bitdefender Antivirus Plus is Express’ antivirus of choice for Windows 10 as it scored 100% in 17 of 20 reports. This means that it offers exceptional protection against numerous viruses and malware. That’s thanks to Bitdefender’s impressive core antivirus engine that can detect and block viruses and malicious links. It also has a rescue mode, social media and online payment protection, and password management.
Don’t make choosing a chore
Don’t overcomplicate the process of choosing a second antivirus software. Follow the tips discussed above, consider the options given, and then you can decide on, download, and install the security software you need.
The “ipconfig” command displays current information about your network, such as your IP and MAC address and the IP address of your router. It can also display information about your DHCP and DNS servers. Let’s look at the basic output of “ipconfig”:
Depending on your network connection type, you may see different output for different connections. For example, if you are connected to the network using Ethernet (your network cable is plugged into the RJ45 jack), you’ll see the IP information in the “Ethernet adapter” section. In our case we are connected over Wi-Fi (wireless), so we see our information there. The local IP (IPv4) of our computer is 192.168.8.103. We also see the Subnet Mask (255.255.255.0), which we can use to find the network address, and the Default Gateway IP (192.168.8.1), which is our router.
However, we don’t see DHCP and DNS information. To see detailed IP information we can use the “/all” switch together with “ipconfig” command (ipconfig /all).
This time there’s much more information present. The IP address, the Subnet Mask and the Default Gateway address are still here, but now you can also see your DHCP server and DNS server. In our case the DHCP server’s IP address is the same as the router’s address, which means the DHCP server currently resides on the router. The DNS server address is also the same as the router’s, which means the router is acting as the DNS server too.
Information gathering is part of troubleshooting. For example, if you’re trying to troubleshoot a DNS problem, you can first run the “ipconfig /all” command to find out where the DNS server is.
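If you want to discover the same local IPv4 address programmatically, one common trick is a UDP “connect”, which makes the operating system pick the outgoing interface without actually sending any packets. This is a sketch; the 8.8.8.8 target is just an arbitrary routable address, not a server we contact:

```python
import socket

def local_ip():
    """Return the IPv4 address the OS would use for outbound traffic.

    Connecting a UDP socket sends no packets; it only selects a route,
    so this works even when the target address is unreachable."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # arbitrary routable address
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"          # no route at all (offline host)
    finally:
        s.close()

print(local_ip())
```

On the example machine above this would print 192.168.8.103, the same IPv4 address that “ipconfig” reports.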
The “ping” command allows you to send a signal to another device; if that device is active, it will send a response back to the sender. Ping is based on ICMP (Internet Control Message Protocol) and uses what is called an “echo request”. So, when you ping a device you send out an echo request, and if the device you pinged is active or online, you get back an echo reply.
For example, if your local computer has Internet connectivity issues, you can try to ping your router. If you get no response, then you know that the router is what is giving you problems. Let’s ping our router IP, which is 192.168.8.1 in our example, and analyze the printout.
What happens is we send out four packets to the destination, and the destination responds with four packets of its own. We sent out 32 bytes of data and got 32 bytes back, in 9 milliseconds on average. From this we see that the device is alive, and we can judge the connection stability (4 of 4 packets received). Let’s ping www.google.com and see what happens.
We get a similar printout; however, since we used a domain name, we now also see the resolved IP address of www.google.com. We again sent 32 bytes of data, but because Google’s server is farther away, the average round trip took 82 milliseconds. We sent and received all 4 packets, so the connection was stable. Finally, let’s ping a device that doesn’t exist.
We get a “Request timed out” response. You would get the same kind of result if a device wasn’t actually working. As you can see in the summary, we sent four packets and received zero, so the loss was one hundred percent. That means the system you’re trying to reach is not responding on the network.
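Sending real ICMP echo requests from your own code needs raw sockets (and administrator privileges), so a common unprivileged alternative is a TCP connection probe. This sketch reports whether a host accepts connections on a given port, rather than whether it answers ping:

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Probe reachability by attempting a TCP connection.

    Unlike ping, this needs a service listening on the chosen port,
    but it requires no special privileges."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, ...
        return False
```

For example, `tcp_reachable("192.168.8.1", 80)` would check whether the router’s web configuration page is accepting connections. Keep in mind a `False` result can also mean a firewall dropped the probe, just as with ping.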
This command lets you see all the steps a packet takes to reach its destination. For example, if we send a packet to www.google.com, it actually goes through a number of routers along the way. The packet first goes to your own router, and then through all kinds of other routers before it reaches Google’s servers. We can also use the term “hops” instead of routers. Let’s run the command and see what kind of results we get.
We have traced the route to www.utilizewindows.com, and we’re getting a list of each of the routers that we’re hitting. At the end we see the IP address for utilizewindows.com server so the trace is complete. In our case we have 13 hops before we actually reached the intended server. The first router that we hit was our own router (we can tell by the IP address 192.168.8.1).
So what is the significance of this? Let’s say your home network is perfectly fine, but there is a problem with some router in between, for example your ISP’s router. If there are any problems, the trace will try to indicate what they are, with messages like “Request timed out”, “Destination unreachable” or similar. However, such messages don’t necessarily mean that there is a real problem with a device. There are several reasons why a “Request timed out” message may appear in a trace route; typically it’s because a device doesn’t respond to ICMP or traceroute requests, or because a firewall or other security device is blocking the request.
The nslookup command fetches the DNS records for a given domain name or IP address. Remember, IP addresses and domain names are stored in DNS servers, so the nslookup command lets you query those DNS records to gather information.
Let’s say you wanted to know the IP address of www.utilizewindows.com. You could simply type in nslookup and type in www.utilizewindows.com. Let’s analyze this printout.
The first two lines show which DNS server was used to get these results. Our DNS server happens to reside on our router, so our router is also our DNS server. The answer we got is the IP address of the www.utilizewindows.com server.
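The same forward lookup can be done from code. This Python sketch mirrors what nslookup does for a hostname; resolving a public name like www.utilizewindows.com requires Internet access, so the runnable example uses localhost instead:

```python
import socket

def resolve(name):
    """Forward DNS/hosts lookup for IPv4, similar to `nslookup <name>`."""
    hostname, aliases, addresses = socket.gethostbyname_ex(name)
    return addresses

# Requires Internet access:
# print(resolve("www.utilizewindows.com"))

print(resolve("localhost"))  # typically ['127.0.0.1']
```

Note that this uses whatever resolver the operating system is configured with, which on a home network is often the router, exactly as in the nslookup output above.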
The thing most people forget is that all computers do is follow instructions, repeating one instruction after another. Computers therefore need software to do anything, even just to boot up. When we power on a computer or mobile phone, we end up at a user interface (a desktop or launcher screen) where we can see the apps on our device. But there’s a lot going on underneath. For example, even if we don’t open any application ourselves, there is always a bunch of services running in the background. Below all of that is the kernel, the core of the whole system.
Every multi-tasking computer operating system uses a kernel, including Windows, macOS, iOS, Android and all the Linux distributions. The Windows kernel is commonly referred to as the NT kernel, macOS and iOS call theirs the Darwin kernel, and Android uses the Linux kernel. These aren’t the only kernels available; there is a whole plethora of kernels out there, some proprietary and others open source.
The kernel manages CPU and memory resources, system processes and device drivers. Looking at the system as a whole, the kernel is the lowest layer above the hardware. For example, when you start an app, it is the kernel that starts the process for that app and enables it to be loaded. If the app needs memory, CPU or networking resources, it is the kernel that allocates them. Eventually, when the app closes, all the resources it used are tidied up by the kernel.
Kernels can be quite complicated, since they’re doing a lot of essential work, and there are different ways in which kernels can be designed. The two main designs known today are the monolithic kernel and the microkernel.
Linux is a monolithic kernel, which means that all of the kernel’s work is done inside one program that occupies one memory space. In the alternative microkernel design, the kernel itself occupies a very small memory space, and other things like device drivers, networking and file system drivers run as user-level programs; the kernel communicates with them by passing messages. The idea behind the microkernel design is that if one of those separate programs crashes, the kernel itself won’t automatically crash. A monolithic kernel can be quite big and complicated: there are around 15 million lines of code in the Linux kernel. Not all of that code is used at once, because there’s a whole range of different device drivers in there; in fact, about 70% of those 15 million lines is just device drivers.
When you build the Linux kernel, you say which bits you want and which you don’t want. Also, you do more than just say what you want – you can also tweak the way the operating system works. That makes sense because if you’re running Linux on a mainframe you might want it to behave differently than if you’re running it on a desktop, mobile, or a wearable device.
This is where the idea of custom kernels comes in: because the Linux source is open (under the GPL), it’s very possible to build your own kernel. There is a whole community of people who build custom kernels for different devices (smartphones being the most popular). Kernels are highly configurable, so a custom kernel may have extra features built in compared to the standard stock kernel. By tweaking it (different CPU governors, different I/O schedulers, different priorities for different things) you might get a better kernel than the stock one. Companies like Samsung, Google, Apple, LG, Sony and others spend millions of dollars developing smartphones, and if they could get better battery life or better performance just by tweaking something in software, they certainly would. Of course, better performance can mean less battery life, and better battery life can mean running the processor at a slower speed, so companies try to find the middle ground where you get both good battery life and good performance.
The kernel is the lowest level of any operating system, sitting just above the hardware. It is responsible for CPU resources, memory resources, drivers and networking, and it can tweak parameters such as how process scheduling occurs, how I/O scheduling occurs, and how the CPU is controlled.
When using standard HTTP, all the information is sent in clear text. When we say all information, we mean all the information that is exchanged between your computer and that web server, which includes any text that you type on that website. Since that information is transferred over the public Internet in clear text, it’s vulnerable to eavesdropping.
Normally this is not a big deal if you are just browsing regular websites and no sensitive data, such as passwords or credit card information, is being used. However, if you were to type in personal, sensitive data like your name, address, phone number, passwords or credit card information, that data is vulnerable, because a hacker can listen in as it is being transferred and steal your information.
Since sending sensitive data over HTTP (in clear text) represents a big security risk, HTTPS was developed. HTTPS stands for Hypertext Transfer Protocol Secure, and it is HTTP with a security feature. Secure HTTP encrypts the data that is being transferred by HTTP. It ensures that all the data transferred over the Internet between computers and servers is secure by making the data unreadable to eavesdroppers. It does this by using encryption algorithms to scramble the data in transit.
For example, if you go to a website that requires you to enter personal information such as passwords or credit card numbers, you will notice that an ‘s’ is added to the ‘http’ in the web address, as in https://www.utilizewindows.com. The ‘s’ in ‘https’ indicates that you are now using Secure HTTP and have entered a secure website where sensitive data will be protected. In addition, most web browsers will show a padlock symbol in the address bar to indicate that Secure HTTP is being used.
By using Secure HTTP, all the data, including anything that you type, is no longer sent in clear text. Instead, it’s scrambled into an unreadable form as it travels across the internet. So if a hacker were to try to steal your information, he would get a bunch of meaningless data, because the data is encrypted and it would be practically impossible for him to crack the encryption and unscramble it.
Secure HTTP protects the data by using one of two protocols, and one of these protocols is SSL. SSL or Secure Sockets Layer is a protocol that’s used to ensure security on the Internet. It uses Public Key Encryption to secure data.
When a computer connects to a website that’s using SSL, the computer’s web browser asks the website to identify itself. The web server does this by sending a copy of its SSL certificate to your computer. An SSL certificate is a small digital certificate used to authenticate the identity of a website; basically, it lets your computer know that the website you’re visiting is trustworthy. The browser checks the certificate to make sure it can be trusted. If it can, the browser sends a message to the web server, and the web server responds with an acknowledgment so that the SSL session can proceed. Once these steps are complete, encrypted data can be exchanged between your computer and the web server.
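In Python, the handshake and certificate check described above are handled by the ssl module. The default context already enables certificate verification and hostname checking, and the commented-out part (which assumes a reachable hostname) shows how you would fetch a live site’s certificate:

```python
import socket
import ssl

# The default context verifies the server certificate against trusted CAs
# and checks that the certificate matches the hostname we asked for.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True

# Fetching a site's certificate subject (requires Internet access):
# with socket.create_connection(("www.utilizewindows.com", 443), timeout=5) as sock:
#     with ctx.wrap_socket(sock, server_hostname="www.utilizewindows.com") as tls:
#         print(tls.getpeercert()["subject"])
```

If the certificate cannot be verified (expired, self-signed, or the wrong hostname), `wrap_socket` raises an error, which is the programmatic equivalent of the browser warning you about an untrusted site.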
The other protocol used to secure HTTP is called TLS. TLS, or Transport Layer Security, is the current industry-standard cryptographic protocol. It is the successor to SSL and is based on the same specifications; in fact, all versions of SSL have since been deprecated, and modern browsers and servers negotiate TLS instead. Like SSL, it authenticates the server (and optionally the client) and encrypts the data.
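Because the old SSL versions are deprecated, applications commonly pin a modern floor on the protocol version. With Python's `ssl` module this is a one-line setting:

```python
import ssl

ctx = ssl.create_default_context()

# Refuse deprecated SSL/early-TLS versions; require TLS 1.2 or newer.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)  # True

# Whether this Python build also supports the newer TLS 1.3:
print(ssl.HAS_TLSv1_3)
```

Any connection made through this context will fail the handshake rather than silently fall back to a weaker, broken protocol version.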
It’s also important to point out that many websites now use Secure HTTP by default, regardless of whether sensitive data is going to be exchanged. One big reason is Google: Google now flags websites as “not secure” if they are not protected with SSL/TLS, and penalizes unprotected websites in its search rankings. That’s why, if you go to any major website today, you’ll notice that Secure HTTP is being used rather than standard HTTP. The good news is that you can get certificates for free from a service like Let’s Encrypt.
There’s a bit of a misconception that PCs are inherently virus-prone. In reality, there are simply so many PCs in the world that hackers tend to target them rather than waste time on operating systems with a tiny share of the market, like macOS and Linux. Fortunately, there are many powerful tools out there you can use to protect yourself. Windows comes with a pretty solid built-in firewall and antivirus setup.
However, you should take it a step further and install dedicated antivirus software. You can set it to run in the background during your normal usage, and it will scan not only for viruses but also for malware and other nasty things trying to steal your data or slow down your new CPU.
The vast majority of computer hacks occur over the internet. For this reason, you should protect your internet connection as well. A VPN encrypts and anonymizes your connection, typically using AES-256, a widely used industry standard. With it, intercepting your traffic becomes impractical, even on unprotected public WiFi. And since your real IP is hidden, advertisers and platforms like Facebook and Google have a much harder time tracking your browsing habits to sell your data to the highest bidder.
Cloud computing has become an integral part of the PC experience. It touches all aspects of our computer use. Whether we are uploading photos to iCloud, sharing business documents on Google Drive, or sending files via Dropbox, it’s everywhere. Unfortunately, these services aren’t completely secure, as hackers have broken into users’ accounts on many occasions.
With file encryption software like NordLocker, you can protect not only the files on your computer but also anything you upload to the cloud. It’s easy to use as well: all you do is install it and select which files to encrypt. Additional features let you choose who can view or edit a file and set temporary passcodes.
Built-in Windows apps have gotten significantly better over the last few years. Nonetheless, many of us are still wary after the days of Internet Explorer and other default Windows apps that simply couldn’t get the job done. Whether you choose to go with Edge or Chrome, set your preferred programs in the “Default apps” settings.
This is particularly important for files that Windows may not be able to open without an additional driver or download. For example, Windows Media Player can’t play every media format, so you might want to go with VLC instead. Choose your favorites and go from there.
Password managers are revolutionizing the way we secure our online accounts. The average person uses over fifteen login credentials across various platforms and websites. Yet, most of us are guilty of reusing the same password (or a very similar one) across all of them. Password managers create a unique, extremely strong password for each of your accounts. All you have to do is log in to the password manager of your choice, and it will log you in to all your other accounts from there.
What’s more, should a cyber-criminal try to gain access to one of your accounts, you’ll get a notification right away so you can shut it down immediately. Some password managers are even going a step further and using biometric security features like facial recognition and fingerprint IDs to create an even more secure experience.
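The core of what a password manager does when it creates credentials can be sketched with Python's `secrets` module, which draws from a cryptographically secure random source rather than a guessable one:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a hard-to-guess password from a cryptographically secure
    random source -- roughly what a password manager does per account."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A distinct password for every site, instead of one reused everywhere:
for site in ("email", "banking", "shopping"):
    print(site, generate_password())
```

Note the use of `secrets.choice` rather than the `random` module: `random` is designed for simulations and its output can be predicted, which is exactly what you don't want in a password.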
Even if you are first in line at the Lenovo or Dell store for a new PC, chances are Windows will already have an update waiting for you. As frustrating as this is, these updates exist for your own security. Hackers are constantly trying to exploit flaws in the Windows OS to gain access to people’s data. Microsoft is quick to find these flaws and usually releases a security patch right away. But that only helps if users are ready to install the updates.
While it’s not exactly fun to watch an update screen as soon as you unwrap your new PC, you’ll be happy to not have a cybersecurity breach in the future. The same goes for all your other apps as well. Make sure everything is always up to date. You can set Windows to run automatic updates in the background at times you don’t use your computer, like overnight or during lunch hour, so they never get in your way.
Unboxing a new PC is a thrilling experience for all computer users alike. However, if you want to make sure your new computer is truly ready to use, then make sure you follow these essential steps. With Windows, everything comes back to security. You don’t want hackers installing a piece of malware that hogs your CPU performance and internet bandwidth to mine cryptocurrency, do you?
So, install antivirus software and a VPN to protect your internet connection. Secure your files both online and offline with a file encryption service like NordLocker. Get everything calibrated by setting your default apps. Finally, make sure the Windows OS and your apps are always up to date. By following these steps, you’ll ensure your PC runs smoothly for many years to come.