Pfsense on VirtualBox


Shellshock (the "Bash bug"): a serious new vulnerability affecting Linux, Unix, and OS X systems

A new vulnerability has been found that potentially affects most versions of the Linux and Unix operating systems, in addition to Mac OS X (which is based around Unix). Known as the “Bash Bug” or “ShellShock,” the GNU Bash Remote Code Execution Vulnerability (CVE-2014-6271) could allow an attacker to gain control over a targeted computer if exploited successfully.

The vulnerability affects Bash, a common component known as a shell that appears in many versions of Linux and Unix. Bash acts as a command language interpreter. In other words, it allows the user to type commands into a simple text-based window, which the operating system will then run.

Bash can also be used to run commands passed to it by applications and it is this feature that the vulnerability affects. One type of command that can be sent to Bash allows environment variables to be set. Environment variables are dynamic, named values that affect the way processes are run on a computer. The vulnerability lies in the fact that an attacker can tack-on malicious code to the environment variable, which will run once the variable is received.

Symantec regards this vulnerability as critical, since Bash is widely used in Linux and Unix operating systems running on Internet-connected computers, such as Web servers. Although specific conditions need to be in place for the bug to be exploited, successful exploitation could enable remote code execution. This could not only allow an attacker to steal data from a compromised computer, but enable the attacker to gain control over the computer and potentially provide them with access to other computers on the affected network.


Has it been exploited yet?
There are limited reports of the vulnerability being used by attackers in the wild. Proof-of-concept scripts have already been developed by security researchers. In addition to this, a module has been created for the Metasploit Framework, which is used for penetration testing.

Once the vulnerability was made public, it was only a matter of time before attackers attempted to find and exploit unpatched computers.

How can it be exploited?
While the vulnerability potentially affects any computer running Bash, it can only be exploited by a remote attacker in certain circumstances. For a successful attack to occur, an attacker needs to force an application to send a malicious environment variable to Bash.

The most likely route of attack is through Web servers utilizing CGI (Common Gateway Interface), the widely-used system for generating dynamic Web content. An attacker can potentially use CGI to send a malformed environment variable to a vulnerable Web server. Because the server uses Bash to interpret the variable, it will also run any malicious command tacked-on to it.

Figure 1. How a malicious command can be tacked-on to the end of a legitimate environment variable. Bash will run the malicious command first.
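The widely circulated way to check whether a local copy of Bash is affected is to pass a crafted environment variable to a child shell. This runs only against your own machine and attacks nothing:

```shell
# Test a local bash for CVE-2014-6271. A vulnerable bash executes the
# command tacked onto the function definition and prints "vulnerable";
# a patched bash prints only "this is a test".
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```

On a patched system bash may also print a warning about ignoring the function definition attempt; that warning is harmless.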

The consequences of an attacker successfully exploiting this vulnerability on a Web server are serious. For example, attackers may be able to dump password files or download malware onto compromised computers. Once inside the victim’s firewall, the attackers could then compromise and infect other computers on the network.

Aside from Web servers, other vulnerable devices include Linux-based routers that have a Web interface that uses CGI. In the same manner as an attack against a Web server, it may be possible to use CGI to exploit the vulnerability and send a malicious command to the router.

Computers running Mac OS X are also potentially vulnerable until Apple releases a patch for the vulnerability. Again, attackers would need to find a way to pass malformed commands to Bash on the targeted Mac. The most likely avenue of attack against OS X would probably be through Secure Shell (SSH), a secure communications protocol. However, it appears that the attacker would need to have valid SSH credentials to perform the attack. In other words, they would already have to be logged in to an SSH session.

Internet of Things (IoT) and embedded devices such as routers may be vulnerable if they’re running Bash. However, many newer devices run a set of tools called BusyBox which offers an alternative to Bash. Devices running BusyBox are not vulnerable to the Bash Bug.

For website owners and businesses
Businesses, in particular website owners, are most at risk from this bug and should be aware that its exploitation may allow access to their data and provide attackers with a foothold on their network. Accordingly, it is of critical importance to apply any available patches immediately.

Linux vendors have issued security advisories for the newly discovered vulnerability including patching information.

*Red Hat has updated its advisory to include fixes for a number of remaining issues.

If a patch is unavailable for a specific distribution of Linux or Unix, it is recommended that users switch to an alternative shell until one becomes available.

For consumers
Consumers are advised to apply patches to routers and any other web-enabled devices as and when they become available from vendors. Users of Apple’s Mac OS X should be aware that the operating system currently ships with a vulnerable version of Bash. Mac users should apply any patches for OS X when they become available.

Symantec Protection
Symantec has created an Intrusion Prevention signature for protection against this vulnerability.

Symantec will continue to investigate this vulnerability and provide more details as they become available.

Recommended size in percentage for each partition (Ubuntu / Linux)

I have seen that most dedicated hosting companies' servers have multiple partitions for various folders. I have tried to follow some guidelines of my own on VirtualBox. I always make the swap space double the RAM. Let's say:

TS = total Size
SS = Swap Size
MS = Main Size

MS = TS - SS

The percentages below are relative to MS.

/         20%
/boot     100M
/var      25%
/home     24%
/usr      10%
/tmp      200M
/opt      10%

It looks like there are still some key places where I should give more space, and others where I should reduce it, for example /usr and /var.
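For concreteness, the scheme above can be worked out with plain shell arithmetic. The 100 GB disk and 4 GB RAM figures here are just example assumptions, not part of the question:

```shell
# Hypothetical example: apply the percentages above to a 100 GB disk
# with 4 GB of RAM (both values are assumptions for illustration).
TS=100                  # total size, GB
RAM=4
SS=$((RAM * 2))         # swap = double the RAM
MS=$((TS - SS))         # main size = total - swap
echo "swap:  ${SS} GB"
echo "/:     $((MS * 20 / 100)) GB"
echo "/var:  $((MS * 25 / 100)) GB"
echo "/home: $((MS * 24 / 100)) GB"
echo "/usr:  $((MS * 10 / 100)) GB"
echo "/opt:  $((MS * 10 / 100)) GB"
```

With these inputs MS comes out to 92 GB, so /var, for instance, gets 23 GB.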

9.15.5. Recommended Partitioning Scheme x86, AMD64, and Intel 64 systems

We recommend that you create the following partitions for x86, AMD64, and Intel 64 systems:
  • A swap partition
  • A /boot partition
  • A / partition
  • A home partition
  • A swap partition (at least 256 MB) — Swap partitions support virtual memory: data is written to a swap partition when there is not enough RAM to store the data your system is processing.
    In years past, the recommended amount of swap space increased linearly with the amount of RAM in the system. Modern systems often include hundreds of gigabytes of RAM, however. As a consequence, recommended swap space is considered a function of system memory workload, not system memory.
    The following table provides the recommended size of a swap partition depending on the amount of RAM in your system and whether you want sufficient memory for your system to hibernate. The recommended swap partition size is established automatically during installation. To allow for hibernation, however, you will need to edit the swap space in the custom partitioning stage.

    Table 9.2. Recommended System Swap Space

    Amount of RAM in the system   Recommended swap space        Recommended swap space if allowing for hibernation
    ⩽ 2GB                         2 times the amount of RAM     3 times the amount of RAM
    > 2GB – 8GB                   Equal to the amount of RAM    2 times the amount of RAM
    > 8GB – 64GB                  0.5 times the amount of RAM   1.5 times the amount of RAM
    > 64GB                        4GB of swap space             No extra space needed

    At the border between each range listed above (for example, a system with 2GB, 8GB, or 64GB of system RAM), discretion can be exercised with regard to chosen swap space and hibernation support. If your system resources allow for it, increasing the swap space may lead to better performance.
    Note that distributing swap space over multiple storage devices — particularly on systems with fast drives, controllers and interfaces — also improves swap space performance.


    Swap space size recommendations issued for Red Hat Enterprise Linux 6.0, 6.1, and 6.2 differed from the current recommendations, which were first issued with the release of Red Hat Enterprise Linux 6.3 in June 2012 and did not account for hibernation space. Automatic installations of these earlier versions of Red Hat Enterprise Linux 6 still generate a swap space in line with these superseded recommendations. However, manually selecting a swap space size in line with the newer recommendations issued for Red Hat Enterprise Linux 6.3 is advisable for optimal performance.
  • A /boot/ partition (250 MB)

    The partition mounted on /boot/ contains the operating system kernel (which allows your system to boot Red Hat Enterprise Linux), along with files used during the bootstrap process. For most users, a 250 MB boot partition is sufficient.

    Important — Supported file systems

    The GRUB bootloader in Red Hat Enterprise Linux 6.5 supports only the ext2, ext3, and ext4 (recommended) file systems. You cannot use any other file system for /boot, such as Btrfs, XFS, or VFAT.


    Note that normally the /boot partition is created automatically by the installer. However, if the / (root) partition is larger than 2 TB and (U)EFI is used for booting, you need to create a separate /boot partition that is smaller than 2 TB to boot the machine successfully.


    If your hard drive is more than 1024 cylinders (and your system was manufactured more than two years ago), you may need to create a /boot/ partition if you want the / (root) partition to use all of the remaining space on your hard drive.


    If you have a RAID card, be aware that some BIOS types do not support booting from the RAID card. In cases such as these, the /boot/ partition must be created on a partition outside of the RAID array, such as on a separate hard drive.
  • A root partition (3.0 GB – 5.0 GB) — this is where “/” (the root directory) is located. In this setup, all files (except those stored in /boot) are on the root partition.
    A 3.0 GB partition allows you to install a minimal installation, while a 5.0 GB root partition lets you perform a full installation, choosing all package groups.

    Root and /root

    The / (or root) partition is the top of the directory structure. The /root directory (sometimes pronounced “slash-root”) is the home directory of the user account for system administration.
  • A home partition (at least 100 MB)

    To store user data separately from system data, create a dedicated partition within a volume group for the /home directory. This will enable you to upgrade or reinstall Red Hat Enterprise Linux without erasing user data files.
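The swap recommendations in Table 9.2 (the non-hibernation column) can be sketched as a small shell helper. This is just an illustration of the table's ranges, not a Red Hat tool:

```shell
# Sketch: recommended swap (GB) for a given amount of RAM (GB),
# following Table 9.2's "Recommended swap space" column.
recommended_swap() {
    ram=$1
    if   [ "$ram" -le 2 ];  then echo $((ram * 2))   # <= 2GB: 2x RAM
    elif [ "$ram" -le 8 ];  then echo "$ram"         # 2-8GB: equal to RAM
    elif [ "$ram" -le 64 ]; then echo $((ram / 2))   # 8-64GB: 0.5x RAM
    else                         echo 4              # > 64GB: 4GB
    fi
}
recommended_swap 16    # prints 8
```

At the range borders (2, 8, 64 GB) the table itself allows discretion, so the boundary choices here are one reasonable reading.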

Many systems have more partitions than the minimum listed above. Choose partitions based on your particular system needs. Refer to Section, “Advice on Partitions” for more information.
If you create many partitions instead of one large / partition, upgrades become easier. Refer to the description of the Edit option in Section 9.15, “ Creating a Custom Layout or Modifying the Default Layout ” for more information.
The following table summarizes minimum partition sizes for the partitions containing the listed directories. You do not have to make a separate partition for each of these directories. For instance, if the partition containing /foo must be at least 500 MB, and you do not make a separate /foo partition, then the / (root) partition must be at least 500 MB.

Table 9.3. Minimum partition sizes

Directory   Minimum size
/           250 MB
/usr        250 MB, but avoid placing this on a separate partition
/tmp        50 MB
/var        384 MB
/home       100 MB
/boot       250 MB

Leave Excess Capacity Unallocated

Only assign storage capacity to those partitions you require immediately. You may allocate free space at any time, to meet needs as they occur. To learn about a more flexible method for storage management, refer to Appendix D, Understanding LVM.
If you are not sure how best to configure the partitions for your computer, accept the default partition layout.

Advice on Partitions
Optimal partition setup depends on the usage for the Linux system in question. The following tips may help you decide how to allocate your disk space.
  • Consider encrypting any partitions that might contain sensitive data. Encryption prevents unauthorized people from accessing the data on the partitions, even if they have access to the physical storage device. In most cases, you should at least encrypt the /home partition.
  • Each kernel installed on your system requires approximately 10 MB on the /boot partition. Unless you plan to install a great many kernels, the default partition size of 250 MB for /boot should suffice.

  • The /var directory holds content for a number of applications, including the Apache web server. It also is used to store downloaded update packages on a temporary basis. Ensure that the partition containing the /var directory has enough space to download pending updates and hold your other content.


    The PackageKit update software downloads updated packages to /var/cache/yum/ by default. If you partition the system manually, and create a separate /var/ partition, be sure to create the partition large enough (3.0 GB or more) to download package updates.
  • The /usr directory holds the majority of software content on a Red Hat Enterprise Linux system. For an installation of the default set of software, allocate at least 4 GB of space. If you are a software developer or plan to use your Red Hat Enterprise Linux system to learn software development skills, you may want to at least double this allocation.

    Do not place /usr on a separate partition

    If /usr is partitioned separately from the rest of the root volume, the boot process becomes much more complex because /usr contains boot-critical components. In some situations, such as when installing on an iSCSI drive, the system will not boot.
  • Consider leaving a portion of the space in an LVM volume group unallocated. This unallocated space gives you flexibility if your space requirements change but you do not wish to remove data from other partitions to reallocate storage.
  • If you separate subdirectories into partitions, you can retain content in those subdirectories if you decide to install a new version of Red Hat Enterprise Linux over your current system. For instance, if you intend to run a MySQL database in /var/lib/mysql, make a separate partition for that directory in case you need to reinstall later.
  • UEFI systems should contain a 50-150MB /boot/efi partition with an EFI System Partition filesystem.
The following table is a possible partition setup for a system with a single, new 80 GB hard disk and 1 GB of RAM. Note that approximately 10 GB of the volume group is unallocated to allow for future growth.

Example Usage

This setup is not optimal for all use cases.

Example 9.1. Example partition setup

Table 9.4. Example partition setup

Partition             Size and type
/boot                 250 MB ext3 partition
swap                  2 GB swap
LVM physical volume   Remaining space, as one LVM volume group
The physical volume is assigned to the default volume group and divided into the following logical volumes:

Table 9.5. Example partition setup: LVM physical volume

Partition   Size and type
/           13 GB ext4
/var        4 GB ext4
/home       50 GB ext4

iperf (tool to measure the bandwidth and the quality of a network link)

Iperf is a tool to measure the bandwidth and the quality of a network link.

The network link is delimited by two hosts running Iperf.

The quality of a link can be tested as follows:
– Latency (response time or RTT): can be measured with the Ping command.
– Jitter (latency variation): can be measured with an Iperf UDP test.
– Datagram loss: can be measured with an Iperf UDP test.

The bandwidth is measured through TCP tests.

To be clear, the difference between TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) is that TCP uses mechanisms to check that packets are correctly delivered to the receiver, whereas UDP sends packets without any such checks, with the advantage of being quicker than TCP.
Iperf uses the different capacities of TCP and UDP to provide statistics about network links.

Finally, Iperf can be installed very easily on any UNIX/Linux or Microsoft Windows system. One host must be set as client, the other one as server.

Here is a diagram where Iperf is installed on a Linux and Microsoft Windows machine.
Linux is used as the Iperf client and Windows as the Iperf server. Of course, it is also possible to use two Linux boxes.



no arg.      Default settings
-f           Data format
-r           Bi-directional bandwidth
-d           Simultaneous bi-directional bandwidth
-w           TCP window size
-p, -t, -i   Port, timing and interval
-u, -b       UDP tests, bandwidth settings
-m           Maximum Segment Size display
-M           Maximum Segment Size settings
-P           Parallel tests

By default, the Iperf client connects to the Iperf server on the TCP port 5001 and the bandwidth displayed by Iperf is the bandwidth from the client to the server.
If you want to use UDP tests, use the -u argument.
The -d and -r Iperf client arguments measure the bi-directional bandwidths. (See further on in this tutorial.)

 Client side:

#iperf -c

Client connecting to, TCP port 5001
TCP window size: 16384 Byte (default)
[ 3] local port 33453 connected with port 5001
[ 3]   0.0-10.2 sec   1.26 MBytes   1.05 Mbits/sec 

 Server side:

#iperf -s

Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
[852] local port 5001 connected with port 33453
[ ID]   Interval          Transfer       Bandwidth
[852]   0.0-10.6 sec   1.26 MBytes   1.03 Mbits/sec 


 Data formatting: (-f argument)

The -f argument can display the results in the desired format: bits(b), bytes(B), kilobits(k), kilobytes(K), megabits(m), megabytes(M), gigabits(g) or gigabytes(G).
Generally the bandwidth measures are displayed in bits (or Kilobits, etc …) and an amount of data is displayed in bytes (or Kilobytes, etc …).
As a reminder, 1 byte is equal to 8 bits and, in the computer science world, 1 kilo is equal to 1024 (2^10).
For example: 100’000’000 bytes is not equal to 100 Mbytes but to 100’000’000/1024/1024 = 95.37 Mbytes.
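That conversion is easy to reproduce at the shell; awk is used here only because the division needs floating point:

```shell
# 100'000'000 bytes expressed in binary megabytes and in bits
BYTES=100000000
awk -v b="$BYTES" 'BEGIN { printf "%.2f MBytes\n", b / 1024 / 1024 }'   # 95.37 MBytes
echo "$((BYTES * 8)) bits"                                              # 800000000 bits
```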

 Client side:

#iperf -c -f b

Client connecting to, TCP port 5001
TCP window size: 16384 Byte (default)
[ 3] local port 54953 connected with port 5001
[ 3]   0.0-10.2 sec   1359872 Bytes   1064272 bits/sec 

 Server side:

#iperf -s

Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
[852] local port 5001 connected with port 33453
[ ID]   Interval          Transfer       Bandwidth
[852]   0.0-10.6 sec   920 KBytes   711 Kbits/sec 

 Bi-directional bandwidth measurement: (-r argument)

With the -r argument, the Iperf server connects back to the client, allowing bi-directional bandwidth measurement. By default, only the bandwidth from the client to the server is measured.
If you want to measure the bi-directional bandwidth simultaneously, use the -d keyword. (See next test.)

 Client side:

#iperf -c -r

Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
Client connecting to, TCP port 5001
TCP window size: 16.0 KByte (default)
[ 5] local port 35726 connected with port 5001
[ 5]   0.0-10.0 sec   1.12 MBytes   936 Kbits/sec
[ 4] local port 5001 connected with port 1640
[ 4]   0.0-10.1 sec   74.2 MBytes   61.7 Mbits/sec 

 Server side:

#iperf -s

Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
[852] local port 5001 connected with port 54355
[ ID]   Interval          Transfer        Bandwidth
[852]   0.0-10.1 sec   1.15 MBytes   956 Kbits/sec
Client connecting to, TCP port 5001
TCP window size: 8.00 KByte (default)
[824] local port 1646 connected with port 5001
[ ID]   Interval          Transfer        Bandwidth
[824]   0.0-10.0 sec   73.3 MBytes   61.4 Mbits/sec 

 Simultaneous bi-directional bandwidth measurement: (-d argument)
Also check the “Jperf” section.

To measure the bi-directional bandwidths simultaneously, use the -d argument. If you want to test the bandwidths sequentially, use the -r argument (see previous test).
By default (i.e. without the -r or -d arguments), only the bandwidth from the client to the server is measured.

 Client side:

#iperf -c -d

Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
Client connecting to, TCP port 5001
TCP window size: 16.0 KByte (default)
[ 5] local port 60270 connected with port 5001
[ 4] local port 5001 connected with port 2643
[ 4] 0.0-10.0 sec 76.3 MBytes 63.9 Mbits/sec
[ 5] 0.0-10.1 sec 1.55 MBytes 1.29 Mbits/sec 

 Server side:

#iperf -s

Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
[852] local port 5001 connected with port 60270
Client connecting to, TCP port 5001
TCP window size: 8.00 KByte (default)
[800] local port 2643 connected with port 5001
[ ID]   Interval          Transfer       Bandwidth
[800]   0.0-10.0 sec   76.3 MBytes   63.9 Mbits/sec
[852]   0.0-10.1 sec   1.55 MBytes   1.29 Mbits/sec

 TCP Window size: (-w argument)

The TCP window size is the amount of data that can be buffered during a connection without a validation from the receiver.
It can be between 2 and 65,535 bytes.
On Linux systems, when specifying a TCP buffer size with the -w argument, the kernel allocates twice as much as requested.

 Client side:

#iperf -c -w 2000

WARNING: TCP window size set to 2000 bytes. A small window size
will give poor performance. See the Iperf documentation.
Client connecting to, TCP port 5001
TCP window size: 3.91 KByte (WARNING: requested 1.95 KByte)
[ 3] local port 51400 connected with port 5001
[ 3]   0.0-10.1 sec   704 KBytes   572 Kbits/sec

 Server side:

#iperf -s -w 4000

Server listening on TCP port 5001
TCP window size: 3.91 KByte
[852] local port 5001 connected with port 51400
[ ID]   Interval          Transfer       Bandwidth
[852]   0.0-10.1 sec   704 KBytes   570 Kbits/sec



 Communication port (-p), timing (-t) and interval (-i):

The Iperf server communication port can be changed with the -p argument. It must be configured on the client and the server with the same value, default is TCP port 5001.
The -t argument specifies the test duration time in seconds, default is 10 secs.
The -i argument indicates the interval in seconds between periodic bandwidth reports.

 Client side:

#iperf -c -p 12000 -t 20 -i 2

Client connecting to, TCP port 12000
TCP window size: 16.0 KByte (default)
[ 3] local port 58316 connected with port 12000
[ 3]    0.0- 2.0 sec    224 KBytes    918 Kbits/sec
[ 3]    2.0- 4.0 sec    368 KBytes    1.51 Mbits/sec
[ 3]    4.0- 6.0 sec    704 KBytes    2.88 Mbits/sec
[ 3]    6.0- 8.0 sec    280 KBytes    1.15 Mbits/sec
[ 3]    8.0-10.0 sec    208 KBytes    852 Kbits/sec
[ 3]   10.0-12.0 sec   344 KBytes    1.41 Mbits/sec
[ 3]   12.0-14.0 sec   208 KBytes    852 Kbits/sec
[ 3]   14.0-16.0 sec   232 KBytes    950 Kbits/sec
[ 3]   16.0-18.0 sec   232 KBytes    950 Kbits/sec
[ 3]   18.0-20.0 sec   264 KBytes    1.08 Mbits/sec
[ 3]    0.0-20.1 sec   3.00 MBytes   1.25 Mbits/sec 

 Server side:

#iperf -s -p 12000

Server listening on TCP port 12000
TCP window size: 8.00 KByte (default)
[852] local port 12000 connected with port 58316
[ ID] Interval Transfer Bandwidth
[852]   0.0-20.1 sec   3.00 MBytes   1.25 Mbits/sec

 UDP tests: (-u), bandwidth settings (-b)
Also check the “Jperf” section.

The UDP tests with the -u argument will give invaluable information about the jitter and the packet loss. If you don’t specify the -u argument, Iperf uses TCP.
To keep a good link quality, the packet loss should not go over 1 %. A high packet loss rate will generate a lot of TCP segment retransmissions which will affect the bandwidth.

The jitter is basically the latency variation and does not depend on the latency. You can have high response times and a very low jitter. The jitter value is particularly important on network links supporting voice over IP (VoIP) because a high jitter can break a call.
The -b argument allows the allocation of the desired bandwidth.

 Client side:

#iperf -c -u -b 10m

Client connecting to, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 108 KByte (default)
[ 3] local port 32781 connected with port 5001
[ 3]   0.0-10.0 sec   11.8 MBytes   9.89 Mbits/sec
[ 3] Sent 8409 datagrams
[ 3] Server Report:
[ 3]   0.0-10.0 sec   11.8 MBytes   9.86 Mbits/sec   2.617 ms   9/ 8409   (0.11%) 

 Server side:

#iperf -s -u -i 1

Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 8.00 KByte (default)
[904] local port 5001 connected with port 32781
[ ID]   Interval         Transfer        Bandwidth         Jitter        Lost/Total Datagrams
[904]   0.0- 1.0 sec   1.17 MBytes   9.84 Mbits/sec   1.830 ms   0/ 837   (0%)
[904]   1.0- 2.0 sec   1.18 MBytes   9.94 Mbits/sec   1.846 ms   5/ 850   (0.59%)
[904]   2.0- 3.0 sec   1.19 MBytes   9.98 Mbits/sec   1.802 ms   2/ 851   (0.24%)
[904]   3.0- 4.0 sec   1.19 MBytes   10.0 Mbits/sec   1.830 ms   0/ 850   (0%)
[904]   4.0- 5.0 sec   1.19 MBytes   9.98 Mbits/sec   1.846 ms   1/ 850   (0.12%)
[904]   5.0- 6.0 sec   1.19 MBytes   10.0 Mbits/sec   1.806 ms   0/ 851   (0%)
[904]   6.0- 7.0 sec   1.06 MBytes   8.87 Mbits/sec   1.803 ms   1/ 755   (0.13%)
[904]   7.0- 8.0 sec   1.19 MBytes   10.0 Mbits/sec   1.831 ms   0/ 850   (0%)
[904]   8.0- 9.0 sec   1.19 MBytes   10.0 Mbits/sec   1.841 ms   0/ 850   (0%)
[904]   9.0-10.0 sec   1.19 MBytes   10.0 Mbits/sec   1.801 ms   0/ 851   (0%)
[904]   0.0-10.0 sec   11.8 MBytes   9.86 Mbits/sec   2.618 ms   9/ 8409  (0.11%) 

 Maximum Segment Size (-m argument) display:

The Maximum Segment Size (MSS) is the largest amount of data, in bytes, that a computer can support in a single, unfragmented TCP segment.
It can be calculated as follows:
MSS = MTU – TCP & IP headers
The TCP & IP headers are equal to 40 bytes.
The MTU or Maximum Transmission Unit is the greatest amount of data that can be transferred in a frame.
Here are some default MTU sizes for different network topologies:
Ethernet – 1500 bytes: used in a LAN.
PPPoE – 1492 bytes: used on ADSL links.
Token Ring (16Mb/sec) – 17914 bytes: old technology developed by IBM.
Dial-up – 576 bytes

Generally, a higher MTU (and MSS) brings higher bandwidth efficiency.
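Applying the MSS = MTU − 40 rule to the topologies listed above gives the theoretical values. This is only a sketch: as the next example shows, TCP options such as timestamps reduce the real MSS further.

```shell
# MSS = MTU - 40 bytes of TCP/IP headers (ignores TCP options)
mss() { echo $(($1 - 40)); }
mss 1500   # Ethernet: prints 1460
mss 1492   # PPPoE:    prints 1452
mss 576    # Dial-up:  prints 536
```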

 Client side:

#iperf -c -m

Client connecting to, TCP port 5001
TCP window size: 16.0 KByte (default)
[ 3] local port 41532 connected with port 5001
[ 3]   0.0-10.2 sec   1.27 MBytes   1.04 Mbits/sec
[ 3] MSS size 1448 bytes (MTU 1500 bytes, ethernet)

Here the MSS is not equal to 1500 – 40 but to 1500 – 40 – 12 (Timestamps option) = 1448

 Server side:

#iperf -s


 Maximum Segment Size (-M argument) settings:

Use the -M argument to change the MSS. (See the previous test for more explanations about the MSS)

#iperf -c -M 1300 -m

WARNING: attempt to set TCP maximum segment size to 1300, but got 536
Client connecting to, TCP port 5001
TCP window size: 16.0 KByte (default)
[ 3] local port 41533 connected with port 5001
[ 3]   0.0-10.1 sec   4.29 MBytes   3.58 Mbits/sec
[ 3] MSS size 1288 bytes (MTU 1328 bytes, unknown interface) 

 Server side:

#iperf -s


 Parallel tests (-P argument):

Use the -P argument to run parallel tests.

 Client side:

#iperf -c -P 2

Client connecting to, TCP port 5001
TCP window size: 16.0 KByte (default)
[ 3] local port 41534 connected with port 5001
[ 4] local port 41535 connected with port 5001
[ 4]     0.0-10.1 sec   1.35 MBytes   1.12 Mbits/sec
[ 3]     0.0-10.1 sec   1.35 MBytes   1.12 Mbits/sec
[SUM]  0.0-10.1 sec   2.70 MBytes   2.24 Mbits/sec 

 Server side:

#iperf -s


Iperf on Windows

Iperf is a neat little tool with the simple goal of helping administrators measure the performance of their network. Worthy of mention is the fact that it can measure both TCP and UDP performance on a network. Iperf is cross platform software and open source.

You can download Iperf.exe from:


Link updated on 12/30/2010

We will be making use of the command line; do not fear it, as Iperf is a simple tool to use.

Say I want to test the available bandwidth between a server (Windows Server 2008) and a client workstation (Windows 7). Iperf will try to move as much data as possible over the available link in order to conduct the test.



Download the Iperf executable and place the file in any directory you wish. My web browser (Firefox) places all downloaded files in the Downloads directory, which is where I will be executing Iperf from.

Note: You will need to open port 5001 on the Iperf server.

Server Setup

Go to Start > All Programs > Accessories > Command Prompt


With the command prompt open, type

cd Downloads

or the location where the Iperf executable resides.


Now that you are in the same directory as Iperf, type

iperf -s

to start the Iperf server. As you can see on the screen, Iperf listens on port 5001; you may have to open port 5001 on your firewall.


Client Set Up

Following the steps above, execute Iperf in the same manner, but this time we are going to give the Iperf client different instructions. On the Iperf client command line type

iperf -c

This runs Iperf as the client; the address given after -c tells Iperf where the server is located.


Give Iperf some time to test the connection; after the test is done, Iperf will present the results.


The results are easy to understand: in this case Iperf managed to transfer 113 MBytes at 94.5 Mbits/sec. The results will change on a busy network, which is where Iperf reveals the amount of available bandwidth.


5 commands to check memory usage on Linux

1. free command

The free command is the simplest and easiest-to-use command for checking memory usage on Linux. Here is a quick example:

$ free -m
             total       used       free     shared    buffers     cached
Mem:          7976       6459       1517          0        865       2248
-/+ buffers/cache:       3344       4631
Swap:         1951          0       1951

The -m option displays all data in MBs. The total of 7976 MB is the amount of RAM installed on the system, that is, 8 GB. The used column shows the amount of RAM that has been used by Linux, in this case around 6.4 GB. The output is pretty self-explanatory; the catch here is the cached and buffers columns. The second line shows that 4.6 GB is free: this is the free memory from the first line plus the buffers and cached amounts.

Linux has the habit of caching lots of things for faster performance, so that memory can be freed and used if needed.
The last line is the swap memory, which in this case is lying entirely free.
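The second-line figure can be checked by hand from the numbers in the example output above (the 1 MB discrepancy against the 4631 shown is just rounding in the MB conversion):

```shell
# free(1) "-/+ buffers/cache" free column = free + buffers + cached
echo $((1517 + 865 + 2248))   # prints 4630; the output shows 4631 due to rounding
```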

2. /proc/meminfo

The next way to check memory usage is to read the /proc/meminfo file. Know that the /proc file system does not contain real files. They are rather virtual files that contain dynamic information about the kernel and the system.

$ cat /proc/meminfo
MemTotal:        8167848 kB
MemFree:         1409696 kB
Buffers:          961452 kB
Cached:          2347236 kB
SwapCached:            0 kB
Active:          3124752 kB
Inactive:        2781308 kB
Active(anon):    2603376 kB
Inactive(anon):   309056 kB
Active(file):     521376 kB
Inactive(file):  2472252 kB
Unevictable:        5864 kB
Mlocked:            5880 kB
SwapTotal:       1998844 kB
SwapFree:        1998844 kB
Dirty:              7180 kB
Writeback:             0 kB
AnonPages:       2603272 kB
Mapped:           788380 kB
Shmem:            311596 kB
Slab:             200468 kB
SReclaimable:     151760 kB
SUnreclaim:        48708 kB
KernelStack:        6488 kB
PageTables:        78592 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     6082768 kB
Committed_AS:    9397536 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      420204 kB
VmallocChunk:   34359311104 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB                                                                                                                           
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       62464 kB
DirectMap2M:     8316928 kB


Check the values of MemTotal, MemFree, Buffers, Cached, SwapTotal and SwapFree.
They report the same memory usage figures as the free command, but in kB.
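
Those fields can be pulled out directly, as in this grep sketch (the field names are as they appear in /proc/meminfo on Linux):

```shell
# Show only the headline memory figures from /proc/meminfo; values are in kB.
grep -E '^(MemTotal|MemFree|Buffers|Cached|SwapTotal|SwapFree):' /proc/meminfo
```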

3. vmstat

The vmstat command with the -s option lays out memory usage statistics, much like reading /proc/meminfo. Here is an example:

$ vmstat -s
      8167848 K total memory
      7449376 K used memory
      3423872 K active memory
      3140312 K inactive memory
       718472 K free memory
      1154464 K buffer memory
      2422876 K swap cache
      1998844 K total swap
            0 K used swap
      1998844 K free swap
       392650 non-nice user cpu ticks
         8073 nice user cpu ticks
        83959 system cpu ticks
     10448341 idle cpu ticks
        91904 IO-wait cpu ticks
            0 IRQ cpu ticks
         2189 softirq cpu ticks
            0 stolen cpu ticks
      2042603 pages paged in
      2614057 pages paged out
            0 pages swapped in
            0 pages swapped out
     42301605 interrupts
     94581566 CPU context switches
   1382755972 boot time
         8567 forks

The top few lines show total memory, free memory and so on.

4. top command

The top command is generally used to check memory and CPU usage per process. However, it also reports total memory usage and can be used to monitor total RAM usage. The header of the output has the required information. Here is a sample output:

top - 15:20:30 up  6:57,  5 users,  load average: 0.64, 0.44, 0.33
Tasks: 265 total,   1 running, 263 sleeping,   0 stopped,   1 zombie
%Cpu(s):  7.8 us,  2.4 sy,  0.0 ni, 88.9 id,  0.9 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:   8167848 total,  6642360 used,  1525488 free,  1026876 buffers
KiB Swap:  1998844 total,        0 used,  1998844 free,  2138148 cached

  PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND                                                                                 
 2986 enlighte  20   0  584m  42m  26m S  14.3  0.5   0:44.27 yakuake                                                                                 
 1305 root      20   0  448m  68m  39m S   5.0  0.9   3:33.98 Xorg                                                                                    
 7701 enlighte  20   0  424m  17m  10m S   4.0  0.2   0:00.12 kio_thumbnail

Check the KiB Mem and KiB Swap lines in the header. They indicate the total, used and free amounts of memory. The buffers and cache information is present here too, just like in the free command.
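
For use in scripts, top can emit a single non-interactive snapshot in batch mode; a sketch (flags as in procps top):

```shell
# -b = batch mode (no interactive screen), -n 1 = take one snapshot and exit.
# The first five lines are the summary header, including the memory totals.
top -bn1 | head -n 5
```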

5. htop

Similar to the top command, the htop command also shows memory usage along with various other details.

(screenshot: htop showing memory and RAM usage)

The header at the top shows CPU usage along with RAM and swap usage, with the corresponding figures.

Reverse Proxy (Reverse Proxy vs Forward Proxy)


In computer networks, a reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client as though they originated from the server itself (or servers themselves). While a forward proxy acts as an intermediary for its (usually nearby) associated clients and returns to them resources accessible on the Internet, a reverse proxy acts as an intermediary for its (usually nearby) associated servers and only returns resources provided by those associated servers.



A forward proxy takes requests from an internal network and forwards them to the Internet.

An open proxy forwards requests from and to anywhere on the Internet.

A reverse proxy takes requests from the Internet and forwards them to servers in an internal network. Those making requests connect to the proxy and may not be aware of the internal network.
Reverse proxies broker connections coming from the Internet to your app servers. Forward proxies filter connections going out to the Internet from clients sitting behind the firewall.
Reverse proxies take incoming connections from the Internet and connect them to one server or a server farm, meaning that multiple inbound connections from the Internet are pooled into one or more connections to the server(s). This is known as TCP multiplexing, and it is often combined with load-balancing techniques to optimize and accelerate application delivery. Reverse proxies measure load based on the ratio of incoming to outgoing connections; the higher the ratio, the better the performance.
Reverse Proxies are good for:
  • Application Delivery including:
    • Load Balancing (TCP Multiplexing)
    • SSL Offload/Acceleration (SSL Multiplexing)
    • Caching
    • Compression
    • Content Switching/Redirection
    • Application Firewall
    • Server Obfuscation
    • Authentication
    • Single Sign On


  • Reverse proxies can hide the existence and characteristics of an origin server or servers.
  • Application firewall features can protect against common web-based attacks. Without a reverse proxy, removing malware or initiating takedowns, for example, can become difficult.
  • A reverse proxy can distribute the load from incoming requests to several servers, with each server serving its own application area. In the case of reverse proxying in the neighborhood of web servers, the reverse proxy may have to rewrite the URL in each incoming request in order to match the relevant internal location of the requested resource.
  • A reverse proxy can reduce load on its origin servers by caching static content, as well as dynamic content – also known as web acceleration. Proxy caches of this sort can often satisfy a considerable number of website requests, greatly reducing the load on the origin server(s).
  • Reverse proxies can operate whenever multiple web-servers must be accessible via a single public IP address. The web servers listen on different ports in the same machine, with the same local IP address or, possibly, on different machines and different local IP addresses altogether. The reverse proxy analyzes each incoming request and delivers it to the right server within the local area network.

The Forward Proxy

When people talk about a proxy server (often simply known as a “proxy”), more often than not they are referring to a forward proxy. Let me explain what this particular server does.

A forward proxy provides proxy services to a client or a group of clients. Oftentimes, these clients belong to a common internal network like the one shown below.


forward proxy


When one of these clients makes a connection attempt to that file transfer server on the Internet, its requests have to pass through the forward proxy first.

Depending on the forward proxy’s settings, a request can be allowed or denied. If allowed, then the request is forwarded to the firewall and then to the file transfer server. From the point of view of the file transfer server, it is the proxy server that issued the request, not the client. So when the server responds, it addresses its response to the proxy.

But then when the forward proxy receives the response, it recognizes it as a response to the request that went through earlier. And so it in turn sends that response to the client that made the request.

Because proxy servers can keep track of requests, responses, their sources and their destinations, different clients can send out various requests to different servers through the forward proxy and the proxy will intermediate for all of them. Again, some requests will be allowed, while some will be denied.

As you can see, the proxy can serve as a single point of access and control, making it easier for you to enforce security policies. A forward proxy is typically used in tandem with a firewall to enhance an internal network’s security by controlling traffic originating from clients in the internal network that are directed at hosts on the Internet. Thus, from a security standpoint, a forward proxy is primarily aimed at enforcing security on client computers in your internal network.
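
From the client's side, pointing at a forward proxy is a one-line change. A sketch using curl; the proxy host and port (proxy.example.local:3128) are placeholders, not real values from this document:

```shell
# Send the request via a (hypothetical) forward proxy instead of connecting directly.
# The proxy, not the client, then contacts the destination server.
curl -x http://proxy.example.local:3128 http://example.com/ \
  || echo "request failed: the placeholder proxy is not reachable"

# curl (and many other tools) also honour the standard environment variables:
export http_proxy=http://proxy.example.local:3128
export https_proxy=http://proxy.example.local:3128
```

With an unreachable proxy configured, the request fails at the proxy rather than at the destination, which is also a handy way to confirm that a client really is using the proxy.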

But then client computers aren’t always the only ones you find in your internal network. Sometimes, you also have servers. And when those servers have to provide services to external clients (e.g. field staff who need to access files from your FTP server), a more appropriate solution would be a reverse proxy.


The Reverse Proxy

As its name implies, a reverse proxy does the exact opposite of what a forward proxy does. While a forward proxy proxies on behalf of clients (or requesting hosts), a reverse proxy proxies on behalf of servers. A reverse proxy accepts requests from external clients on behalf of the servers stationed behind it, just as the figure below illustrates.

reverse proxy


To the client in our example, it is the reverse proxy that is providing file transfer services. The client is oblivious to the file transfer servers behind the proxy, which are actually providing those services. In effect, whereas a forward proxy hides the identities of clients, a reverse proxy hides the identities of servers.

An Internet-based attacker would therefore find it considerably more difficult to acquire the data held on those file transfer servers than if he did not have to deal with a reverse proxy.

Just like forward proxy servers, reverse proxies also provide a single point of access and control. You typically set one up to work alongside one or two firewalls to control traffic and requests directed at your internal servers.

Both types of proxy servers relay requests and responses between source and destination machines. But in the case of reverse proxy servers, client requests that go through them normally originate from the Internet, while, in the case of forward proxies, client requests normally come from the internal network behind them.


What is a Proxy?

Web Proxy?

A proxy server is a computer that functions as an intermediary between a web browser (such as Internet Explorer) and the Internet. Proxy servers help improve web performance by storing a copy of frequently used webpages. When a browser requests a webpage stored in the proxy server’s collection (its cache), the page is provided by the proxy server, which is faster than going out to the web. Proxy servers also help improve security by filtering out some web content and malicious software.

Proxy servers are used mostly by networks in organizations and companies. Typically, people connecting to the Internet from home will not use a proxy server.

1. Obscure Client IP
2. Block Malicious Traffic
3. Block Sites (whitelists/blacklists)(categories of sites)
4. Log activity (user activity reports)
5. Improve Performance (caching the pages)
:::Types of Proxies:::
1. Forward Proxies
2. Open Proxies
3. Reverse Proxies
In an enterprise that uses the Internet, a proxy server is a server that acts as an intermediary between a workstation user and the Internet so that the enterprise can ensure security, administrative control, and caching service. A proxy server is associated with or part of a gateway server that separates the enterprise network from the outside network and a firewall server that protects the enterprise network from outside intrusion.

A proxy server receives a request for an Internet service (such as a Web page request) from a user. If it passes filtering requirements, the proxy server, assuming it is also a cache server, looks in its local cache of previously downloaded Web pages. If it finds the page, it returns it to the user without needing to forward the request to the Internet. If the page is not in the cache, the proxy server, acting as a client on behalf of the user, uses one of its own IP addresses to request the page from the server out on the Internet. When the page is returned, the proxy server relates it to the original request and forwards it on to the user.

To the user, the proxy server is invisible; all Internet requests and returned responses appear to be directly with the addressed Internet server. (The proxy is not quite invisible; its IP address has to be specified as a configuration option to the browser or other protocol program.)

An advantage of a proxy server is that its cache can serve all users. If one or more Internet sites are frequently requested, these are likely to be in the proxy’s cache, which will improve user response time. In fact, there are special servers called cache servers. A proxy can also do logging.

The functions of proxy, firewall and caching can be in separate server programs or combined in a single package. Different server programs can run on different computers. For example, a proxy server may be on the same machine as a firewall server, or it may be on a separate server and forward requests through the firewall.
:::Transparent versus non-transparent proxying:::

The Smoothwall web proxy service can be configured to operate in either transparent or non-transparent mode – but what are the differences, and how should you choose between them?
In transparent mode, no special configuration steps are needed to set up client browsers, allowing the proxy service to be activated and in use almost immediately. Once activated, all traffic destined for the Internet arriving on port 80 is automatically redirected through the proxy. With the latest Guardian products you can even use NTLM with Active Directory in conjunction with transparent proxying, allowing for single sign-on and minimal network configuration.
Both transparent and non-transparent proxying can be used together at the same time; enabling transparent proxying does not stop non-transparent proxying from working. In situations where transparent is the norm but a specific application requires non-transparent, you can simply configure the proxy settings in that application.
Both modes have pros and cons – if you would like to use transparent proxying please contact support for a discussion on the issues your network may experience when using this method.

Why use non-transparent proxying?

The main reason to use a non-transparent proxy is so that the web browser and other client applications know that a proxy is being used, and so can act accordingly. Initial configuration of a non-transparent proxy might be trickier, but ultimately provides a much more powerful and flexible proxying service. Another advantage of non-transparent proxying is that spyware and worms that use the web for transmission may not be able to function because they don’t know the proxy settings. This can reduce the spread of malicious software and prevent bandwidth from being wasted by infected systems.

Configuring proxy settings in non-transparent mode

When using non-transparent proxying, appropriate proxy settings must be configured on client machines and browsers. This can be achieved in a number of different ways:

Manually – Proxy settings can be entered manually in most web browsers and web-enabled applications. Usually such settings are entered as part of the application’s Connection Settings or similar. The address of the proxy is required, along with the proxy port number. These settings are displayed on the “Services / web proxy” and “Guardian / web proxy” pages as part of the “Automatic configuration script” region.

Automatic configuration script – The Smoothwall proxy provides a proxy.pac file that can be used to automatically configure proxy settings in most Internet browsers. To use the automatic configuration script, enter the URL displayed in the “Automatic configuration script” region of the “Services / web proxy” and “Guardian / web proxy” pages into your browser software.
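
For reference, a proxy.pac file is just a JavaScript function named FindProxyForURL that the browser calls for every request. A minimal sketch; the proxy address proxy.example.local:800 is a hypothetical placeholder, not the address your Smoothwall actually generates:

```javascript
// Minimal PAC file sketch. The browser calls this function for every URL.
// "proxy.example.local:800" is a hypothetical address -- substitute your own proxy.
function FindProxyForURL(url, host) {
  // Plain hostnames (e.g. http://intranet) bypass the proxy, matching the
  // "bypass proxy server for local addresses" behaviour.
  if (host.indexOf(".") === -1) {
    return "DIRECT";
  }
  // Everything else goes through the proxy; fall back to DIRECT if it is down.
  return "PROXY proxy.example.local:800; DIRECT";
}
```

Real PAC files can also use browser-provided helper functions such as isPlainHostName() and dnsDomainIs() for more precise matching.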

Microsoft Windows 2000 domain – In a Windows 2000+ domain, proxy settings can be configured in the domain security policy. This eliminates the need to manually configure any part of the user’s system.

Automatic discovery – Many browsers support automatic discovery of proxy settings using the WPAD (Web Proxy Auto-Discovery) protocol. This is relatively easy to configure if you have a local DNS server. Using DHCP to distribute proxy settings – DHCP can also be used to set proxy settings, and may be a better method than using security policies. Currently the DHCP server on the Smoothwall firewalls cannot be used for giving out proxy.pac locations.

Microsoft Windows login script – The Windows login script can be used to import a registry file which will automatically configure the system wide proxy settings.

.ini files – Browsers like Firefox can be configured automatically with .ini files. Such files could be copied or modified as part of the login script on a Microsoft Windows or Linux network.

Third party solutions – Third party applications are available for Windows which can, at login, automatically configure web browser proxy settings. These range from simple programs designed specifically to automate proxy configuration to more sophisticated applications that provide a range of services, such as monitoring the user’s desktop.

When to use transparent proxying

Use transparent proxying when minimal or no network configuration is required. It can be useful in mixed environments containing Unix, Linux, Apple Mac and Microsoft Windows systems, as it allows quick access to the web proxy for everyone without having to configure a multitude of different platform-specific applications and browsers. If a transparent proxy is required, please talk to Smoothwall support before deciding on the implementation, as there are a lot of caveats with this method.
How to Setup a Proxy

Most internet browsers can be set up to run through proxies in just a matter of minutes.

Internet Explorer Proxy Settings
Click Tools
Click Internet Options
Click the Connections Tab
Click LAN settings
Check the “Use a proxy server for your LAN” box
Enter the IP Address of the Proxy Server and the Port Number
Click OK
Go to a proxy-checking site to confirm the proxy is in use

FireFox Proxy Settings
Click the Firefox button (the button in the upper left corner)
Click Options
Click Options in the new tab
Click the Advanced Tab
Click Settings
Click Manual Proxy Settings
In the HTTP Proxy Box enter the IP Address of the proxy server and the Port number
Click OK
Go to a proxy-checking site to confirm the proxy is in use

Google Chrome Proxy Settings
Click the Customize and Control button (the button with the wrench picture in the upper right corner)
Click Under the Hood
Click Change proxy settings
Click LAN Settings
Check the “Use a proxy server for your LAN” box
Enter the IP Address of the Proxy Server and the Port Number
Click OK
Go to a proxy-checking site to confirm the proxy is in use

Safari Proxy Settings
Click Safari
Click Preferences
Click Advanced
Click Change Settings
Check the Web Proxy(HTTP) box
Enter the IP Address of the Proxy Server and the Port Number
Click Apply Now
Go to a proxy-checking site to confirm the proxy is in use

Port Forwarding

::::Port forwarding::::

Port forwarding or port mapping is a name given to the combined technique of:

1. translating the address or port number of a packet to a new destination
2. possibly accepting such packet(s) in a packet filter (firewall)
3. forwarding the packet according to the routing table.

The destination may be a predetermined network port (assuming protocols like TCP and UDP, though the process is not limited to these) on a host within a NAT-masqueraded, typically private network, based on the port number on which it was received at the gateway from the originating host.

The technique is used to permit communications by external hosts with services provided within a private local area network.

Port forwarding allows remote computers (for example, computers on the Internet) to connect to a specific computer or service within a private local-area network (LAN).

In a typical residential network, nodes obtain Internet access through a DSL or cable modem connected to a router or network address translator (NAT/NAPT). Hosts on the private network are connected to an Ethernet switch or communicate via a wireless LAN. The NAT device’s external interface is configured with a public IP address. The computers behind the router, on the other hand, are invisible to hosts on the Internet as they each communicate only with a private IP address.

When configuring port forwarding, the network administrator sets aside one port number on the gateway for the exclusive use of communicating with a service in the private network, located on a specific host. External hosts must know this port number and the address of the gateway to communicate with the network-internal service. Often, the port numbers of well-known Internet services, such as port number 80 for web services (HTTP), are used in port forwarding, so that common Internet services may be implemented on hosts within private networks.

Typical applications include the following:

Running a public HTTP server within a private LAN
Permitting Secure Shell access to a host on the private LAN from the Internet
Permitting FTP access to a host on a private LAN from the Internet

Administrators configure port forwarding in the gateway’s operating system. In Linux kernels, this is achieved by packet filter rules in the iptables or netfilter kernel components. BSD and Mac OS X operating systems implement it in the Ipfirewall (ipfw) module.
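
On a Linux gateway the iptables rules for this look roughly as follows. This is a sketch only: the interface name (eth0), the internal server address ( and the port are all hypothetical placeholders, and the commands require root on the gateway:

```shell
# Rewrite the destination of TCP port 80 packets arriving on the WAN
# interface (eth0, hypothetical name) so they go to an internal web server
# at (hypothetical address) -- the "translating" step.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j DNAT --to-destination

# Accept the rewritten packets in the FORWARD chain -- the packet-filter step.
iptables -A FORWARD -p tcp -d --dport 80 -j ACCEPT
```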

When a port forward is implemented by a proxy process, no packets are actually translated; only the data is proxied. This usually results in the source address (and port number) being changed to that of the proxy machine.

Port forwarding opens certain ports on your home or small business network, usually blocked from access by your router, to the Internet. Opening specific ports can allow games, servers, BitTorrent clients, and other applications to work through the usual security of your router that otherwise does not permit connections to these ports.


If you are running servers inside your network that are going to be accessed from the outside world, you have to use port forwarding on your router.
The router forwards certain ports to specific servers.
If you have a web server and you want to access it from outside the local network, you will have to forward port 80.
e.g. email server, web server, FTP server, etc.

You can only forward a given public port on a single public IP to a single internal IP and port.
e.g. port 80 forwarded to an internal web server; if you are running a second web server, you will have to use another external port.

webserver(<->switch<->(<->internet(someone accessing

e.g. SMB router
Sometimes the common services are already listed and you simply have to enter the destination IP (e.g. FTP 21->21 to the destination IP).


Smoothwall Proxy settings, applications and mobile devices

(the client sends the request to the web proxy; the web proxy retrieves the page on the client’s behalf and then sends it back to the client)
(proxies are used to handle web traffic, but other services can also be handled e.g. DNS proxy)
(Squid web proxy is used on the smoothwall as a proxy engine)(but smoothwall has created a web interface for setting up proxies)
:Two types of proxies configurations:
(runs on a specific port)(browsers and applications are told where the proxy is in order to use it)
(works by intercepting web traffic and routing it through the web proxy)
(in order for this to work the traffic needs to physically pass through an interface on the SWG)
(using SWG as the default gateway and using bridged interfaces to achieve this)

:WCCP (Web Cache Communication Protocol): (a Cisco feature)
(cisco routers and switches can be configured to intercept web traffic and forward it to the web proxy)
(SWG also supports WCCP)

(Smoothwall recommends a non-transparent proxy, as a transparent proxy can cause some issues)
(clients need to know where the proxy is and what the port number is)

:Dashboard->web filter->Statistics: (shows the web filter health and status of the system)
Uptime: 0d 9h 39m
Web requests: 19
Average request rate: 0.0/min
Median service time (last 5 minutes): 0.00000s
Requests blocked (last 24 hours): 0.0%

1.Standard proxy authentication gives a pop-up dialog for the user to enter the username/password, which is not recommended.
2.Pass-through methods such as Kerberos and NTLM are recommended; users are logged on and verified automatically.
(but some applications do not support this)(especially non-web applications)(a very common issue for customers)
(browsers, applications and the OS need to support these methods)
(if applications do not support them, they tend to be difficult to troubleshoot, as no errors are shown)

1.SWG works with all web browsers:
(they all have proxy settings)

(there are numerous applications that use web ports and protocols)
1.Google Drive
3.Google Earth
(applications that do not support authentication, or a web proxy in general, are difficult)
(all applications have different types of behaviour)
(one method of fixing an application’s proxy authentication problem is to bypass authentication)
(we need to know what domains and IP addresses the application talks to)
(Web proxy » Authentication » Exceptions)(for adding the category groups and/or swurl lists)
(e.g. an application like Dropbox, which talks to only one domain, is easy)
(but an application that talks to a dynamic list of IPs, such as Skype, is difficult)(so bypassing authentication per destination is not an option)
1.(bypassing the web filter requires another proxy to be set up, and the application told to use this proxy instead of the proxy that requires authentication)
(e.g: Non-transparent proxy / Test location : no authentication)

2.(we can bypass the proxy completely)

3.(we can use another authentication method other than pass through)
(e.g: use SSL login authentication method)(solves any or all authentication issues with applications)

:Mobile Devices:
1.Tablets (iPad and Android tablets)
2.Phones (iPhones, Windows Phones and Android Phones)
(none of the mobile devices support pass-through authentication, and proxy support is hit and miss)
(some OSes, like iOS, have fairly good support for proxies, but this doesn’t mean that the applications running on these platforms use those settings)
1.For mobile devices, i.e. Wifi or BYOD, there are only 3 viable options:
1.SSL login method
2.802.1x Enterprise method (relies on the DHCP server on the UTM)
3.Global proxy settings (only on iOS 7)(using the Smoothwall Connect client)(also available for Windows OS)

(handling https traffic can be daunting too)
(smoothwall has features such as decrypt and inspect and validate certificates)
(SWG can even transparently proxy https traffic)

:::::::Proxy settings and applications::::::::
(you can have any number of proxies using any number of authentication methods)
(you can have multiple authentication methods on the same proxy based on the location the client is coming from)
(most issues are usually related to the proxy authentication)
Web proxy » Authentication » Policy wizard:
:Proxy Authentication Methods:
1.Pass through methods:
(not all applications or OSes support them)
(one method for applications which do not support them is to exempt the application from proxy authentication)
(Web proxy » Authentication » Exceptions: in the Exceptions menu we can add categories which do not require authentication)
(but the same categories also need to be allowed in the Everyone group in the web filter policies)

2.Redirect users to SSL login page (with background tab)(with session cookie):
(the user can log in to the SSL login page before getting web access)
(it requires the users to be logged in first before the application can get access to the web)
(also for the wifi connections before accessing the web)

3.Identification by location:
(place users in a specific IP based group and give access based on this group)
(location to users or user groups mapping is done in the ident by location section:Web proxy » Authentication » Ident by location)

4.NTLM and Kerberos (via redirect methods; used by transparent proxies)
(when a new user connects it asks the user for credentials before letting the user proceed)

(two non-transparent proxies)
Non-transparent proxy with 3 locations and authentication methods:
1.server location : identification by location
2.staff PCs : redirect users to SSL login page (with session cookie)(staff uses many applications that use proxy)
3.Everywhere: NTLM authentication
Non-transparent proxy with 1 location and authentication method:
1.Test location : no authentication
(used to troubleshoot applications)
(1 transparent proxy) (intercepts all the traffic on the interface on which the proxy is configured on)
(also intercepts the https traffic)(applications need to be compliant with SNI for https inspection)
1.Everywhere : no authentication
Filter HTTPS traffic: ticked
Allow HTTPS traffic with no SNI header for the ‘Transparent HTTPS incompatible sites’ category: ticked
(non-SNI-supporting sites will not be filtered; only SNI-supporting sites will be filtered)

5.Client proxy settings:
1.automatically detect settings:
There are two ways the automatic proxy settings can be configured:
1.DNS server (used by IE and all other browsers)(adding a wpad hostname to the dns as an alias that points to the server that is hosting the proxy script)
(a browser set to automatically detect settings will ask for wpad.dat if the wpad hostname is resolved)(which is the same as the proxy.pac file) (knowledgebase)
2.DHCP server (used by IE)(option 252)(option 252 is already configured on Smoothwall if used as a DHCP server, but on MS it needs to be configured across the various scopes) (knowledgebase)
2.use automatic configuration script: address:
3.manual settings:
proxy server:
bypass proxy server for local addresses
(the proxy is not used if only a hostname is used, but will be used if a domain name is used)
(e.g: http://intranet proxy will not be used)(http://intranet.mydomain.local proxy will be used)
(when an application is using a proxy the client will not do a DNS lookup; it sends the request to the proxy and the proxy does the DNS lookup on the client’s behalf)
(if a client or application is not using a proxy, or is behind a transparent proxy, then it will definitely do a DNS lookup and then send requests out to the IP address)
(good for troubleshooting)

(mobile devices usually do not support automatic proxy settings)
(both Android and iOS have proxy settings available in the wifi settings section)
(iOS supports proxy.pac files, and using them is recommended)
(Android does not support proxy.pac files)(settings need to be manually defined)

(Smoothwall can auto-generate proxy.pac and wpad.dat files)
(these files can be customised in the Web proxy -> Automatic configuration section)(exceptions can be added and regular expressions can also be used)

(some non-web applications can use system proxy settings and some don’t have settings at all)
(commonly they don’t support any authentication methods other than the basic proxy authentication)
(troubleshooting is difficult as no messages are generated)(use the transparent test proxy)

(when using a transparent proxy, applications may have some issues with https)(recommended to use a non-transparent proxy)