Fluent

This tutorial covers how to use fluent from a command line with the end goal of running fluent jobs on the Batch cluster. It covers command line syntax, journal files, and output. See also checking fluent licensing and spanning fluent across nodes for additional information.


Command Line Syntax

To run fluent in a batch environment, fluent must be invoked from a Linux shell environment like bash.

An example fluent shell command:

fluent research -g 3ddp < myproblem_01_input.jou > myproblem_01_output.txt

Syntax argument detail:

research

The licensing argument, shown above as "research", is unique to the CAEDM environment. The three possible values are "research", "classwork", or "pace", representing the three licenses available for fluent. Note that there are no classwork or PACE parallel licenses for fluent. The examples here assume a research job and use multiple processes (a parallel job).

-g 

The -g option tells fluent to run without a gui (graphical user interface).

3ddp

3ddp selects the three-dimensional, double-precision solver and is used in this example, but 3d, 2d, or 2ddp may be used depending on the geometry and the precision needed.

< myproblem_01_input.jou 

The "<" operator tells Linux to redirect the contents of the file "myproblem_01_input.jou" as input to the invoked command, which in this case is "fluent". Another example of using the input redirection operator would be:

mail -s "Important Letter" username@domain.com < letter.txt

Here, the contents of the file letter.txt are sent as input to the program "mail", which emails the plain-text contents of letter.txt to username@domain.com with the subject "Important Letter".

Redirecting the journal file to the binary "fluent" sends all of the customized journal file command code in "myproblem_01_input.jou" to be processed. Journal file syntax and examples are discussed below.
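The redirection behavior can be seen with any command that reads standard input. Here is a small, fluent-free sketch using wc:

```shell
# A demonstration of the "<" operator: wc -l reads its input from standard
# input and counts lines, just as fluent reads journal commands from
# standard input.
printf 'first line\nsecond line\n' > demo_input.txt
wc -l < demo_input.txt    # prints: 2
```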

> myproblem_01_output.txt

Similar to the "<" operator discussed above, the ">", or output redirection operator, tells Linux what to do with the output from a command. In this example, all of the text that would have been displayed in the shell, or command window, is redirected to the text file "myproblem_01_output.txt". This output file is for troubleshooting and informational purposes only! Fluent does not read the output file back, so it may be deleted at any time. The following is an example output file from a fluent run:

Example myproblem_01_output.txt file

   Initiating licensing for research...
   /uxapps/Fluent.Inc/fluent6.3.26/bin/fluent -r6.3.26 3ddp -g
   Loading "/uxapps/Fluent.Inc/fluent6.3.26/lib/fluent.dmp.114-64"
   Done.
   /uxapps/Fluent.Inc/fluent6.3.26/bin/fluent -r6.3.26 3ddp -alnamd64 -path/uxapps/Fluent.Inc -cx ct75c2b02.et.byu.edu
   :34188:39963
   Cleanup script file is /auto/grp1/mygroupspace/cleanup-fluent-ct75c2b02-25309
   >
   Reading "myproblem_01.cas"...
      45000 hexahedral cells, zone  2, binary.
       1800 quadrilateral periodic faces, zone  4, binary.
        750 quadrilateral wall faces, zone  5, binary.
       1500 quadrilateral symmetry faces, zone  6, binary.
       1500 quadrilateral symmetry faces, zone  7, binary.
        125 quadrilateral wall faces, zone  8, binary.
         25 quadrilateral wall faces, zone  9, binary.
        500 quadrilateral wall faces, zone 10, binary.
        100 quadrilateral wall faces, zone 11, binary.
     130950 quadrilateral interior faces, zone 13, binary.
       1800 quadrilateral shadow faces, zone  3, binary.
       1800 shadow face pairs, binary.
      49166 nodes, binary.
      49166 node flags, binary.
   Building...
        grid,
        materials,
        interface,
        domains,
           mixture
        zones,
           default-interior
           wall.2
           wall.3
           wall.4
           wall.5
           symmetry.6
           symmetry.7
           periodic.1
           wall.8
           fluid.9
        shell conduction zones,
   Done.
   Reading "myproblem_01.dat"...
   Done.
   >   iter continuity x-velocity y-velocity z-velocity  monitor-1     time/iter
    202100 3.5973e-19 4.1638e-11 1.6782e-10 4.4353e-07 3.5560e+02  0:00:00 20000     
   ***************************************************************************************
   * hundreds of lines of iteration information removed for ease of reading wiki article *
   ***************************************************************************************
    252100 2.2913e-19 2.5158e-11 1.0139e-10 2.6798e-07 3.9055e+02  0:00:00    0
   > The following files already exist:
        "myproblem_01.cas"
        "myproblem_01.dat"
   OK to overwrite? [no]
   Writing "myproblem_01.cas"...
      45000 hexahedral cells, zone  2, binary.
       1800 quadrilateral periodic faces, zone  4, binary.
        750 quadrilateral wall faces, zone  5, binary.
       1500 quadrilateral symmetry faces, zone  6, binary.
       1500 quadrilateral symmetry faces, zone  7, binary.
        125 quadrilateral wall faces, zone  8, binary.
         25 quadrilateral wall faces, zone  9, binary.
        500 quadrilateral wall faces, zone 10, binary.
        100 quadrilateral wall faces, zone 11, binary.
     130950 quadrilateral interior faces, zone 13, binary.
       1800 quadrilateral shadow faces, zone  3, binary.
       1800 periodic face pairs, binary.
      49166 nodes, binary.
      49166 node flags, binary.
   Done.
   Writing "myproblem_01.dat"...
   Done.

Journal File Syntax and Examples

A fluent journal file executes fluent commands exactly as if they were issued from the command window within the fluent gui. In this case, the commands run without a gui, with the output saved to files. Before writing and executing a journal file entirely from the command line, it is good practice to first become familiar with the fluent text user interface command window within the gui. For help with text user interface syntax, reference this tutorial: http://www.cfd-online.com/Wiki/Fluent_FAQ. Once comfortable with the syntax, write a list of commands in a plain-text file and save it in the same folder as the associated .cas file(s).

Example journal file (myproblem_01_input.jou)

   file read-case-data myproblem_01.cas
   solve iterate 1000
   file write-case-data myproblem_01.cas
   yes
   solve iterate 1000
   file write-case-data myproblem_01.cas
   yes
   solve iterate 1000
   file write-case-data myproblem_01.cas
   yes
   solve iterate 1000
   file write-case-data myproblem_01.cas
   yes
   exit

This journal file tells fluent to open a case file, iterate 1000 times, and save the work; this is repeated several times. It is good practice to alternate iterating and saving. If fluent is told to iterate a million times but the power goes out after 999,999 iterations, all of the computed results are lost if they were never saved! Saving every hour or so avoids losing results, but saving too often (like every second) would slow fluent down with constant writes to the disk (a home or group space on the network).
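Since the journal above is just repeated text, it can also be generated with a small shell loop. This sketch reproduces the example file (the filenames and chunk size come from the example above; adjust both for your own job):

```shell
# Build a journal that reads the case, then runs four iterate/save cycles.
# Each "yes" answers fluent's prompt about overwriting the existing files.
{
  echo "file read-case-data myproblem_01.cas"
  for i in 1 2 3 4; do
    echo "solve iterate 1000"
    echo "file write-case-data myproblem_01.cas"
    echo "yes"
  done
  echo "exit"
} > myproblem_01_input.jou
```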

Running fluent from a shell for the first time

To get started:

1. Test a set of text user interface commands in the command window within the fluent gui
2. Place the desired commands in a plain-text .jou file. Keep the iteration count low; this initial journal file is for quick testing, to make sure the syntax is correct and fluent can find all the necessary files.

Example test journal file (myproblem_01_input.jou)

   file read-case-data myproblem_01.cas
   solve iterate 10
   file write-case-data myproblem_01.cas
   yes
   exit

3. For convenience, place the journal file, case files, and data files all in the same directory. Otherwise, specify the full path to each file from within the journal file and from the command line. If everything is in the same directory, change to that directory using "cd" before invoking fluent.

4. Start fluent from any CAEDM Linux machine with syntax such as:

fluent research -g 3ddp < myproblem_01_input.jou > myproblem_01_output.txt

If everything above was done correctly, fluent should be iterating. Use "top" or "ps aux | grep fluent" to verify the fluent process is running. Depending on how many iterations were specified and how long each takes, the fluent process should be around for a few seconds to a few minutes. When it finishes (the processes no longer show up in top or ps), there should be output in myproblem_01_output.txt, and new data to view in fluent in myproblem_01.dat. If all of this has worked, increase the number of iterations in the journal file and learn to submit the fluent command line to the batch queue.
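The ps check is a little cleaner with the bracket trick, which keeps the grep command itself out of the results. It is demonstrated here with a stand-in sleep process, since these lines cannot assume a fluent job is running; substitute fluent for sleep when checking a real run:

```shell
# Start a stand-in background process, then look for it the way you would
# look for fluent. The [s] in the pattern stops grep from matching its own
# command line in the process listing.
sleep 5 &
PID=$!
ps aux | grep '[s]leep 5'
kill "$PID"    # clean up the stand-in process
```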

  • qsub submits jobs to the queue
  • qdel deletes a job from the queue
  • qstat gives the status of the queue
  • qstat -n gives the status of the queue, and displays what jobs are running on which machines (nodes)

These commands are available from any CAEDM Linux machine.

To submit a job, you must create a .pbs file and submit it using qsub. Here is an example .pbs file:

Example submit_myproblem_01.pbs

#!/bin/sh
#PBS -l nodes=1:ppn=1
#PBS -l walltime=96:00:00,cput=96:00:00
#PBS -l mem=4gb
#PBS -N my_username_test_job
#PBS -V
sleep 400

Allow me to explain each part of this file:

#!/bin/sh

Lets the system know that this script should be interpreted by sh.

#PBS -l nodes=1:ppn=1

Specifies how many nodes to reserve and how many processors to reserve on each node.

#PBS -l walltime=96:00:00,cput=96:00:00

Specifies how long the job may run, in both wall clock time and cpu time. The format is HH:MM:SS.

#PBS -l mem=4gb

How much RAM to reserve.

#PBS -N my_username_test_job

The name of the job you're running.

#PBS -V

Makes the environment variables from the system where the job was submitted available to the job.

sleep 400

This is the command to be run on the remote machine. In this case, you instructed the remote batch node to run the sleep command, which tells the computer to do nothing for 400 seconds. This is a great command for checking whether the batch system is working properly.

We are now going to try submitting a sleep job to the batch queue, just to get the hang of it. To do this, create a text file, paste the above contents of submit_myproblem_01.pbs into it, save it as test.pbs, and then issue qsub test.pbs.
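The file can be written in any text editor, or straight from the shell with a heredoc; this sketch just reproduces the example file above:

```shell
# Write the example .pbs file with a quoted heredoc (the quotes around EOF
# keep the shell from expanding anything inside the file body).
cat > test.pbs <<'EOF'
#!/bin/sh
#PBS -l nodes=1:ppn=1
#PBS -l walltime=96:00:00,cput=96:00:00
#PBS -l mem=4gb
#PBS -N my_username_test_job
#PBS -V
sleep 400
EOF
```

Then issue qsub test.pbs as before.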

It should output something like 657.torque.et.byu.edu. This tells you that your job has been submitted and that it has a job id of 657. You can check on your job by using:

qstat -n

You should see output like:

Job id         Name                 User            Time Use S Queue
-------------- -------------------- --------------- -------- - -----
1816.torque    fluent_another_user  another_user    280:17:3 R batch    
1827.torque    my_username_test_job my_username            0 R batch   

Do not be alarmed if you see other users' jobs in the queue. We have allowed everyone to see each other's jobs so you can get an idea of the current usage on the cluster. You will be able to look at other users' jobs, but will not be able to alter them in any way.

If your job shows up here and is running, you're ready to submit a fluent test job!

Now use your journal file (myproblem_01_input.jou), the one with 10 or so iterations in it, to create a fluent batch job that can finish in a few minutes.

To do this, we launch fluent using a .pbs file, just as we used the Batch cluster to schedule the sleep 400 command above. All we need to do is build the appropriate .pbs file and submit the job.

The .pbs file will look like this:

example fluent_test_job.pbs

   #!/bin/sh
   #PBS -l nodes=1:ppn=1
   #PBS -l walltime=96:00:00,cput=96:00:00
   #PBS -l mem=4gb
   #PBS -N my_username_fluent_test_job
   #PBS -V
   fluent research -g 3ddp < myproblem_01_input.jou > myproblem_01_output.txt

end of example file

Notice that the last line of this file is the same command that you have issued throughout this article on your local machine for testing. All we are doing now is having the Batch cluster decide which node in the cluster will run your command, and having the scheduler run it with the options specified in the .pbs file. We now submit the job using qsub fluent_test_job.pbs, just as we did above when we submitted the sleep job. The difference now is that we can check our home folder for the myproblem_01_output.txt file and, hopefully, updated case and data files as well.

It is a good idea to use fully qualified (absolute) paths in your fluent command lines. This way the computer knows exactly where the input and output files are, so there is no confusion. An example would be:


fluent research -g 3ddp < /fse/testuser/hw1/myproblem_01_input.jou > /fse/testuser/hw1/myproblem_01_output.txt

Where testuser is your username, and fs(*) is the fileserver where your homespace is located. To find the complete path to your homespace, type pwd from the folder where your fluent files are located, and the full path will be printed to the screen. It should be something like /fse/username/a_subfolder for a folder in your homespace, or /grp3/groupname/a_subfolder for a folder in a groupspace.
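A small sketch of building a fully qualified path from pwd (the journal filename comes from the examples above; run this from the folder holding your fluent files):

```shell
# Capture the current folder's full path, then prepend it to a filename so
# the fluent command line can reference the file unambiguously.
JOBDIR=$(pwd)
printf '%s\n' "$JOBDIR/myproblem_01_input.jou"
```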


I mentioned earlier that you need a .pbs file if your job requires more than one processor, and I will now explain how to go about this. To tell fluent to use more than one processor, use the -t option. For example:

fluent research -g 3ddp -t4 < myproblem_01_input.jou > myproblem_01_output.txt

will launch fluent with 4 processes; -t12 would use 12 processes.

Now you may recall that in the .pbs file there is a line that states:

#PBS -l nodes=1:ppn=1

This line says to use one processor on one node. To use 4 processors on 1 node, we would change the line to:

#PBS -l nodes=1:ppn=4

For clarity: when we pass fluent the -t option, we are stating how many fluent processes to start; when we change the ppn entry in the .pbs file, we are changing how many processors to reserve.

For example, if we issue

fluent research -g 3ddp -t12 < myproblem_01_input.jou > myproblem_01_output.txt

but only specify:

#PBS -l nodes=1:ppn=4

Then the scheduler will reserve one node with 4 processors, and fluent will start 12 processes on that node. The job will take FOREVER to finish, because a job meant to use 100% of 12 processors is running on a 4-processor box; it will overrun the machine and be deathly slow.

If on the other hand we invoke

fluent research -g 3ddp -t3 < myproblem_01_input.jou > myproblem_01_output.txt

But specify

#PBS -l nodes=1:ppn=4

Then we will have 3 processes running on a machine with 4 processors reserved. This will finish quickly, but it is a waste of resources, because someone else could be using that other processor. Try to match the number of processes started to the number of processors reserved.
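One way to keep the two numbers in step is to derive the -t count inside the .pbs script itself. This sketch assumes the scheduler exports $PBS_NODEFILE (a file with one line per reserved processor slot), which Torque/PBS normally does; the node file is simulated here so the snippet runs anywhere:

```shell
# Simulate the node file the scheduler would provide: 4 reserved slots.
printf 'node1\nnode1\nnode1\nnode1\n' > nodefile_demo
PBS_NODEFILE=nodefile_demo

# Count the reserved slots and hand that number to fluent's -t option, so
# the processes started always match the processors reserved.
NPROCS=$(( $(wc -l < "$PBS_NODEFILE") ))
echo "fluent research -g 3ddp -t$NPROCS < myproblem_01_input.jou > myproblem_01_output.txt"
```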