WEAKNESS OF AASHTO ASPHALT MIX DESIGN

 

Prior to SHRP, the mix designs in use were the Marshall and Hveem procedures. They were developed by user agencies and performed well for many decades; the Marshall design is still being used. The SHRP mix design was developed by academics who did not have the field experience of the state agencies. The universities have provided many great advances in paving; however, they lack the practical background of personnel with years of road-building experience, yet they often have the power to place academic theories into practice. Following are certain problems with the specifications.


Incorrect Use of Maximum Density Line.
The maximum density line shown in the specifications is based on the maximum aggregate size rather than the nominal size (the first screen size that retains aggregate). The aggregate retained between the maximum size and the nominal size acts in conjunction with the material between the nominal size and the next smaller screen size, as there is not enough of it to interlock. The maximum density line that actually pertains to the mix design runs from the nominal screen size to zero (using the 0.45 power of the sieve size on the x axis; note that Rudy Jiménez at The University of Arizona believed it should be the 0.50 power, that is, the square root, and he was probably correct). To make proper judgments about the gradation of the mix, one needs the maximum density line that corresponds to the actual aggregate to be used. I was taught this by Vaughn Marker when he was the Asphalt Institute Engineer in California. Properly used, it can prevent mix problems such as tender mixes and rutting.
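For readers who want the arithmetic, here is a minimal sketch (Python, not from the original post) of the 0.45-power maximum density line drawn from the nominal maximum size down to zero; the sieve sizes and the ½” nominal size are illustrative only.

```python
def max_density_passing(sieve_mm, nominal_max_mm, exponent=0.45):
    """Percent passing on the maximum density line: 100 * (d/D)^exponent."""
    return 100.0 * (sieve_mm / nominal_max_mm) ** exponent

# Illustrative sieve stack for a 1/2" nominal mix (sizes in mm)
sieves_mm = [12.5, 9.5, 4.75, 2.36, 1.18, 0.600, 0.300, 0.150, 0.075]
nominal_max = 12.5   # draw the line from the NOMINAL size, not the maximum size

for d in sieves_mm:
    print(f"{d:6.3f} mm  {max_density_passing(d, nominal_max):5.1f} % passing")

# Jimenez's suggestion would simply use exponent=0.50 (the square root).
```

Plotting the job-mix gradation against this line, rather than against one based on the maximum size, is what makes over-sanded gradations stand out.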

Forbidden Zone of the Gradation.  This was placed in the specification by academics using the maximum density line from the maximum size gradation, not the nominal size gradation. It also had no value with respect to quality.

Specifications Allow Over-Sanded Mixes. All mix designs allow gradations that will cause tenderness and accelerate rutting. If the proper maximum density line is used, such mixes are readily detected; that is not so with the worthless maximum density line in the present design procedure. Rutting is also highly dependent upon where the VMA in a mix comes from, which I will discuss in a future blog.

Asphalt Grading Specifications

The grading specification should be on the RTFO residue, as that is what is in the road. Also, the RTFO test should realistically approximate the properties of the asphalt in the mix in place. The RTFO was designed to mimic the increase in viscosity of asphalt mixed in a batch plant at 320°F with the oxygen partial pressure the same as air. Things are different in a drum mixer. If the air in the drum mixer is 4 times that needed to burn the fuel, the oxygen partial pressure will be decreased by 25% by the combustion, reducing the rate of oxidation. If moisture is present, the partial pressure of the oxygen will be further decreased, and if the mixer runs at a temperature less than 320°F, the rate of oxidation will be reduced further still.
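As a rough illustration of the partial-pressure argument above, here is a small sketch (Python; the ambient oxygen fraction is an assumed round number, and the 4x excess-air figure comes from the paragraph itself):

```python
O2_FRACTION_AIR = 0.21           # approximate mole fraction of O2 in ambient air (assumed)

excess_air_ratio = 4.0           # air supplied = 4x that needed to burn the fuel
o2_consumed_fraction = 1.0 / excess_air_ratio        # 1/4 of the O2 is consumed by combustion
o2_fraction_drum = O2_FRACTION_AIR * (1.0 - o2_consumed_fraction)

print(f"O2 partial pressure reduced by {o2_consumed_fraction:.0%}")       # 25%
print(f"Effective O2 mole fraction in the drum: {o2_fraction_drum:.3f}")  # about 0.16
# Moisture in the gas stream dilutes the O2 further, and mixing below
# 320 F slows the oxidation rate further still, as noted above.
```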


QUALITY CONTROL II

Student's “t”

The purpose of this and previous blogs is to show that statistics is basically linear algebra and intrinsically simple. Thus in previous posts I showed that the data gathered can be expressed simply as lines, the lengths of which represent the mean and the standard deviation of the data respectively. It was also established that the two lines are independent of one another. If we knew that they absolutely represented the true values of each, we would be through. However, that is not the case. The mean is always somewhat “fuzzy” in that we can't be sure that what we measured is the true value. Measuring that uncertainty is where things get complicated. There is usually uncertainty in the standard deviation as well, but not always. Facilities that routinely manufacture a product may have enough data on their standard deviations to be able to assume the values represent the true ones.

True Value of the Standard Deviation is Known.

 In this case the normal distribution is all that is needed to evaluate the uncertainty of the mean.

 

True Value of the Standard Deviation is Not Known

Usually standard deviations are also fuzzy; thus both the mean and the standard deviation can be considered random variables. While the mean is normally distributed, the square of the standard deviation (the variance) is distributed according to the chi-squared distribution. (The square of a normally distributed variable follows the chi-squared distribution with one degree of freedom.) However, the distribution that we want is that of the mean divided by the standard deviation, both of which are random variables.

Derivation of the “t” Distribution

The complication is that what we need now is the distribution of the normal distribution divided by the square root of the chi-square distribution. The equation for that is:

f(z) = ∫ |x| g(zx) h(x) dx

where g(zx) is the normal density evaluated at zx, h(x) is the density of the square root of the chi-square variable, and z follows the “t” distribution.
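As a sanity check on that idea, here is a small Monte Carlo sketch (Python, not from the original post): a standard normal variate divided by the square root of a chi-square variate, scaled by its degrees of freedom as in the usual definition, does follow Student's “t”. The degrees of freedom and sample count are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
df = 5                                    # degrees of freedom (arbitrary)
z = rng.standard_normal(200_000)          # normal numerator
chi2 = rng.chisquare(df, 200_000)         # chi-square denominator

t_samples = z / np.sqrt(chi2 / df)        # ratio that should be t-distributed

ks = stats.kstest(t_samples, "t", args=(df,))           # compare with the theoretical t(df)
print(f"KS statistic vs t({df}): {ks.statistic:.4f}")   # small value => close match
```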

The derivation of the “t” distribution may be found in Vijay K. Rohatgi, Statistical Inference, John Wiley & Sons, 1984.

So we see that while the basic concepts of statistics are simple, the problem of uncertainty is complex.

QUALITY CONTROL

Reliability of Data

In a previous entry I showed that the basic concepts of quality control, which depends upon the laws of probability (statistics), are surprisingly simple. All that we are trying to do is measure lengths of lines. The equations used to calculate the mean and standard deviation are those that describe only two lines so that no matter how many samples are tested, the calculations of those parameters result in just those two lines which are independent of each other. While “n” data points occupy “n” dimensions, the mean and standard deviation occupy only two. We can use the standard deviation as the ruler to measure the lengths of interest.

What makes things difficult is the fuzziness of those lines. In quality control the first thing we want to determine is the length of the distance from the measured length (sample mean) to some desired length. To do that we use a ruler in which the standard deviation is set to be one. For convenience, and because the standard deviation is defined as the second moment around the mean, the targeted mean is subtracted from the data points so that the resulting length of the data vector is reduced to the difference between the sample mean and the target. That length is then divided by the standard deviation. The resulting length is then measured not in inches or millimeters but rather in units of the standard deviation ruler. As an example, assume that 100 was the target value, the measured mean was 85 and the standard deviation was 10. We are not interested in what the actual measured mean is, but rather how close it is to the target, based upon the standard deviation ruler:

Example 1: (100 − 85)/10 = a distance of 1.5 SD units. In some cases the measurement is not from a desired target, but to upper and lower limits.

However, the mean value is fuzzy, and the standard deviation may or may not be. The data generated in calculating the mean make up a random variable (X = (x1, x2, …, xn)) in vector space. How fuzzy the mean is depends upon the length of the SD and the type of distribution. While there are many distributions, if the SD is not fuzzy, what is called the normal distribution is often used. Because of the uncertainty in the mean, the distribution function tells us the chances of the mean actually being somewhere else. In example 1, with only the mean being fuzzy and using the normal distribution, we can say that there is only a 6.68% chance that the true mean of the data is at or beyond the desired target.
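The 6.68% figure can be reproduced directly from the normal distribution; a minimal sketch (Python/SciPy) using the numbers from example 1:

```python
from scipy import stats

target, mean, sd = 100.0, 85.0, 10.0
distance_sd_units = (target - mean) / sd          # 1.5 SD units, as in example 1
p = 1.0 - stats.norm.cdf(distance_sd_units)       # one-tailed probability beyond 1.5 SD
print(f"{distance_sd_units:.1f} SD units -> {p:.2%}")   # about 6.68%
```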

Unfortunately, the SD is often fuzzy too and is thus also a random variable. The square of the SD is called the variance and has its own distribution function, called the chi-squared distribution. While the normal distribution is independent of the number of data points defining the random variable, the form of the chi-squared distribution depends upon the degrees of freedom. (The square of a normally distributed variable follows the chi-squared distribution with one degree of freedom.) That distribution may be used to determine whether two measured standard deviations are really the same.
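The post does not give a worked example, but one standard form of that chi-squared check compares a measured standard deviation against a reference value; here is a minimal sketch (Python/SciPy) with made-up numbers:

```python
from scipy import stats

n, s, sigma0 = 15, 12.0, 10.0                  # sample size, sample SD, reference SD (made up)
chi2_stat = (n - 1) * s**2 / sigma0**2         # (n-1)s^2/sigma0^2 follows chi-square(n-1)
p_value = 1.0 - stats.chi2.cdf(chi2_stat, df=n - 1)    # one-tailed p value
print(f"chi-square = {chi2_stat:.1f}, one-tailed p = {p_value:.3f}")
```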

How the fuzziness or uncertainty is handled will be covered later. Although the mathematics gets more complex, especially when multivariate sets of data must be considered, the goal is still to simply measure lengths with a specific ruler.

 

ASPHALT QUALITY CONTROL

Means and Standard Deviations as Lengths

When we talk about quality control we hear about distributions, such as the Poisson, hypergeometric, binomial, normal, “t”, chi-squared and “F”. How complicated! And we are told to worry about things being independent, and are inundated with words like variance, mean, median, mode, standard deviation, whether the data are homoscedastic or heteroscedastic (whether the standard deviation is constant or not), confidence limits, and such things as Type I error, Type II error, the null hypothesis, etc. It cannot be denied that all of these have their place. However, to get to the basics, all we are really trying to do is measure lengths. Statistics is really just analytical geometry or linear algebra, depending on one's outlook. Let's look at the mean and standard deviation.

Mean (one type of average). We are told that it is the first moment around the origin.

Mathematically it is the integral of x·f(x)dx between some limits, where f(x) is some distribution function. Yet it is still a length.

Consider a set of “n” data points, X = (x1, x2, …, xn). Then visualize a graph of n dimensions with a single location, X, representing those data. Also visualize a line in that n-dimensional space that is equidistant from each axis, i.e., it goes through (1, 1, …, 1). Drop a line perpendicular from X to that equidistant line and call the foot of that perpendicular M = (µ, µ, …, µ). Finally, divide every coordinate by the square root of n, the number of data points, so that the number of tests enters into the lengths.

The line (δ) from X to M would be the vector (x1 − µ, x2 − µ, …, xn − µ), while the line (µ) from the origin to M would be the vector (µ, µ, …, µ). Since the two lines are perpendicular, their scalar (or inner, or dot) product would be zero:

(µ, µ, …, µ) · (x1 − µ, x2 − µ, …, xn − µ) = µ(x1 + x2 + … + xn − nµ) = 0

Dividing through by µ: x1 + x2 + … + xn − nµ = 0

µ = (x1 + x2 + … + xn)/n, which is identical to the formula for the mean.

That is, the length of the line µ from the origin to M is equal in value to the mean of the data points.

Standard Deviation. The length of the line, δ, from X to M is the square root of (1/n)·((x1)² + (x2)² + … + (xn)² − nµ²). Here (1/n)·((x1)² + (x2)² + … + (xn)²) is the square of the length of the line from the origin to the data point X, while (1/n)·(nµ²) is the square of the length from the origin to the point M.

δ = ((1/n)·((x1)² + (x2)² + … + (xn)² − nµ²))^0.5

Thus the equation for the length of the line δ is identical to one of the equations used for calculating the standard deviation (where the standard deviation is not a random variable; if the sample standard deviation (s) is a random variable, 1/n would be replaced with 1/(n−1)).
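Here is a short numerical check of the geometry described above (Python/NumPy; the data points are made up):

```python
import numpy as np

x = np.array([3.0, 7.0, 8.0, 2.0, 5.0])       # made-up data points
n = len(x)
u = np.ones(n) / np.sqrt(n)                    # unit vector along the equidistant line (1, 1, ..., 1)

projection_length = x @ u                      # length from the origin to M (before the 1/sqrt(n) scaling)
mu = projection_length / np.sqrt(n)            # after scaling, equals the ordinary mean
delta = np.linalg.norm(x - mu) / np.sqrt(n)    # length of the line from X to M, after scaling

print(mu, np.mean(x))                          # identical
print(delta, np.std(x))                        # identical (population form, 1/n)
```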

Rulers. To measure lengths we need a ruler. We use miles in the United States, in Canada they use kilometers, while in Russia the verst may be used. In statistics the ruler used is the length “δ” if the standard deviation is known, or “s” if the standard deviation is a random variable.

The many terms mentioned above and the sophistication of the mathematics are important in establishing the reliability of the data; still, basically we are only measuring lengths.

UNRELIABILITY OF PG GRADING SYSTEM

Superiority of the AR Grading System

 

AR Grading. The Aged Residue (AR) grading system, used in the western part of the United States for decades, grew out of the fact that the asphalts in this area differed greatly. While various grades were in use, the workhorse grade was AR 4000, which meant that the asphalt in the pavement, irrespective of crude source, would have the same consistency. AR 4000 meant that the viscosity at 60° C of the asphalt after the RTFO test would be 3000 (2500 in Washington) to 5000 poises. A viscosity of 4000 poises was selected because it was found that at 4000 poises tenderness in oversanded mixes was easier to handle. 60° C is used because in most cases that is about the highest temperature the pavement reaches, although in the deserts it can reach considerably higher temperatures. On the other hand, the viscosity at 60° C after the RTFO of equivalent asphalts graded by the AC grading system (2000 ± 400 poises based on original viscosity at 60° C) or by the penetration grading system (85/100 based on penetration at 25° C) can vary greatly. For the 85/100 penetration grade, the 60° C viscosity after the RTFO of the asphalts evaluated during the development of the AR grading system varied from about 1600 to over 7000 poises. For an AC 2000 grade asphalt, the probable viscosity after RTFO aging would range over about 4000-8000 poises, depending on the crude source. The equivalent PG grade is PG 64-XX.

PG Grading. There is an astounding number of PG grades, 7, with up to 6 subgrades within each grade based upon low-temperature properties. If there were consistency within the grades it might make sense, but we have regressed even beyond the AC grading system. These grades were set up primarily to control tenderness and rutting, even while leaving the gradation specification so open that gradations that would allow grievous rutting are included. The equivalent PG grade is based upon the dynamic shear test, a G*/sinδ of 1.00 kPa minimum at 64° C with no maximum. For a sinδ of 1.00 (close to that of unmodified asphalt) the corresponding viscosity is 1000 poises. The G*/sinδ value from the RTFO test would be 2.20 kPa minimum, or 2200 poises with sinδ = 1.00, and again there is no maximum. Sinδ for modified asphalts is less than one, which drops the specification minimum viscosity below that of non-modified asphalt. In other words, for the asphalt as placed in the pavement, the AR 4000 specification is 3000-5000 poises at 60° C, while for PG 64-XX the in-place viscosity at 64° C can vary from somewhat less than 2200 poises to as high as one wishes.
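The 1000- and 2200-poise figures above follow from a simple unit conversion, assuming the standard DSR test frequency of 10 rad/s and sinδ = 1.00; a minimal sketch:

```python
def kpa_to_poise(g_star_kpa, omega_rad_s=10.0):
    """Convert |G*| in kPa to viscosity in poise: eta = G*/omega, and 1 Pa.s = 10 poise."""
    g_star_pa = g_star_kpa * 1000.0
    eta_pa_s = g_star_pa / omega_rad_s
    return eta_pa_s * 10.0

print(kpa_to_poise(1.00))   # original binder criterion -> 1000 poise
print(kpa_to_poise(2.20))   # RTFO residue criterion    -> 2200 poise
```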

 

 

Philosophical Inconsistency of the PG Grading System. I am only addressing the grading system, not the value of the low-temperature specification. I am not suggesting that there is anything wrong with the use of the DSR, as it is a handy tool. I am suggesting that the grading should have been based upon the consistency of the RTFO residue, whether viscosity tubes or the DSR are used. The value of the DSR data is that we can get information about the effect of polymer modification from the phase angle, delta (δ).

We have shown above that the range of the allowed viscosity from the RTFO test within any particular PG grade is greater than that of any previous grading system, even though there are 7 specific grades in order to control rutting. The implication is that controlling rutting requires fine tuning. Yet, at the same time, there is a movement to use warm mixes, one consequence of which is that the asphalt will have a considerably lower viscosity than the grade intends.

Controlling Rutting. The prime control of tenderness and rutting should be with aggregate gradation.  As long as the gradation specification allows badly oversanded mixes, rutting will be a problem.

Robert L. Dunning, chemistdunning@gmail.com, www.petroleumsciences.com

REDUCING HOT MIXED ASPHALT COSTS

Controlling Voids in Mineral Aggregate (VMA)

Considerable effort is being made to reduce the cost and the amount of hydrocarbons that go into hot mixed asphalt (HMA) pavements. One such effort is to find ways to mix and compact at lower temperatures, thus reducing the amount of fuel required. However, hydrocarbons can also be saved by reducing the amount of asphalt used, as asphalt can also be sold as a component of heavy fuel oil or cracked to make diesel, gasoline, etc.

Mix Design.

Irrespective of the type of mix design or the amount of modification of the asphalt, the basic properties required for an acceptable product remain the same. If we get down to basics, we want the gradation to be such that it inhibits rutting, we want enough material around the #30 sieve size that there isn't a deficiency in that area, and we want the composition of the binder to be such that the film thickness is somewhere between 7 and 10 microns (based upon our experience; Idaho specifies 6 microns as a minimum) and, for a ½” nominal design for example, an effective asphalt content of 4-5%.
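The 7-10 micron film thickness target can be illustrated with the common surface-area method; the formula, the basis of the asphalt percentage, and the surface-area and specific-gravity values below are assumptions for illustration, not from the original post:

```python
def film_thickness_microns(pct_eff_asphalt, surface_area_m2_per_kg, binder_sg=1.02):
    """Film thickness = effective binder volume / aggregate surface area.

    pct_eff_asphalt: effective asphalt, % by mass of aggregate (assumed basis)
    surface_area_m2_per_kg: aggregate surface area from gradation-based factors
    binder_sg: asphalt specific gravity (assumed)
    """
    binder_mass_kg = pct_eff_asphalt / 100.0                 # per kg of aggregate
    binder_volume_m3 = binder_mass_kg / (binder_sg * 1000.0)
    return binder_volume_m3 / surface_area_m2_per_kg * 1e6   # meters -> microns

print(film_thickness_microns(4.5, 5.0))   # roughly 8.8 microns, inside the 7-10 micron range
```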

Trade-off between % Asphalt and VMA. As the VMA increases, the % asphalt required increases at a rate of about 0.25% per each percent of increase in VMA, the exact amount depending on the actual specific gravities of the aggregate and asphalt. For a 400 ton per hour plant, a reduction in VMA of 1% would reduce the asphalt by one ton per hour, a savings of $500/hour if asphalt is $500/ton.
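A sketch of that cost arithmetic, using the same figures (400 ton/hour plant, $500/ton asphalt, about 0.25% asphalt per 1% of VMA):

```python
asphalt_per_vma_pct = 0.25     # % asphalt saved per 1% reduction in VMA (rule of thumb above)
plant_rate_tph = 400.0         # plant production, tons of mix per hour
asphalt_price = 500.0          # $/ton of asphalt

vma_reduction = 1.0            # percentage points of VMA removed
asphalt_saved_tph = plant_rate_tph * (asphalt_per_vma_pct * vma_reduction) / 100.0

print(f"Asphalt saved: {asphalt_saved_tph:.1f} ton/hr, "
      f"worth ${asphalt_saved_tph * asphalt_price:,.0f}/hr")   # 1 ton/hr, $500/hr
```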

Silliness of the “Forbidden Zone”. Some Superpave gradation specifications have a “forbidden zone” through which the gradation must not pass. It is supposed to lie on the maximum density line (on the 0.45-power gradation curve) of the aggregate; however, in addition to being silly, it doesn't even fall on the actual maximum density curve for the job mix formula.

Effect of RAP on VMA. With the introduction of Superpave, the minimum VMA, which used to be 13% when one was specified, was increased to 14%. We were having problems making the 14% with granite aggregate and found that we had to control this by blowing out -#200 material. On one project I used a factorial experimental design, which allows evaluating the effect of numerous variables on mix properties, to aid in adjusting the gradation, with considerable success. Of course, saving money by reducing the VMA was not an option. With the introduction of RAP, however, the VMAs rose by as much as 2%, requiring as much as 0.5% more total asphalt (including that in the RAP).

Reducing VMA to Reduce Cost

A number of years ago I did a Gram-Schmidt orthogonalization on gradation data. I found that there were only three truly independent variables, one of which was the % -#200 material. By using three independent aggregate criteria and % asphalt as a fourth variable, we should be able to determine what changes should be made in the mix to minimize the VMA within the specification criteria, thus minimizing cost. I would suggest the use of a 2⁴ factorial design with a triplicate centerpoint to find the most economical gradation, as sketched below. The following would be for a ½” nominal mix design. For variables I would use: 1) the % of the gradation between the ½” and the #4 screens; 2) the % of the gradation between the #4 and the #30; 3) the % -#200; and 4) the % asphalt. We have found that Hveem compaction at the recommended compaction temperatures for a 75-gyration Superpave design gives the same results as gyratory compaction. We would therefore suggest that this be done with the Hveem compactor, as it uses only one quarter as much aggregate and asphalt as the 6” gyratory design; however, gyratory compaction could be used. The advantage of the Hveem compactor is that we also get the stability as a bonus. I would stipulate one boundary limit: no gradation point should be above a line on the gradation curve (0.45-power graph) from the % passing the first sieve that retains aggregate (½”) to the % passing the #200 sieve. This would provide the information needed to minimize the VMA within the specification. The results could provide the starting gradation and asphalt content for a gyratory design.
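Here is a minimal sketch (Python, illustrative only) of the 2⁴ factorial design with a triplicate centerpoint described above, written in coded low/high/center levels; the actual gradation and asphalt ranges for each factor would come from the job mix formula.

```python
from itertools import product

factors = ["pct 1/2 in. to #4", "pct #4 to #30", "pct passing #200", "pct asphalt"]

# 16 corner runs at coded levels -1/+1, plus 3 replicate centerpoint runs at 0
runs = [dict(zip(factors, levels)) for levels in product((-1, +1), repeat=4)]
runs += [dict(zip(factors, (0, 0, 0, 0))) for _ in range(3)]

print(len(runs))        # 19 mixes to batch, compact, and test
for r in runs[:3]:      # first few runs, for illustration
    print(r)
```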

Decreasing the VMA from 16.5% to 14.5% for 100,000 tons of mix would save $250,000 at $500/ton asphalt.

Petroleum Sciences, Inc. has the equipment and the mathematical knowledge (there is considerable mathematics involved) to provide this service should a contractor wish to reduce costs. We can set up the experiment to be done in the contractor's own facility and then evaluate the results, or do the complete project in our facilities.

Robert L. Dunning, www.petroleumsciences.com, chemistdunning@gmail.com