International Conference on Emerging Trends in Mechanical Engineering ICEME-2014
ISBN: 978-93-82163-09-1, Volume 1, 24th and 25th February 2014.
Optimization of surface roughness in the roller burnishing process using response surface methodology and desirability function
K Saraswathamma, G Venkateswarlu, S Venkatarami Reddy
K. Saraswathamma, Department of Mechanical Engineering, University College of Engineering, Osmania University, Hyderabad, India. E-mail:
[email protected]
G. Venkateswarlu, Department of Mechanical Engineering, University College of Engineering, Osmania University, Hyderabad, India.
S. Venkatarami Reddy (PG Student), Department of Mechanical Engineering, University College of Engineering, Osmania University, Hyderabad, India.
Keywords
Roller burnishing, speed, feed, interference, surface roughness, response surface methodology, desirability function.
Abstract
In the present study, an optimization strategy based on the desirability function approach (DFA) together with response surface methodology (RSM) has been used to optimize the roller burnishing process of 6063 aluminium alloy. A quadratic regression model was developed to predict surface roughness using RSM with a Box-Behnken design. In the development of the predictive model, burnishing speed, feed and interference were considered as model variables. The results indicated that interference and burnishing speed were the significant factors affecting surface roughness.
INTRODUCTION
Fine machining of cylindrical surfaces exposed to high service loads has to ensure acceptable surface quality and long functional life. In recent decades, the selection of a process to satisfy these demands has additionally been driven by short cycle times and low-pollution priorities. These production demands mean that machining processes with low waste and low energy consumption are required, while long cycles of fine machining have to be avoided. Burnishing is one of the methods used to achieve fine surface quality.

Burnishing is a post-machining finishing process. It produces a good surface finish, induces compressive residual stresses, and improves micro-hardness, corrosion resistance and fatigue life. Burnishing is simple and economical, and needs less time and skill to obtain a high-quality surface finish. After a primary machining process, the surface of the workpiece consists of peaks and valleys of irregular height and spacing. Burnishing plastically deforms the material in the peaks, which cold-flows under pressure into the valleys.
The literature survey indicates that earlier investigations concentrated on the effect of the roller burnishing process mostly on surface roughness, with little focus on optimization of the roller burnishing parameters. A. Stoic et al. [3] investigated the fine machining efficiency of 34CrMo4 steel with roller burnishing tools. Malleswara Rao J.N., Chenna Kesava Reddy A. and Rama Rao P.V. [4] investigated the effect of burnishing force and number of tool passes on the surface hardness and surface roughness of mild steel specimens. S. Thamizhamanii, B. Saparudin and S. Hasan [5] investigated the multi-roller burnishing process on non-ferrous metals, namely aluminium, brass and copper, to improve surface roughness and surface hardness. Aysun Sagbas [6] presented an optimization study in which DFA together with response surface methodology was used to optimize the ball burnishing process of 7178 Al alloy. Rajasekariah and Vaidyanathan [7] investigated the effect of several ball burnishing parameters, such as ball diameter, feed, burnishing force and initial surface finish, on the finish, surface hardness and wear resistance of steel components. El-Taweel and El-Axir [8] utilized the Taguchi technique to identify the effect of the burnishing parameters (burnishing speed, burnishing feed, burnishing force and number of passes) on surface roughness, surface micro-hardness, and the improvement ratios of surface roughness and surface micro-hardness. El-Axir, Othman and Abodiena [9] studied the inner surface finishing of aluminium alloy 2014 by the ball burnishing process using the RSM technique. El-Khabeery and El-Axir [10] studied experimental techniques for investigating the effects of milling roller-burnishing parameters on the surface integrity of 6061-T6 aluminium. El-Tayeb, Low and Brevern [11] studied the influence of roller burnishing contact width and burnishing orientation on the surface quality and tribological behavior of aluminium 6061. Hassan, Al-Jalil and Ebied [12] studied the burnishing force and number of ball passes for the optimum surface finish of brass components.
EXPERIMENTAL DETAILS
Roller burnishing experiments were conducted on AA 6063 workpieces, a ductile alloy commercially available in the form of round bars. First, the workpiece was held in the four-jaw chuck of a lathe; facing was completed on both sides and centre drilling on both faces. The workpiece was then held between the centres of the lathe and driven through a lathe dog. An HSS single-point cutting tool was fixed in the tool post, and the workpieces were turned to form six steps with five grooves between them at a speed of 270 rpm and a feed of 0.1 mm/rev. Surface roughness was then measured and recorded as the initial surface roughness. By applying a different parameter combination to each step, one long piece could be used as six different workpieces. In the present work, a roller with an outside diameter of 20 mm and a width of 5 mm was used for roller burnishing.
The process parameters selected were speed, feed and interference. A design-of-experiments approach was used to systematically investigate the influence of the process variables on surface roughness. The Box-Behnken method of RSM was used to design the experiments. Table 1 lists the coded and actual levels of the parameters used in burnishing the Al alloy, and the summary of responses is given in Table 2.
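The three-factor Box-Behnken layout behind Tables 1 and 2 can be sketched in a few lines. The level values below are taken from Table 1; the generator itself is an illustrative sketch, not the design software used by the authors.

```python
from itertools import combinations

# Actual factor levels from Table 1: (low, centre, high)
levels = {
    "speed": (95, 135, 190),       # rpm
    "feed": (0.11, 0.22, 0.32),    # mm/rev
    "interference": (2, 4, 6),     # interference levels as in Table 1
}

def box_behnken(factors, center_runs=5):
    """Three-level Box-Behnken design: for each pair of factors, take the
    four (+/-1, +/-1) combinations while the remaining factor stays at its
    centre level, then append replicated centre points."""
    names = list(factors)
    runs = []
    for a, b in combinations(names, 2):
        for sa in (-1, 1):
            for sb in (-1, 1):
                coded = {n: 0 for n in names}
                coded[a], coded[b] = sa, sb
                runs.append({n: factors[n][coded[n] + 1] for n in names})
    runs += [{n: factors[n][1] for n in names}] * center_runs
    return runs

design = box_behnken(levels)
print(len(design))  # 12 edge points + 5 centre replicates = 17 runs
```

The 17 runs (12 edge points plus 5 centre replicates) match the 17 rows of Table 2.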
2
International conference on Emerging Trends in Mechanical Engineering ICEME-2014
ISBN: 978-93-82163-09-1 .VOLUME 1 – 24th and 25th Febuary 2014.
Table 1: Process parameters and their levels

Process parameter   Units    Level (-1)   Level (0)   Level (+1)
Speed               rpm      95           135         190
Feed                mm/rev   0.11         0.22        0.32
Interference        µm       2            4           6
Table 2: Summary of responses

Run     Speed   Feed       Interference   SR before      SR after
order   (rpm)   (mm/rev)   (µm)           burnishing     burnishing
                                          Ra (µm)        Ra (µm)
1       135     0.32       2              0.74           0.61
2       135     0.22       4              0.74           0.66
3       135     0.22       4              0.74           0.62
4       135     0.22       4              0.74           0.63
5       95      0.22       2              0.76           0.62
6       190     0.22       2              0.74           0.60
7       190     0.22       6              0.74           0.70
8       95      0.11       4              0.74           0.45
9       135     0.32       6              0.76           0.56
10      190     0.32       4              0.76           0.45
11      190     0.11       4              0.76           0.59
12      135     0.22       4              0.76           0.50
13      135     0.11       6              0.74           0.48
14      95      0.32       4              0.76           0.67
15      135     0.11       2              0.74           0.60
16      135     0.22       4              0.74           0.55
17      95      0.22       6              0.74           0.53
RESULTS AND DISCUSSION
To assess the significance of the regression equation in explaining the relationship between surface roughness and the controllable process parameters, an F-test from the analysis of variance (ANOVA) was conducted. The contribution of each model term to the improvement in the response variable was found through the sum-of-squares method.

The sequential model sum of squares was calculated to select the highest-order polynomial in which the additional terms are significant and the model is not aliased. The sequential model sum of squares (technically "Type I") shows how terms of increasing complexity contribute to the total model. Adding the quadratic terms to the two-factor-interaction (2FI) and linear terms gives the highest significance, with a high F-value and the lowest p-value, suggesting its suitability. A lack-of-fit test was calculated for each model; for the selected model, the lack of fit should be insignificant (smallest F-value). On the basis of the sequential model sum of squares and the lack-of-fit test, a quadratic model was selected. Initially all terms A, B, C, AB, AC, BC, A², B² and C² were included in the response surface model. After dropping the insignificant interaction term AB and the quadratic term C², the analysis of variance (ANOVA) is presented in Table 3.
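The lack-of-fit test described above can be re-derived from the residual decomposition reported in Table 3. The snippet below is an illustrative recomputation, not the statistics-package output; the small difference from the reported F of 0.815 comes from rounding of the sums of squares.

```python
# The residual SS splits into lack-of-fit SS and pure-error SS (the latter
# from the five replicated centre points). Values as reported in Table 3.
ss_lof, df_lof = 0.0022, 5   # lack-of-fit sum of squares / d.o.f.
ss_pe, df_pe = 0.0022, 4     # pure-error sum of squares / d.o.f.

ms_lof = ss_lof / df_lof     # 0.00044
ms_pe = ss_pe / df_pe        # 0.00055
f_lof = ms_lof / ms_pe       # F well below the significance threshold,
print(round(f_lof, 2))       # so the lack of fit is insignificant
```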
Table 3: ANOVA table after dropping interaction terms

Source           Sum of Squares   DOF   Mean Square   F Value   p-value Prob > F   Remark
Model            0.085            7     0.0122        24.855    < 0.0001           significant
A-Speed          0.0036           1     0.0036        7.32      0.024
B-Feed           1.25E-05         1     1.25E-05      0.25      0.877
C-Interference   0.023            1     0.0237        48.20     < 0.0001
AC               0.0153           1     0.0153        31.14     0.0003
BC               0.0196           1     0.0196        39.71     0.0001
A²               0.006            1     0.0060        12.25     0.0067
B²               0.0186           1     0.01864       37.77     0.0002
Residual         0.0044           9     0.00049
Lack of Fit      0.0022           5     0.00044       0.815     0.5950             not significant
Pure Error       0.0022           4     0.0005
The model F-value of 24.85 implies that the model is significant; this is a significant improvement over the previous model with all interaction terms. There is only a 0.01% chance that a "Model F-value" this large could occur due to noise. Values of "Prob > F" less than 0.1 indicate that model terms are significant; in this case A, B, C, AC, BC, A² and B² are significant model terms. The "Pred R-squared" of 0.783 is in reasonable agreement with the "Adj R-squared" of 0.9125.
Table 4: Other ANOVA Parameters after model reduction
Std. Dev. 0.022 R-Squared 0.95
Mean 0.577 0.91
C.V. % 3.84 Adj R-Squared 0.783
PRESS 0.019 Pred R- 15.52
Squared
Adeq
Precision
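The R-squared statistics in Table 4 follow from the sums of squares in Table 3 under the usual definitions; a short illustrative check:

```python
# Reported sums of squares and degrees of freedom (Table 3)
ss_model, df_model = 0.085, 7
ss_resid, df_resid = 0.0044, 9
ss_total = ss_model + ss_resid           # total SS about the mean
df_total = df_model + df_resid           # 16 d.o.f. for 17 runs

r2 = ss_model / ss_total                 # coefficient of determination
adj_r2 = 1 - (ss_resid / df_resid) / (ss_total / df_total)

print(round(r2, 2), round(adj_r2, 4))    # 0.95 0.9125, matching Table 4
```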
The final equation in terms of coded factors is,
The final equation in terms of actual factors is,
Based on the response surface model obtained after regression analysis, the effects of burnishing speed, feed, interference and their interactions on SR are discussed in the following subsections.
1.1 Effect of speed on surface roughness
At low speeds, surface roughness increases with increasing burnishing speed up to an optimum level, because the surface irregularities are not deformed properly; beyond that level, surface roughness decreases due to repeated deformation of the surface irregularities at higher burnishing speed. The effect of speed on surface roughness is shown in Fig 1. It was also observed (Fig 4) that at higher speed and higher interference the surface roughness decreased drastically.
Fig 1: Effect of speed on surface roughness
1.2 Effect of feed on surface roughness
On increasing the feed, surface roughness increases, because the tool advances a greater linear distance per revolution; consequently not all the surface irregularities can be compressed, and some of their height remains. Hence surface roughness increases with increase in feed, as shown in Fig 2.
Fig 2: Effect of feed on surface roughness
1.3 Effect of interference on surface roughness
From the ANOVA table, interference was found to be the most significant factor for surface roughness. On increasing the interference, the burnishing force increases, and the resulting plastic deformation minimizes the height of the surface irregularities. It was also observed that at higher speed and interference the surface roughness decreased drastically, as shown in Fig 4. Even at a higher feed rate, interference showed the dominant effect (Fig 5).
Fig 3: Effect of interference on surface roughness
Fig 4: Interaction effect of speed and interference on surface roughness
Fig 5: Interaction effect of feed and interference on surface roughness
1.4 Single response optimization using desirability function
RSM is a sequential strategy which enables one to approach the optimal region and describe the response efficiently, while DFA is a useful technique for analyzing experiments in which a response is to be optimized. RSM and DFA have been demonstrated to be efficient for optimizing the roller burnishing process parameters for surface roughness. Single-response optimization determines how the input parameters affect the desirability of an individual response. The numerical optimization finds a point that maximizes the desirability function; adjusting the weight or importance may alter the characteristics of a goal. The goal for the response must be one of five choices: none, maximum, minimum, target or in range. The DFA first transforms the response into a desirability function d that takes values in the range 0 ≤ d ≤ 1. When the response variable is at its goal or target, d becomes 1; if the response variable is outside the acceptable range, d becomes 0. In this study the target for the response is a minimum value (smaller the better), so the transformation of surface roughness is a smaller-the-better problem. The response is transformed into d as:
d = 1                         if y < T
d = ((U - y) / (U - T))^r     if T ≤ y ≤ U
d = 0                         if y > U

where U is the upper specification limit, T the target value, y the response and r the weight.
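A minimal sketch of this smaller-the-better transformation in code; the numeric limits in the demonstration are placeholders, not values from the paper.

```python
def desirability_smaller_is_better(y, target, upper, r=1.0):
    """Desirability for a smaller-the-better response: 1 at or below the
    target, 0 at or above the upper limit, and a weighted ramp between."""
    if y <= target:
        return 1.0
    if y >= upper:
        return 0.0
    return ((upper - y) / (upper - target)) ** r

# Placeholder limits for illustration: target Ra 0.4 µm, upper limit 0.7 µm
print(desirability_smaller_is_better(0.40, 0.4, 0.7))            # 1.0
print(desirability_smaller_is_better(0.70, 0.4, 0.7))            # 0.0
print(round(desirability_smaller_is_better(0.55, 0.4, 0.7), 2))  # 0.5
```

The overall optimizer then searches the fitted response surface for the parameter combination that maximizes d, which is what Table 5 reports.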
Alternative solutions of the optimization approach used to determine the optimum processing conditions are shown in
Table 5.
A contour plot for desirability was drawn keeping the input parameters in range and surface roughness at minimum. The contour plot for the single desirability is shown in Fig 6. The near-optimal region was located close to the left region of the plot, which has a desirability value of d = 1.00 that gradually reduces moving towards the right.
Table 5: Iterative determination of optimum conditions
Solutions Speed Feed Interference Ra Desirability
1 135 0.32 6 0.445 1
2 173.88 0.29 5.62 0.441 1
3 188.07 0.32 4.6 0.448 1
4 190 0.22 6 0.449 1
5 185.45 0.31 5.27 0.406 1
6 179.13 0.32 5.46 0.401 1
7 148.6 0.31 5.96 0.443 1
8 160.63 0.31 5.91 0.42 1
9 148.27 0.31 5.93 0.446 1
10 161.36 0.3 5.97 0.438 1
Fig 6: Contour plot of the desirability function
Conclusion
In this experimental study, the effects of the three variables burnishing speed, feed and interference were investigated by combining RSM and DFA. RSM with the Box-Behnken method was employed to evaluate the effects of the burnishing parameters on the surface roughness of 6063 aluminium alloy. Interference has the largest effect on surface roughness, with an F-value of 48.2; as interference increases, surface roughness decreases due to the increase in burnishing force. After interference, speed has the next largest effect, with an F-value of 7.32; as speed increases, surface roughness increases up to an optimum level and then decreases. Feed has the least effect on surface roughness, with a very small F-value of 0.25; surface roughness increases with increasing feed. The optimum surface roughness was obtained at a speed of 190 rpm, feed of 0.22 mm/rev and interference of 6 µm, and also at 135 rpm, 0.32 mm/rev and 6 µm. The experimental results at the optimum process parameter combinations confirm the effectiveness of the response surface models for selecting optimum burnishing parameters. The RSM approach can help manufacturers determine the appropriate burnishing conditions to achieve a specific surface roughness, and this methodology can be recommended for similar optimization studies.
References
[1] B.L. Juneja, G.S. Sekhon, Nitin Seth, "Fundamentals of Metal Cutting and Machine Tools", revised second edition, New Age International Publishers.
[2] R.K. Jain, "Production Technology", 16th edition, Khanna Publishers.
[3] A. Stoic, I. Lackovic, J. Kopac, I. Samardzic, D. Kozak, "An investigation of machining efficiency of internal roller burnishing", Journal of Achievements in Materials and Manufacturing Engineering, Vol. 40, pp. 188-191, June 2010.
[4] Malleswara Rao J.N., Chenna Kesava Reddy A. and Rama Rao P.V., "The effect of roller burnishing on surface hardness and surface roughness on mild steel specimens", International Journal of Applied Engineering Research, Vol. 1, No. 4, 2011.
[5] S. Thamizhamanii, B. Saparudin, S. Hasan, "A study of multi roller burnishing on non ferrous metals", Journal of Achievements in Materials and Manufacturing Engineering, Vol. 22, No. 2, pp. 95-98, 2007.
[6] Aysun Sagbas, "Analysis and optimization of surface roughness in the ball burnishing process using response surface methodology and desirability function", Advances in Engineering Software, Vol. 42, pp. 992-998, 2011.
[7] R. Rajasekariah, S. Vaidyanathan, "Increasing the wear resistance of steel components by ball burnishing", Wear, Vol. 43, pp. 183-188, 1975.
[8] El-Taweel, El-Axir, "Analysis and optimization of the ball burnishing process through the Taguchi technique", International Journal of Advanced Manufacturing Technology, Vol. 41, pp. 301-310, 2009.
[9] El-Axir, Othman and Abodiena, "Study on the inner surface finishing of aluminum alloy 2014 by ball burnishing process", Journal of Materials Processing Technology, Vol. 202, pp. 435-442.
[10] El-Khabeery, El-Axir, "Experimental techniques for studying the effects of milling roller-burnishing parameters on surface integrity", International Journal of Machine Tools and Manufacture, Vol. 41, pp. 175-179, 2001.
[11] El-Tayeb, K.O. Low and P.V. Brevern, "Influence of roller burnishing contact width and burnishing orientation on surface quality and tribological behavior of Aluminum 6061", Journal of Materials Processing Technology, Vol. 186, pp. 272-278, 2007.
[12] Hassan, Al-Jalil, Ebied, "Burnishing force and number of ball passes for the optimum surface finish of brass components", Journal of Materials Processing Technology, Vol. 83, pp. 176-179, 1998.
[13] Mahmood Hassan and Z.S. Al-Dhifi, "Improvements in the wear resistance of brass components by the ball burnishing process", Journal of Materials Processing Technology, Vol. 96, pp. 73-80, 1999.
[14] Mahmood Hassan, Mohammad Maqableh, "The effects of initial burnishing parameters on non-ferrous components", Journal of Materials Processing Technology, Vol. 102, pp. 115-121, 2001.
[15] Fang-Jung Shiou and Chien-Hua Chen, "Freeform surface finish of plastic injection mold by using ball-burnishing process", Journal of Materials Processing Technology, Vol. 140, pp. 248-254.
[16] Bonzid, Soumarer O.T., Sai K., "An investigation of surface roughness of burnished AISI 1042 steel", International Journal of Advanced Manufacturing Technology, Vol. 24, pp. 454-459, 2004.
[17] J.A. Ghani, I.A. Choudhury and H.H. Hassan, "Application of Taguchi method in the optimization of end milling parameters", Journal of Materials Processing Technology, Vol. 145, pp. 84-92.
[18] Adal M.H., Ayman M.M., "The effects of initial burnishing parameters on non ferrous components", Journal of Materials Processing Technology, Vol. 102, pp. 115-121, 2000.
[19] Esme U., Sagbas A., Kahraman F., Kulekci M.K., "Use of artificial neural networks in ball burnishing process for the prediction of surface roughness of AA 7075 alloy", Materials Technology, Vol. 24, pp. 454-459, 2004.
[20] Shiou F.J., Chen C.H., "Determination of optimal ball burnishing parameters for plastic injection moulding steel", International Journal of Advanced Manufacturing Technology, Vol. 3, pp. 177-185, 2003.
[21] Montgomery D.C., "Design and Analysis of Experiments", 5th edition, John Wiley & Sons Inc., New York.
OPTIMIZATION OF HIGH SPEED TURNING OF EN8
MILD STEEL USING TAGUCHI AND ANOVA
Ghanta Tejaswi Reddy1, B. Srinivasulu2, D.V. Srikanth3, Jolly Sowjanya4
1 M.Tech (AMS), BSIT, Hyderabad.
2 Asst. Professor, BSIT, Hyderabad.
3 Associate Professor, AHTCE, Hyderabad.
4 Asst. Professor, AHTC, Hyderabad.
Key words
Cutting parameters, turning, Taguchi, ANOVA, Design of experiment, signal/noise ratio
Abstract
EN8 mild steel is an emerging material used for many engineering applications, replacing many conventionally used materials. The present work is an attempt to optimize the process parameters using the Taguchi method and to analyse the results with ANOVA. The Taguchi method is an excellent tool for design of experiments in engineering optimization. Analysis of variance (ANOVA) is a procedure which tests the significance of the Taguchi results and identifies the optimal parameter values. The main focus of this paper is the effect of the cutting parameters on the performance measures.
INTRODUCTION
Turning is a machining process in which a cutting tool, typically a non-rotary tool bit, describes a helical tool path by
moving more or less linearly while the work piece rotates. The tool's axes of movement may be literally a straight line,
or they may be along some set of curves or angles, but they are essentially linear (in the nonmathematical sense).
Usually the term "turning" is reserved for the generation of external surfaces by this cutting action, whereas this same
essential cutting action when applied to internal surfaces (that is, holes, of one kind or another) is called "boring". Thus
the phrase "turning and boring" categorizes the larger family of (essentially similar) processes. The cutting of faces on the
work piece (that is, surfaces perpendicular to its rotating axis), whether with a turning or boring tool, is called "facing",
and may be lumped into either category as a subset.
Turning can be done manually, in a traditional form of lathe, which frequently requires continuous supervision by the operator,
or by using an automated lathe which does not. Today the most common type of such automation is computer numerical
control, better known as CNC. (CNC is also commonly used with many other types of machining besides turning.)
When turning, a piece of relatively rigid material (such as wood, metal, plastic, or stone) is rotated and a cutting tool is
traversed along 1, 2, or 3 axes of motion to produce precise diameters and depths. Turning can be either on the outside
of the cylinder or on the inside (also known as boring) to produce tubular components to various geometries. Although
now quite rare, early lathes could even be used to produce complex geometric figures, even the platonic solids; although
since the advent of CNC it has become unusual to use non-computerized tool path control for this purpose.
The term mild steel applies to low carbon steel that does not contain any alloying elements and has a carbon content that does not exceed 0.25%. The term "mild" covers a wide range of specifications and forms for a variety of steels. Mild steel is used in mechanical engineering applications for parts that will not be subject to high stress. In its bright cold-drawn condition the steel is able to endure higher levels of stress, particularly in smaller diameters. Compared to normal mild steel, bright mild steel provides tighter sectional tolerances, increased straightness, and a much cleaner surface. The main advantage of cold-drawn steel is that it can be bought closer to the finished machine size, reducing machining costs. Another benefit of bright steel bars is a marked increase in physical strength over hot-rolled bars of the same section.
EN8, an unalloyed medium carbon steel (BS 970 080M40), has high strength levels compared to normal bright mild steel due to thermo-mechanical rolling. EN8 is suitable for all-round engineering purposes that require a steel of greater strength. EN8 (080M40) can be tempered at between 550°C and 660°C (1022°F-1220°F), heating for about 1 hour per inch of thickness, then cooling in oil or water. Normalizing of EN8 bright mild steel takes place at 830-860°C (1526°F-1580°F), after which it is cooled in air. Quenching in oil or water after heating to this temperature will harden the steel.
Fig. MS EN8
Optimization is the act of obtaining the best result under given circumstances. The word 'optimum' means 'maximum' or 'minimum' depending on the circumstances. In the design, construction and maintenance of any engineering system, engineers have to take many technological and managerial decisions at several stages. The ultimate goal of all such decisions is either to minimize the effort required or to maximize the desired benefit. Since the effort required or the benefit desired in any practical situation can be expressed as a function of certain decision variables, optimization can be defined as the process of finding the conditions that give the maximum or minimum value of a function.

More generally, optimization commonly refers to the process of adjusting a system in an attempt to make it more effective; such adjustments include changing the number of periods used in moving averages, changing the number of indicators used, or simply taking away what does not work. It is all about getting a good output in the end: as explained in the paragraphs above about the lathe and CNC, we can say that CNC is the most optimized form of
lathe machine. Thus an engineer should be optimistic about the projects on which he is working, in order to obtain results better than expected.
The optimum searching methods are also known as mathematical programming techniques and are generally studied as a
part of operations research. Operations research is a branch of mathematics concerned with the application of scientific
methods and techniques to decision making problems and with establishing the best or optimal solutions.
Dr. Taguchi of Nippon Telephones and Telegraph Company, Japan, developed a method based on "orthogonal array" experiments, which gives a much reduced variance for the experiment with optimum settings of the control parameters. Thus the marriage of design of experiments with optimization of control parameters to obtain the best results is achieved in the Taguchi method. Orthogonal arrays (OA) provide a set of well-balanced (minimum) experiments, and Dr. Taguchi's signal-to-noise ratios (S/N), which are log functions of the desired output, serve as objective functions for optimization and help in data analysis and prediction of optimum results.
S/N RATIO
(I) SMALLER-THE-BETTER:
n = -10 Log10 [mean of sum of squares of measured data]
This is usually the chosen S/N ratio for all undesirable characteristics, like "defects", for which the ideal value is zero. Also, when an ideal value is finite and its maximum or minimum value is defined (like maximum purity of 100%, maximum Tc of 92 K, or minimum time of 1 s for making a telephone connection), then the difference between the measured data and the ideal value is expected to be as small as possible. The generic form of the S/N ratio then becomes:

n = -10 Log10 [mean of sum of squares of (measured - ideal)]
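A minimal helper for the smaller-the-better ratio defined above, which is the form used later in this paper (an illustrative sketch, not the statistics package's implementation):

```python
import math

def sn_smaller_is_better(values):
    """Taguchi smaller-the-better S/N ratio in dB:
    -10 * log10(mean of the squared responses)."""
    mean_sq = sum(y * y for y in values) / len(values)
    return -10 * math.log10(mean_sq)

# Smaller responses give a higher (less negative) S/N ratio.
print(sn_smaller_is_better([10.0, 10.0]))  # -20.0
```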
(II) LARGER-THE-BETTER:
n = -10 Log10 [mean of sum of squares of reciprocals of measured data]
This case has been converted to SMALLER-THE-BETTER by taking the reciprocals of measured data and then taking the
S/N ratio as in the smaller-the-better case.
(III) NOMINAL-THE-BEST:

n = 10 Log10 [(square of mean) / variance]

This case arises when a specified value is most desired, meaning that neither a smaller nor a larger value is desirable.

Design of Experiments
A well planned set of experiments, in which all parameters of interest are varied over a specified range, is a much better
approach to obtain systematic data. Mathematically speaking, such a complete set of experiments ought to give desired
results. Usually, however, the number of experiments and resources (materials and time) required is prohibitively large. Often the experimenter decides to perform a subset of the complete set of experiments to save time and money. However, such a subset does not easily lend itself to an understanding of the science behind the phenomenon; the analysis is not easy (though it may be for a mathematician or statistician), and thus the effects of the various parameters on the observed data are not readily apparent. In many cases, particularly those in which some optimization is required, the method does not point to the best settings of the parameters. A classic example illustrating this drawback is found in the planning of a world cup event, say football: all matches are well arranged with respect to the different teams, venues and dates, and yet the planning does not take into account the result of any match (win or lose). Obviously, such a strategy is not desirable for conducting scientific experiments (except for coordinating various institutions, committees, people, equipment, materials etc.).
EXPERIMENTAL PROCEDURE
The work material used was EN8 mild steel. Machining tests were carried out on a VDF high-speed lathe under dry conditions. Table 1 shows the composition of the elements present in EN8 mild steel.
CS   F       DC    Cutting   Feed    PSNRA1     PMEAN1
             force     force
50   0.080   0.5   450       298     -51.9248   393.50
50   0.080   1.0   575       376     -53.2619   454.25
50   0.125   0.5   320       270     -48.7063   259.25
50   0.125   1.0   331       234     -50.0434   320.00
75   0.080   0.5   514       315     -52.6169   422.00
75   0.080   1.0   525       452     -53.9540   482.75
75   0.125   0.5   317       241     -49.3984   287.75
75   0.125   1.0   440       278     -50.7355   348.50
The Taguchi-based design involves selection of the response variables, the independent variables and their interactions, and an orthogonal array. A standard L8 orthogonal array (eight runs) was selected. Table 2 shows the parameters and the corresponding levels chosen for the investigations.
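With three factors at two levels each, the eight experimental runs correspond to the full 2³ factorial, which is what an L8 array reduces to when only three of its columns are used. A sketch of enumerating the runs, with the levels taken from Table 2:

```python
from itertools import product

# Factor levels from Table 2: cutting speed (m/min), feed (mm/rev), depth of cut (mm)
cs_levels = (50, 75)
f_levels = (0.080, 0.125)
dc_levels = (0.5, 1.0)

# Eight runs: every combination of the two levels of each factor,
# in the same order as the experimental table above.
runs = list(product(cs_levels, f_levels, dc_levels))
print(len(runs))   # 8
print(runs[0])     # (50, 0.08, 0.5)
```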
Table 1: Chemical composition of EN8 steel

      C      Mn     Si     P       S       Cr   Mo   Ni   N
Min   0.35   0.60   0.05   0.015   0.015   -    -    -    -
Max   0.45   1.00   0.35   0.06    0.6     -    -    -    -
Table 2: Parameters and their levels

Parameter                   Level 1   Level 2
Cutting speed Vc (m/min)    50        75
Feed f (mm/rev)             0.08      0.125
Depth of cut (mm)           0.5       1.0
Taguchi Analysis:
Cutting force, feed force versus CS, F, DC

Response table for signal-to-noise ratios (smaller is better):

Level   CS       F        DC
1       -50.98   -52.94   -50.66
2       -51.68   -49.72   -52.00
Delta   0.69     3.22     1.34
Rank    3        1        2
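The response table above can be reproduced from the per-run S/N values (an illustrative recomputation using the values as transcribed from the force table, not Minitab itself):

```python
# Per-run factor settings (CS, F, DC) and S/N ratios from the force table
runs = [
    ((50, 0.080, 0.5), -51.9248),
    ((50, 0.080, 1.0), -53.2619),
    ((50, 0.125, 0.5), -48.7063),
    ((50, 0.125, 1.0), -50.0434),
    ((75, 0.080, 0.5), -52.6169),
    ((75, 0.080, 1.0), -53.9540),
    ((75, 0.125, 0.5), -49.3984),
    ((75, 0.125, 1.0), -50.7355),
]

def level_mean(factor_idx, level):
    """Average S/N over the runs where the factor sits at the given level."""
    sn = [s for (f, s) in runs if f[factor_idx] == level]
    return sum(sn) / len(sn)

# Delta = |mean at level 1 - mean at level 2|; the largest delta ranks first.
for name, idx, levels in [("CS", 0, (50, 75)),
                          ("F", 1, (0.080, 0.125)),
                          ("DC", 2, (0.5, 1.0))]:
    m1, m2 = level_mean(idx, levels[0]), level_mean(idx, levels[1])
    print(name, round(m1, 2), round(m2, 2), round(abs(m1 - m2), 2))
```

Running this reproduces the level means and deltas of the response table, confirming feed (delta 3.22) as the top-ranked factor.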
Fig.: Main effects plot for S/N ratios (data means) for CS, F and DC; signal-to-noise: smaller is better.
ANALYSIS OF VARIANCE (F-test): a tool which provides a decision, at some stated confidence level, as to whether the estimates are significantly different. The F-test statistic is defined as the ratio of two sample variances:

F = s1² / s2²
General Linear Model: Cutting Force versus CS, F, DC
Factor Type Levels Values
CS fixed 2 50, 75
F fixed 2 0.080, 0.125
DC fixed 2 0.5, 1.0
Analysis of Variance for Cutting Force, using Adjusted SS for Tests
Source   DF   Seq SS   Adj SS   Adj MS   F   P
CS 1 1800 1800 1800 0.97 0.381
F 1 53792 53792 53792 28.91 0.006
DC 1 9112 9112 9112 4.90 0.091
Error 4 7444 7444 1861
Total 7 72148
S = 43.1379 R-Sq = 89.68% R-Sq(adj) = 81.95%
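As a consistency check, the F-ratio for feed and the R-Sq value can be re-derived from the sums of squares in the table above (an illustrative recalculation; small differences are rounding):

```python
# Adjusted sums of squares and degrees of freedom from the ANOVA table
ss = {"CS": 1800, "F": 53792, "DC": 9112, "Error": 7444}
df = {"CS": 1, "F": 1, "DC": 1, "Error": 4}
ss_total = 72148

ms_error = ss["Error"] / df["Error"]      # 1861, the error mean square
f_feed = (ss["F"] / df["F"]) / ms_error   # F-ratio for feed (reported 28.91)
r_sq = 1 - ss["Error"] / ss_total         # fraction of variation explained

print(round(f_feed, 2), round(100 * r_sq, 2))  # ~28.9 and 89.68 (R-Sq %)
```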
RSM (Response Surface Methodology):

Contour plot of CS vs Cutting Force, Feed force
Contour plot of F vs Cutting Force, Feed force
[Figure: contour plot of DC vs cutting force and feed force.]
Conclusion
The selection of optimal process parameters is important to minimize the unit cost per machined part and to improve service life. The analysis of results for turning of MS EN8 used the conceptual S/N ratio approach. In this work, the Taguchi method provided an efficient design-of-experiment technique and a simple, systematic methodology for the optimization of process parameters.
The experiment was performed using a design-of-experiment L8 orthogonal array. The experimental parameters selected were cutting speed (CS), feed (F) and depth of cut (DC), and the performance measures were cutting force and feed force. Based on the results of the Taguchi experiment and the ANOVA (F-test), feed is the most significant factor (F = 28.91, P = 0.006), followed by depth of cut and cutting speed; the S/N response table indicates the highest S/N ratios, and hence the optimal levels, at CS = 50 m/min, F = 0.125 mm/rev and DC = 0.5 mm. Further experiments can be conducted using RSM (Response Surface Methodology).
References
1. Phadke M S, Quality Engineering Using Robust Design (Prentice-Hall, Englewood Cliffs, NJ), 1989.
2. Ross P J, Taguchi Techniques for Quality Engineering (McGraw-Hill, New York), 1988.
3. Choudary I A & El-Baradie M A, J. Mater. Process. Technol., 77 (1998) 278-284.
4. Li I, He N & Wang Z G, J. Mater. Process. Technol., 129 (2002) 127-130.
5. Davim J Paulo, J. Mater. Process. Technol., 116 (2001) 305-308.
6. Yang W H & Tarng Y S, J. Mater. Process. Technol., 63 (1997) 199-204.
7. León R V, Shoemaker A C & Kacker R N (1987), Performance measures independent of adjustment: an explanation and extension of Taguchi's signal-to-noise ratios (with discussion), Technometrics, vol 29, pp. 253-285.
8. Moen R D, Nolan T W & Provost L P (1991), Improving Quality Through Planned Experimentation, ISBN 0-07-042673-2.
9. Nair V N (ed.) (1992), Taguchi's parameter design: a panel discussion, Technometrics, vol 34, pp. 127-161.
10. Bagchi Tapan P & Madhuranjan Kumar (1992), Multiple Criteria Robust Design of Electronic Devices, Journal of Electronic Manufacturing, vol 3(1), pp. 31-38.
Investigation of Annealing-Treatment on Structural
and Optical Properties of Sol-Gel-Derived Zinc
Oxide Thin Films
1 S.Jaya Krishna, 2 Padmavathi Reddy
1Research Scholar, Sree Visvesvaraya Institute of Technology & Science, Mahabubnagar, A.P., India
2 Assistant Professor, Jaya Prakash Narayan College of engineering, Mahabubnagar, A.P., India
[email protected]
Keyword:
ZnO films; SE spectra; optical constants; optical band gap.
Abstract:
Transparent ZnO thin films were prepared on Si(100) substrates by the sol-gel method. The structural and optical properties of the ZnO thin films,
submitted to an annealing treatment in the 400-700°C range, are studied by X-ray diffraction (XRD) and UV-visible spectroscopic ellipsometry (SE).
XRD measurements show that all the films are crystallized in the hexagonal wurtzite phase and present a random orientation. Three prominent peaks,
corresponding to the (100), (002) and (101) reflections, appear on the diffractograms. The crystallite size increases with increasing annealing temperature.
These modifications influence the optical properties. The optical constants and thickness of the films have been determined by
analysing the SE spectra, and the optical bandgap has been determined from the extinction coefficient. We found that the refractive
index and the extinction coefficient increase with increasing annealing temperature, while the optical bandgap energy decreases.
These results mean that the optical quality of the ZnO films is improved by annealing.
1. Introduction:
Zinc oxide has many interesting optoelectronic properties such as a wide bandgap, a large exciton binding energy of 60 meV,
and high transparency in the visible region. These properties have made way for many applications ranging from light
emitting diodes, UV lasers, transparent field effect transistors and solar cells to other optical coating applications
(Natsume and Sakata 2000; Hoffman et al 2003; Osinsky et al 2004). Due to these attractive properties, much attention has been
paid to the fabrication of ZnO films in recent years. A variety of techniques have been employed to fabricate ZnO thin films,
such as pulsed laser deposition (Kang et al 2004), rf magnetron sputtering (Lee et al 2000), chemical vapour deposition
(Fay et al 2005), spray pyrolysis (Romero et al 2004) and the sol-gel process (Chen et al 2005; Logothetidis et al 2008;
Xuea et al 2008). Despite the crystalline quality being inferior to that of vacuum deposition techniques, sol-gel processing
still offers the possibility of preparing small as well as large-area coatings of ZnO thin films at low cost. Studies of the
effect of annealing on structural and optical properties are therefore important when considering optoelectronic applications
for these films. In this work, we deal with the structural and optical properties of sol-gel derived ZnO thin films, and the
influence of annealing treatments on these properties is investigated.
ZnO films were deposited at room temperature on Si(100) substrates by the sol-gel method. The key optical constants
(refractive index, extinction coefficient) of the ZnO films were measured by spectroscopic ellipsometry (SE). SE is a
non-destructive and sensitive optical technique that has been widely recognized as a reliable tool for characterizing the
optical properties of thin films (Azzam and Bashara 1977). In the analysis of the SE spectra, a four-phase model was employed, in
which the optical properties of ZnO were represented by the Forouhi-Bloomer (1986) model. Our main objective
was to determine the optical properties of ZnO thin films as a function of annealing temperature for technological
applications, even though there are earlier reports on the optical and structural properties of sol-gel derived ZnO films.
2. Experimental Work:
Figure 1 shows the flow chart of the preparation of the ZnO films. High-purity zinc acetate dihydrate
(Zn(CH3COO)2·2H2O, 99.9%) was used as the starting material; methylglycol (CH3OCH2CH2OH) was selected as the solvent, and
diethanolamine (C4H11NO2) was used as the stabilizing agent. The concentration of zinc acetate was chosen to be 1 mol l-1,
and the resulting mixture was stirred for 1 h at 65°C and then for 3 h at room temperature to yield a clear and homogeneous
solution, which served as the coating solution. This solution was spin-coated onto Si(100) wafers for the different measurements.
Spin coating was performed at room temperature, at a rate of 3000 rpm for 20 s and then of 5000 rpm for another 10 s.
The Si(100) substrates were ultrasonically cleaned with acetone and methanol, etched for 1 min in a 2% HF water solution in
order to remove the surface oxide, rinsed in de-ionized water and dried in nitrogen before spin coating. After deposition by
spin coating, the films were dried at 300°C for 10 min on a hotplate to evaporate the solvent and remove organic residuals. The
procedure from coating to drying was repeated several times until the desired thickness of the films was reached. The films
were then inserted into a furnace and annealed in air at 400-700°C for 2 h.
Because the details of SE can be found in the literature (Azzam and Bashara 1977), only a brief description is given here. In
ellipsometry, the object is to measure the ratio of the complex Fresnel reflection coefficients rp and rs; this ratio is
expressed through the traditional ellipsometric angles, and both rp and rs contain information on the optical and structural
properties of the sample. The measurements were made with a rotating-analyser ellipsometer, by Fourier analysis of curves of
the light intensity vs the azimuthal angle of the rotating analyser. The SE measurements were carried out at room
temperature in the 1.5-4.71 eV (corresponding to 263-827 nm) photon energy region at 0.03 eV intervals, operated at an angle
of incidence of 70° and an azimuthal angle of the polarizer of 45°.
The film thickness was measured by a profilometer (Dektak 3030).
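The ratio measured in ellipsometry is conventionally written as rho = rp/rs = tan(psi)*exp(i*Delta); a small sketch of how the ellipsometric angles follow from the complex coefficients (the numeric reflection coefficients here are made up purely for illustration):

```python
import cmath
import math

def ellipsometric_angles(r_p, r_s):
    """psi and Delta (in degrees) from the complex Fresnel reflection
    coefficients, using rho = r_p / r_s = tan(psi) * exp(i * Delta)."""
    rho = r_p / r_s
    psi = math.degrees(math.atan(abs(rho)))   # tan(psi) = |rho|
    delta = math.degrees(cmath.phase(rho))    # Delta = arg(rho)
    return psi, delta

# Hypothetical coefficients, for illustration only.
psi, delta = ellipsometric_angles(0.1 + 0.1j, 0.2 + 0.0j)
print(round(psi, 2), round(delta, 2))  # 35.26 45.0
```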
3. Results and Discussions:
The film thickness was measured with the profilometer (Dektak 3030). The crystal structure of the ZnO films was measured using a RIGAKU D-MAX 2200 VPC X-ray diffractometer equipped with a Cu-Kα (λ = 1.540562 Å)
source. The optical properties of the films were characterized by a high-precision photometric ellipsometer. Figure 2 shows X-ray diffraction patterns of ZnO thin films
deposited on Si(100) substrates by spin coating at different annealing temperatures of 400, 500, 600 and 700°C. After annealing at 400°C, small peaks were
characteristic of ZnO wurtzite. From figure 2, (100), (002), (101), (102), (110) and (103) XRD peaks were observed,
and it is concluded that all the films were polycrystalline with a hexagonal wurtzite structure and a random orientation,
which generally occurs in the growth of ZnO thin films (Xuea et al 2008). The degree of c-axis orientation of the ZnO thin
films was strongly dependent on the annealing temperature.
Figure 1. Flow chart for the preparation of ZnO thin films using the sol-gel method.
Figure 2. X-ray diffraction patterns of ZnO films with different annealing temperatures.
Table 1. List of samples studied.
Annealing temperature   Thickness by profilometer (nm)   Thickness by SE (nm)   Particle size (nm)   Eg (eV)
400°C                   192                              190.2                  ~20                  3.405
700°C                   -                                154.3                  ~40                  3.343
It increases as the annealing temperature increases. The average ZnO thin film particle sizes were calculated using the full
width at half maximum (FWHM) of the (002) peak by the Scherrer method and are presented in table 1. The calculated values
of the crystallite sizes ranged between 20 and 40 nm. It was observed that crystallite size increased with increasing annealing
temperature, which can be understood by considering the merging process induced by thermal annealing. For ZnO
nanoparticles, there are many dangling bonds, related to zinc or oxygen defects, at the grain boundaries. These
defects favour the merging process that forms larger ZnO grains as the annealing temperature increases. The ZnO
thin film thicknesses are also presented in table 1. It was observed that increasing the annealing temperature resulted in a
decrease in film thickness from 190 nm (400°C annealing) to 154 nm (700°C annealing).
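The Scherrer step described above can be sketched as follows. The wavelength is the Cu-Kα value quoted in the text; the (002) position (2θ ≈ 34.4°, typical for wurtzite ZnO) and the FWHM values are illustrative assumptions, chosen only to land in the reported 20-40 nm range:

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.1540562, K=0.9):
    """Crystallite size (nm) from the Scherrer equation
    D = K * lambda / (beta * cos(theta)), with beta the FWHM in radians
    and theta half the diffraction angle 2-theta."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

# Illustrative: FWHM narrowing from ~0.42 deg to ~0.21 deg roughly
# reproduces the 20-40 nm range reported in table 1.
print(round(scherrer_size(0.42, 34.4), 1))  # 19.8 (about 20 nm)
print(round(scherrer_size(0.21, 34.4), 1))  # 39.6 (about 40 nm)
```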
The FWHM of the (002) plane of the ZnO thin films at various annealing temperatures is also compared. As the annealing
temperature increases from 400-700°C, the FWHM value exhibits a tendency to decrease, which can be attributed to the
coalescence of grains at higher annealing temperature (Gupta and Mansingh 1996). This implies that the
crystallinity of the ZnO thin films is improved at higher annealing temperatures. Other workers (Moustaghfir et al 2003;
Kuo et al 2006) have also observed the improvement in crystallinity of ZnO thin films with increasing annealing
temperature. This may be because a higher annealing temperature provides the crystallites with enough energy to
orient into proper equilibrium sites, resulting in improvement of the crystallinity and degree of orientation of the ZnO
films. The SE spectra for ZnO thin films annealed at various temperatures are shown in figures 3(a)-(d). The spectra can
be separated into two spectral ranges. In the lower energy range, the spectra exhibit oscillations. This is due only to the
interference effect of light, and it indicates that the films are transparent. The oscillating frequency depends on the
thickness of the film: generally, the thicker the film, the higher the frequency. In the higher energy range, no interference
oscillation is seen because of light absorption resulting from the interband transition in the ZnO films.
The SE spectra were analysed using a four-phase model
(air/ZnO + voids/ZnO/substrate). The presence of a (ZnO + voids) layer is due to the surface roughness, which was modelled using a
Bruggeman effective medium approximation (Bruggeman 1933) consisting of voids and ZnO. This feature is common for sol-gel-derived
oxide thin films (Raham et al 1998). The optical constants of ZnO were parametrized using the Forouhi-Bloomer model (Forouhi and
Bloomer 1986). Figure 5 shows the refractive index (n) and extinction coefficient (k) obtained from the fitted parameters for ZnO thin films annealed at various
temperatures. From figure 5, we find that the optical constants spectra of all ZnO films show a high refractive index (n = 1.8 ~ 2.3) in the UV-visible
region as well as the fundamental absorption edge in the near ultraviolet region as the photon energy increases from 1.5 to 4.7 eV. The refractive index of the films first increased and then
decreased as photon energy increases,
consistent with Kramers-Kronig relations. The extinction coefficients are very small at lower photon energy, where the films are nearly
transparent. Our refractive index is smaller than the average value of ZnO bulk (refractive index ~ 2) in the visible wavelength range. This
slight variation might be attributed to the porous nature of the sol-gel derived thin films. On the other hand, we also find different values of
the refractive index and extinction coefficient due to the different annealing
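The two-phase Bruggeman mixing rule used for the (ZnO + voids) roughness layer can be solved in closed form. The following is a generic sketch of the standard effective medium approximation, not the authors' fitting code, and the example permittivities are made up:

```python
import cmath

def bruggeman_eps(eps_a, eps_b, f_a):
    """Effective dielectric function e of a two-phase Bruggeman medium:
    f_a*(eps_a - e)/(eps_a + 2e) + (1 - f_a)*(eps_b - e)/(eps_b + 2e) = 0.
    This reduces to the quadratic 2e^2 - b*e - eps_a*eps_b = 0 with
    b = (3*f_a - 1)*eps_a + (2 - 3*f_a)*eps_b; for real, positive
    permittivities the '+' root below is the physical one."""
    b = (3 * f_a - 1) * eps_a + (2 - 3 * f_a) * eps_b
    return (b + cmath.sqrt(b * b + 8 * eps_a * eps_b)) / 4

# Sanity checks: pure phases are recovered at the volume-fraction limits.
print(bruggeman_eps(1.0, 4.0, 0.0).real)  # 4.0 (all phase b)
print(bruggeman_eps(1.0, 4.0, 1.0).real)  # 1.0 (all phase a, e.g. voids)
```

At intermediate void fractions the effective permittivity falls between the two pure-phase values, which is how the roughness layer softens the optical contrast at the film surface.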
temperatures. The refractive index of the ZnO films increases with increasing annealing temperature. This conforms to the
results already reported by Moustaghfir et al (2003). A higher annealing temperature enhances the formation of larger and
more closely packed crystals. The increase of the refractive index with increasing annealing temperature can thus be partly
attributed to improvement in film quality, with the reduction in porosity in the ZnO film upon annealing. From the spectral
relationship between the extinction coefficient and photon energy (as shown in figure 5), the extinction coefficient of the ZnO thin films can be divided into three
regions: region I, the low-energy region (E = 1.5 ~ 2.7 eV, below the fundamental bandgap energy, Eg), and regions II and III, the high-energy regions
(E = 2.7 ~ 3.5 eV and 3.5 ~ 4.7 eV, above Eg). In the Forouhi-Bloomer parametrization, E is the photon energy and the fitting parameters are A, B, C and D.
Using simulated annealing optimization (Kirkpatrick et al 1983), the fitting parameters as well as the film thicknesses were
determined by fitting the ellipsometric spectra. Figure 4 shows the fit for the ZnO thin film annealed at a temperature of 700°C;
clearly, the fit is good. The thicknesses of the ZnO films obtained by SE closely match those obtained by
profilometer (as shown in table 1); therefore, it is verified that our model adequately describes the measured data.
No obvious absorption is found below the fundamental bandgap energy, Eg, so the extinction coefficient is essentially zero there,
except when the photon energy increases close to the band edge, where a transition from a band to an impurity level can contribute.
It is interesting that the extinction coefficient increases
as the annealing temperature increases in regions II and III. The possibility of scattering by the grains increases as the
annealing temperature increases, which can be attributed to the increase of grain size and surface density of the films. The
extinction coefficient near the fundamental bandgap energy is not zero because of noise, and the results demonstrate that
some other absorption is present in addition to the fundamental transition.
The optical bandgap energy, Eg, of the ZnO films annealed at various temperatures was calculated by considering a
direct allowed electronic transition, between the highest occupied state of the valence band and the lowest unoccupied state
of the conduction band, when a photon is absorbed (Tauc 1974); the Eg values were obtained by extrapolation of the linear part
of the absorption edge. Figure 6 shows this extrapolation for the ZnO thin film annealed at a temperature of 700°C, giving
Eg = 3.343 eV, and figure 7 shows the variation of Eg with annealing temperature. The Eg value is found to decrease from
3.405 to 3.343 eV with increasing annealing temperature from 400-700°C. The decrease in Eg indicates an improvement of the quality of the film due to
the annealing out of the structural defects. This is in agreement with the experimental results of XRD analysis. According to XRD results, the
mean grain size increased with increased annealing temperature. As grain size increased, the grain boundary density of a film decreased,
subsequently, the scattering of carriers at grain boundaries decreased (Lee et al 2003). A continuous increase of optical constants and also a
shift in absorption edge to a higher wavelength with increasing annealing temperature may be attributed to the improvement in the
crystalline quality of the films along with reduction in porosity.
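The bandgap extraction described above, for a direct allowed transition, amounts to extrapolating the linear part of (alpha*E)^2 vs photon energy E to zero (Tauc 1974), where alpha is the absorption coefficient obtained from the extinction coefficient via alpha = 4*pi*k/lambda. A minimal sketch on synthetic data constructed to have a gap at 3.343 eV (the 700°C value reported in table 1):

```python
def tauc_bandgap(E, alpha, fit_window):
    """Estimate a direct-allowed optical bandgap by a least-squares fit of
    (alpha*E)^2 vs E over fit_window = (Emin, Emax), returning the
    x-intercept of the fitted line (the Tauc extrapolation)."""
    pts = [(e, (a * e) ** 2) for e, a in zip(E, alpha)
           if fit_window[0] <= e <= fit_window[1]]
    n = len(pts)
    sx = sum(p[0] for p in pts)
    sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    intercept = (sy - slope * sx) / n
    return -intercept / slope  # x-intercept gives Eg

# Synthetic data: (alpha*E)^2 proportional to (E - 3.343) above the gap.
Es = [3.4 + 0.02 * i for i in range(20)]
alphas = [max(E - 3.343, 0.0) ** 0.5 / E * 1e4 for E in Es]
print(round(tauc_bandgap(Es, alphas, (3.4, 3.8)), 3))  # 3.343
```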
The decrease in optical bandgap energy is generally observed in annealed direct-transition-type semiconductor films. Hong et al (2005)
observed an optical bandgap shift of ZnO thin films from 3.31 to 3.26 eV after annealing, and attributed this shift to the increase of the ZnO
grain size. Chaparro et al (2000) ascribed this 'red shift' in the energy gap, Eg, to an increase in crystallite size for the annealed ZnSe films. Bao
and Yao (2001) also reported a decrease in Eg with increasing annealing temperature for SrTiO3 thin films, and suggested that a shift of the
energy gap was mainly due to both the quantum-size effect and the existence of an amorphous phase in thin films. In our case, the mean
crystallite size increases from 20 to 40 nm after annealing from 400-700°C. Moreover, it is understood that the amorphous phase is reduced with
increasing annealing temperature,
since more energy is supplied for crystallite growth, thus resulting in an improvement in crystallinity of the ZnO films.
Therefore, it is believed that both the increase in crystallite size and the reduction in the amorphous phase are
responsible for the bandgap decrease in annealed ZnO films. The change of refractive index, n, extinction coefficient,
k, and optical bandgap energy, Eg, reveals the impact of the thermal treatment on the optical properties.
The optical bandgap energy was found to decrease from 3.405 to 3.343 eV as the annealing temperature is increased from 400-700°C. These optical
property modifications can be attributed to structural modifications with different annealing temperatures.
Acknowledgements
The authors gratefully acknowledge the financial support from the Science and Technology Planning Project of Guangdong Province of China (Grant No.
2008B010600041) and the Natural Science Foundation of Guangdong Province of China (Grant No. 8151027501000068).
4. Conclusions:
The effect of post-annealing treatment on the structural and optical properties of sol-gel derived ZnO films on Si(100) has been
critically examined by XRD and SE in the UV-visible region. The XRD spectra and the refractive index of the films reveal
that the crystalline quality of the films improves significantly with annealing. The optical constants and optical
bandgap energy of the ZnO films depend on the annealing temperature. The thickness values obtained from the SE
measurements compare well with the surface profilometer data. The refractive index of the ZnO films increases as the
annealing temperature is increased, and the optical bandgap energy has been found to decrease with increasing annealing temperature.
5. References:
1. Azzam R M A and Bashara N M 1977 Ellipsometry and Polarized Light (Amsterdam: North-Holland) p. 89
2. Bao D and Yao X 2001 Appl. Phys. Lett. 79 3767
3. Bruggeman D A G 1933 Ann. Phys. Leipzig 24 636
4. Chaparro A M, Martinez M A, Guillen C, Bayon R, Gutierrez M T and Herrero J 2000 Thin Solid Films 361 177
5. Chen S Q, Zhang J and Feng X 2005 Appl. Surf. Sci. 241 384
6. Fay S, Kroll U and Bucher C 2005 Sol. Energy Mater. Sol. Cells 86 385
7. Forouhi A R and Bloomer I 1986 Phys. Rev. B34 7018
8. Gupta V and Mansingh A 1996 J. Appl. Phys. 80 1063
9. Hoffman R L, Norris B J and Wager J F 2003 Appl. Phys. Lett. 82 733
10. Hong R, Huang J, He H, Fan Z and Shao J 2005 Appl. Surf. Sci. 242 346
11. Jellison Jr. G E 1992 Opt. Mater. 1 41
12. Kang H S, Kang J S, Kim J W and Lee S Y 2004 J. Appl. Phys. 95 1246
13. Kirkpatrick S, Gelatt C D and Vecchi M P 1983 Science 220 671
14. Kuo S Y, Chen W C and Cheng C P 2006 Superlattices and Microstr. 39 162
15. Lee J C, Kang K H and Kim S K 2000 Sol. Energy Mater. Sol. Cells 64 185
16. Lee J H, Ko K H and Park B O 2003 J. Cryst. Growth 247 119
17. Logothetidis S, Laskarakis A, Kassavetis S, Lousinian S,
18. Moustaghfir A, Tomasella E, Ben Amor S, Jacquet M, Cellier J and Sauvage T 2003 Surf. Coat. Technol. 174/175 193
19. Natsume Y and Sakata H 2000 Thin Solid Films 372 30
20. Osinsky A, Dong J W, Kauser M Z, Hertog B, Dabiran A M and Chow P P 2004 Appl. Phys. Lett. 85 4272
21. Raham M M, Yu G L, Krishna K M, Soga T, Wantanable J J, Jimbo T and Umeno M 1998 Appl. Opt. 37 691
22. Romero R, Lopez M C and Leinen D 2004 Mater. Sci. Eng. B110 87
23. Tauc J C 1974 Amorphous and Liquid Semiconductors (New York: Plenum Press) p. 159
24. Xuea S W, Zu X T, Zhouc W L, Denga H X, Xiang X, Zhang L and Dengd H 2008 J. Alloys Compds. 448 21
Lean Manufacturing a Tool for Strategic Management
Dr.A.C.S.Kumar
Principal, Abhinav Hitech College of Engineering, Hyderabad
ABSTRACT
The core idea of Lean Manufacturing is to maximize the customer value while minimizing avoidable investment on inventory.
The term ‘Lean’ means fewer inventories, just sufficient to maintain uninterrupted manufacturing. Lean
Manufacturing is a via-media approach between the age-old idea of Craft Production and the modern idea of Mass
Production pioneered by Henry Ford. It simply combines the advantages of Craft Production and Mass Production. A lean
manufacturing organization understands the customer value and focuses its key processes to continuously increase it. The ultimate
goal is to provide perfect value to the customer through a process that has zero waste. To accomplish this, a lean manufacturing
organization changes the focus of management from optimizing separate technologies, assets, and vertical departments to optimizing
the flow of products and services through entire value streams that flow horizontally across technologies, assets, and departments to
customers. Lean manufacturing is a business model and collection of tactical methods that emphasize eliminating non-value added
activities (waste) while delivering quality products on time at least cost with greater efficiency. In the U.S., lean implementation is
rapidly expanding throughout diverse manufacturing and service sectors such as aerospace, automotive, electronics, furniture
production, and health care as a core business strategy to create a competitive advantage. Lean manufacturing means doing more
with less by employing 'lean thinking.' Lean manufacturing involves never ending efforts to eliminate or reduce 'muda' (Japanese for
waste or any activity that consumes resources without adding value) in design, manufacturing, distribution, and customer service
processes. Developed by the Toyota executive Taiichi Ohno (1912-90) during post-Second World War reconstruction period in Japan,
and popularized by James P. Womack and Daniel T. Jones in Their 1996 book ‘Lean Thinking. 'Also called lean production. This paper
deals about the relationship between lean manufacturing and strategic management.which was was suggested and implemented by
Eijji Toyada and Ohno.
INTRODUCTION
Peter Drucker dubbed it “the industry of industries”. After World War I, Henry Ford and General Motors’ Alfred Sloan moved
world manufacturing from centuries of CRAFT PRODUCTION – led by the European firms – into the age of MASS
PRODUCTION. As a result, the United States soon dominated the global economy. After World War II, Eiji Toyoda and Taiichi
Ohno at the Toyota Motor Company in Japan pioneered the concept of LEAN PRODUCTION. General Motors, the world’s largest
industrial concern, was without doubt the best at mass production; now, in the age of lean production, it finds itself with too
many managers, too many workers, and too many plants.
Figure 1 : The history of production (based on Chase, Aquilano, Jakobs, 1998)
The mass producer uses Narrowly Skilled Professionals to design products made by Unskilled or Semi-skilled Workers tending
Expensive, Single Purpose Machines. These churn out standardized products in very high volume.
FEATURES OF MASS PRODUCTION:
Because the machinery costs so much and is so intolerant of disruption, the mass producer adds many buffers → extra
supplies, extra workers and extra space → to assure smooth production. Because changing over to a new product costs even more
→ the mass producer keeps standard designs in production for as long as possible. The result: the consumer gets lower costs →
BUT at the expense of variety → and by means of work methods that most employees find boring and dispiriting.
The Lean Producer → by contrast → combines the advantages of craft and mass production while avoiding the high cost of the
former → and the rigidity of the latter. Toward this end → lean producers employ teams of multi-skilled workers at all levels of
the organization → and use highly flexible, automated machines → to produce volumes of products in enormous variety.
Mass producers → set a limited goal for themselves → “GOOD ENOUGH” → which translates into → an acceptable number of
defects, a maximum acceptable level of inventories, and a narrow range of standardized products. To do better → they argue →
would cost too much or exceed inherent human capabilities.
Fig 2 : Process operation Vs Elapsed-Time in Mass Production
Henry Ford designed his first moving assembly line in 1913, and revolutionised the manufacturing processes of his Ford Model T. This
assembly line, at the first Ford plant in Highland Park, Michigan, became the benchmark for mass production methods around the
world. It was Henry's intention to produce the largest number of cars, to the simplest design, for the lowest possible cost. When car
ownership was confined to the privileged few, Henry Ford's aim was to "put the world on wheels" and produce an affordable vehicle
for the general public. In the early days, Ford built cars the same way as everybody else – one at a time. The car sat on the ground
throughout the build as mechanics and their support teams sourced parts and returned to the car to assemble it from the chassis
upwards. To speed the process up, cars were then assembled on benches which were moved from one team of workers to the next.
But this was not fast, as Ford still needed skilled labour teams to assemble the ‘hand-built’ car, so production levels remained low and
the price of the car stayed high to cover the cost of the mechanics. To achieve Henry Ford’s goal of mass consumption through mass
production, productivity needed to increase. At the Detroit factory in Michigan, workers were placed at appointed stations and the
chassis was hauled along between them using strong rope. The chassis stopped at each station, where parts were fitted, until it was
finally completed. Henry Ford had built on the basic principles of early pioneers such as Elihu
Root, who masterminded an assembly system for Samuel Colt which divided the manufacturing process in order to simplify
it. He continued experimenting until every practice was refined, and his mass production vision became a reality.
Another initiative was to use interchangeable parts that could be put together easily by unskilled workers. The experiments continued
with gravity slides and conveyors. Naturally, even the placement of men and tools was meticulously researched to ensure the
production line ran as efficiently as possible. Each department, in the manufacturing process was broken down into its constituent
parts. These sub-assembly lines were set up in each area until, as Henry was heard to remark, "everything in the plant moved." As a
result, production speeds increased – sometimes they were up to four times faster. This combination of accuracy, continuity and
speed introduced mass production to the world. At Highland Park, Model T production reached record levels, with a complete car
leaving the line every 10 seconds of every working day. Ford was able to cut prices, double the minimum daily wage to $5, produce a
superior product and still make a profit. At this time, two million Model Ts were being produced by Ford each year and sold at just
$260 – a very affordable price for its time. The Model T started a rural revolution; the $5 day wage, and the philosophy behind it,
started a social revolution; the moving assembly line started an industrial revolution. But the problem with mass production is that if
one operation in the line stops, the entire process in the plant is affected. To overcome this problem, Eiji Toyoda and Taiichi Ohno at the
Toyota Motor Company in Japan pioneered the concept of LEAN PRODUCTION.
Lean production is “LEAN” → because it uses HALF of everything (as compared with mass production): half the human effort in the
factory, half the manufacturing space, half the investment in tools, half the engineering hours to develop a new product, IN HALF THE
TIME. It also requires keeping far less than half the needed inventory on site – AND – results in many fewer defects and a greater variety of products.
The most striking difference between mass production and lean production → lies in → their ultimate objectives.
Fig 3 :Searching Keywords tree of Lean Production
What is LEAN PRODUCTION? Perhaps the best way to describe this innovative production system is to contrast it with Craft
Production and Mass Production – the two other methods humans have devised to make things. The craft producer uses highly
skilled workers and simple but flexible tools to make exactly what the consumer asks for, one item at a time: custom
furniture, works of decorative art, and a few exotic sports cars provide current-day examples. The problem with craft
production is that goods produced by the craft method – as automobiles once were exclusively – cost too much for most of us to
afford. So mass production was developed at the beginning of the twentieth century as an alternative.
Philosophy Of Mass Production.
Fig 4 : Beginning of Lean Production
LEAN PRODUCERS → on the other hand → set their sights explicitly on PERFECTION, which results in continually declining costs,
zero defects, zero inventories, and endless product variety. Of course no lean producer has ever reached this promised land, and
perhaps none ever will, but the endless quest for perfection continues to generate surprising twists. For one, lean production changes
how people work, but not always in the way we think. Most workers, including so-called blue-collar workers, will find their jobs more
challenging as lean production spreads, and they will certainly become more productive. At the same time, they may find their work
more stressful, because a key objective of lean production is to push responsibility far down the organizational ladder. Responsibility
means freedom to control one's work, a big plus, but it also raises anxiety about making costly mistakes.
Fig 5 : Manufacturing Performance (Anecdotal) Fig 6 : Flow of Lean Production
LEAN PRODUCTION CALLS FOR learning far more professional skills and applying these creatively in a team setting rather
than in a rigid hierarchy. The paradox is that the better you are at teamwork, the less you may know about a specific, narrow
specialty that you can take with you to another company or use to start a new business.
What's more, many employees may find the lack of a steep career ladder with ever more elaborate titles and job descriptions
both disappointing and disconcerting. If employees are to prosper in this environment, companies must offer them a
continuing variety of challenges. That way they will feel that they are honing their skills and are valued for the many kinds
of expertise they have attained. Without these continual challenges, workers may feel they have reached a dead end at an early
point in their careers. As a result, they hold back their know-how and commitment, and thereby the main advantage of lean
manufacturing disappears. "The Elements of Lean Production" looks at how lean production works in factory operations,
product development, supply-system coordination, customer relations, and as a total lean enterprise.
Fig 7 : Lean Production Conceptualization
DIFFUSING LEAN PRODUCTION tells us how lean production is spreading across the world and to other industries and, in the
process, is revolutionizing how we live and work. However, lean production isn't spreading everywhere at a uniform rate, so we'll
look at the barriers that are preventing companies and countries from becoming lean, and we'll suggest creative ways in which
leanness can be achieved.
THE RISE OF LEAN PRODUCTION
The birthplace of lean production: "THE MOST JAPANESE" of the Japanese auto companies. The founding Toyoda family
succeeded first in the textile machinery business during the late nineteenth century by developing superior technical features
on its looms. In the late 1930s, at the government's urging, the company entered the motor vehicle industry, specializing in
trucks for the military. It had barely gone beyond building a few prototype cars with craft methods before war broke out and auto
production ended. After the war, Toyota was determined to go into full-scale car and commercial truck manufacturing, but it faced a
host of problems.
The problems Toyota faced in going into full-scale car manufacturing were these. The domestic market was TINY and demanded a wide
range of vehicles: luxury cars for government officials, large trucks to carry goods to market, small trucks for Japan's small
farmers, and small cars suitable for Japan's crowded cities and high energy prices. Furthermore, Japan had no
"guest workers", that is, temporary immigrants willing to put up with sub-standard working conditions in return for high pay,
or minorities with limited occupational choice. In the West, by contrast, these individuals had formed the core of the work force in most
mass-production companies. The war-ravaged Japanese economy was starved for capital and for foreign exchange, meaning that massive
purchases of the latest Western production technology were quite impossible. The outside world was full of huge motor vehicle
producers who were anxious to establish operations in Japan and ready to defend their established markets against Japanese exports.
The two pillars needed to support the TPS are just-in-time (JIT) and autonomation (Jidoka), or automation with a human touch.
Figure 8 : The Toyota Production System (based on Sears, Shook, 2004)
THE SOLUTION THOUGHT PROCESS:
Toyota's chief production engineer, Taiichi Ohno, quickly realized that employing Detroit's tools and Detroit's methods
was not suited to this strategy. Craft production methods were a well-known alternative, but seemed to lead nowhere for a
company intent on producing mass-market products. Ohno knew he needed a new approach, and he found it. We can look
at the stamping shop for a good example of how his new techniques worked.
LEAN PRODUCTION: A CONCRETE EXAMPLE
Across the world, nearly all motor vehicle bodies are still produced by welding together about 300 metal parts stamped from sheet
steel. Auto makers have produced these "stampings" by employing one of two different methods. A few tiny craft producers, such as
Aston Martin, cut sheets of metal, usually aluminium, to a gross shape, then beat these blanks by hand on a die to their final shapes.
Alternatively, any producer making more than a few hundred cars a year starts with a large roll of
sheet steel. They run this sheet through an automated "blanking" press to produce a stack of flat blanks slightly larger
than the final part they want. They then insert the blanks in massive stamping presses containing matched upper and lower
dies. When these dies are pushed together under thousands of pounds of pressure, the two-dimensional blank takes the three-
dimensional shape of a car fender or a truck door as it moves through a series of presses.
Fig 9: The Toyota Production System (based on Convis, 2001) Fig 10 : The seven wastes (based on Ohno, 1988)
THE PROBLEM WITH THIS SECOND METHOD: the minimum scale required for economical operation. The massive and
expensive Western press lines were designed to operate at about twelve strokes per minute, three shifts a day, to make a
million or more of a given part in a year. Yet in the early days, Toyota's entire production was a few thousand vehicles a year. The dies
could be changed so that the same press line could make many parts, but doing so presented major difficulties. The dies
weighed many tonnes each, and workers had to align them in the press with absolute precision. A slight misalignment could
produce a nightmare in which the sheet metal melted in the die, necessitating extremely expensive and time-consuming repairs.
TO AVOID THESE PROBLEMS :
Detroit, Wolfsburg, Flins and Mirafiori assigned die changes to specialists. Die changes were undertaken methodically and typically
required a full day to go from the last part with the old dies to the first acceptable part from the new dies. As volumes in the Western
industry soared after World War II, the industry found an even better solution to the die change problem. Manufacturers found they
often could "dedicate" a set of presses to a specific part and stamp these parts for months, or even years, without changing dies.
To Ohno, however, this solution was no solution at all. The dominant Western practice required hundreds of stamping presses to make
all the parts in car and truck bodies, while Ohno's capital budget dictated that practically the entire car be stamped from a few
press lines. His idea was to develop simple die change techniques and to change dies frequently, every two to three
hours versus two to three months, using rollers to move dies in and out of position and simple adjustment mechanisms.
Fig 11 : Evaluation of Lean at Toyota
Because the new techniques were easy to master, and production workers were idle during the die changes, Ohno hit upon the
idea of letting the production workers perform the die changes as well. By purchasing a few used American presses, Ohno
eventually perfected his technique for quick changes. By the late 1950s, he had reduced the time required to change dies from
a day to an astonishing three minutes and eliminated the need for die-change specialists. In the process, he made an
unexpected discovery: it actually cost less per part to make small batches of stampings than to run off enormous lots.
There were two reasons for this phenomenon: (1) making small batches eliminated the carrying cost of the huge inventories of finished
parts that mass-production systems required; (2) even more important, making only a few parts before assembling them into a car
caused stamping mistakes to show up almost instantly. The consequences of this latter discovery were enormous. It made those in the
stamping shop much more concerned about quality, and it eliminated the waste of large numbers of defective parts, which had to be
repaired at great expense or even discarded, that were discovered only long after manufacture. But to make this system work at all, a
system that ideally produced two hours or less of inventory, Ohno needed both an extremely skilled and a highly motivated work
force. If workers failed to anticipate problems before they occurred and didn't take the initiative to devise solutions, the work of the
whole factory could easily come to a halt. Holding back knowledge and effort, repeatedly noted by industrial sociologists as a salient
feature of all production systems, would swiftly lead to disaster in Ohno's factory. As it happened, Ohno's work force acted to solve this
problem for him in the late 1940s. Because of macro-economic problems in Japan, the occupying Americans had decided to stamp out
inflation through credit restrictions, but overdid it and caused a depression instead.
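The two effects named above, inventory carrying cost and late defect discovery, can be sketched numerically. The cost model and every figure below are our own illustrative assumptions, not data from the text; the sketch only shows why cost per part can fall as batch size shrinks once die changes become cheap.

```python
def cost_per_part(batch_size, changeover_cost, unit_cost,
                  holding_per_part_day, daily_demand, defect_onset_rate):
    """Toy model of average cost per part for one stamping batch."""
    # Die-change (setup) cost amortized over the batch.
    changeover = changeover_cost / batch_size
    # A finished part waits, on average, batch/(2*demand) days in inventory.
    carrying = holding_per_part_day * batch_size / (2 * daily_demand)
    # If a die problem starts at a random point and is caught only when the
    # batch reaches assembly, the later parts in the run are also defective,
    # so expected scrap grows with batch size.
    scrap = unit_cost * min(1.0, defect_onset_rate * batch_size / 2)
    return unit_cost + changeover + carrying + scrap

# Mass-production style: huge lot, day-long die change, defects found late.
big = cost_per_part(10_000, changeover_cost=2_000.0, unit_cost=5.0,
                    holding_per_part_day=0.01, daily_demand=500,
                    defect_onset_rate=1e-4)

# Ohno's style: small lot, cheap three-minute die change, defects caught fast.
small = cost_per_part(200, changeover_cost=20.0, unit_cost=5.0,
                      holding_per_part_day=0.01, daily_demand=500,
                      defect_onset_rate=1e-4)

print(f"large batch: {big:.2f} per part, small batch: {small:.2f} per part")
```

With these assumed numbers the small batch comes out cheaper per part, but only because the changeover cost has first been driven down; with a day-long die change the amortized setup term dominates and large lots win, which is exactly the trade-off the passage describes.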
THE RESULT :
Toyota found its nascent car business in a deep slump and was rapidly exhausting loans from its bankers. The founding family,
led by president Kiichiro Toyoda, proposed as a solution to this crisis firing a quarter of the work force. However, the company
quickly found itself in the midst of a revolt that ultimately led to its workers occupying the factory. Moreover, the company's union
was in a strong position to win the strike. In 1946, when the Japanese government, under American prompting, strengthened the
rights of unions and then imposed severe restrictions on the ability of company owners to fire
workers, the balance of power shifted to the employees.
After protracted negotiations, the family and the union worked out a compromise that today remains the formula for labor
relations in the Japanese auto industry. A quarter of the work force was terminated as originally proposed, but Kiichiro Toyoda
resigned as president to take responsibility for the company's failure, and the remaining employees received two
guarantees. One was lifetime employment. The other was pay steeply graded by seniority rather than by specific job
function, and tied to company profitability through bonus payments.
Back at the factory, Taiichi Ohno realized the implications of this historic settlement: the work force was now as much a short-
term fixed cost as the company's machinery, and in the long term the workers were an even more significant fixed cost. After
all, old machinery could be depreciated and scrapped, but Toyota needed to get the most out of its human resources over a
forty-year period, that is, from the time new workers entered the company, which in Japan is generally between the ages of
18 and 22, until they reached retirement at age 60. So it made sense to continuously enhance the workers' skills and to gain the
benefit of their knowledge and experience as well as their brawn. In short, they became members of the Toyota community, with
a full set of rights, including the guarantee of lifetime employment and access to Toyota facilities (housing, recreation and so
forth), that went far beyond what most unions had been able to negotiate for mass-production employees in the West. In
return, the company expected that most employees would remain with Toyota for their working lives.
At Toyota City, Ohno began to experiment. The first step was to group workers into teams with a team leader rather than a
foreman. The teams were given a set of assembly steps, their piece of the line, and told to work together on how best to perform
the necessary operations. The team leader would do assembly tasks as well as co-ordinate the team and, in particular, would fill in
for any absent worker, concepts unheard of in mass-production plants.
FINAL ASSEMBLY PLANT
Ohno next gave the teams the job of housekeeping, minor tool repair, and quality checking. Finally, as the last step, after the teams were
running smoothly, he set time aside periodically for the team to suggest ways collectively to improve the process (in the West this
collective suggestion process would come to be called "quality circles"). This continuing incremental improvement process, kaizen in
Japanese, took place in collaboration with industrial engineers, who still existed but in much smaller numbers. When it came to
rework, Ohno's thinking was truly inspired. He reasoned that the mass-production practice of passing on errors to keep the line
running caused errors to multiply endlessly. Every worker could reasonably think that errors would be caught at the end of the line,
and that he was likely to be disciplined for any action that caused the line to stop. The initial error, whether a bad part or a good part
improperly installed, was quickly compounded by assembly workers further down the line. Once a defective part had become
embedded in a complex vehicle, an enormous amount of rectification work might be needed to fix it. And because the problem would
not be discovered until the very end of the line, a large number of similarly defective vehicles would have been built before the
problem was found. So, in striking contrast to the mass-production plant, where stopping the line was the responsibility of the senior line
manager, Ohno placed a cord above every work station and instructed workers to stop the whole assembly line immediately if a
problem emerged that they could not fix. Then the whole team would come over to work on the problem.
Fig 12 : The Lean Wheel (after Tapping, Luyster et al. 2002) Fig 13 : Improvement of Culture in Lean Manufacturing
Ohno then went much further. In mass-production plants, problems tended to be treated as random events. The idea was simply to
repair each error and hope that it did not recur. Ohno instead instituted a system of problem solving called "the five whys".
Production workers were taught to trace systematically every error back to its ultimate cause (by asking "why?" as each layer of the
problem was uncovered), then to devise a fix so that it would never occur again. Not surprisingly, as Ohno began to experiment with
these ideas, his production line stopped all the time, and the workers easily became discouraged. However, as the work teams gained
experience identifying and tracing problems to their ultimate cause, the number of errors began to drop dramatically.
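The "five whys" procedure is mechanical enough to sketch in a few lines of code: walk a chain of recorded causes from the observed symptom toward a root cause, asking "why?" at each layer. The fault chain below is a hypothetical illustration (loosely adapted from the machine-stoppage example often attributed to Ohno), not something stated in this text.

```python
# Hypothetical cause chain: each symptom maps to its immediate cause.
causes = {
    "machine stopped": "fuse blew from overload",
    "fuse blew from overload": "bearing was poorly lubricated",
    "bearing was poorly lubricated": "lubrication pump was not pumping enough",
    "lubrication pump was not pumping enough": "pump shaft was worn",
    "pump shaft was worn": "no strainer, so metal scrap got into the pump",
}

def five_whys(symptom, causes, depth=5):
    """Ask 'why?' up to `depth` times; return the chain ending at the root cause."""
    chain = [symptom]
    for _ in range(depth):
        deeper = causes.get(chain[-1])
        if deeper is None:          # no deeper cause recorded: root reached
            break
        chain.append(deeper)
    return chain

for step, cause in enumerate(five_whys("machine stopped", causes)):
    print(f"why #{step}: {cause}" if step else f"symptom: {cause}")
```

The fix is then aimed at the last element of the chain (fit a strainer), not at the first (replace the fuse), which is exactly the point of the method.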
Today, in Toyota plants, where every worker can stop the line, yields approach 100 percent. That is, the line practically never
stops. (In mass-production plants, by contrast, where no one but the line manager can stop the line, the line stops constantly. This is
not to rectify mistakes, which are fixed at the end of the line, but to deal with material supply and coordination problems. The consequence is
that a 90 percent yield is often taken as a sign of good management.) Even more striking was what happened at the end of the line. As Ohno's
system hit its stride, the amount of rework needed before shipment fell continually. Not only that, the quality of shipped cars steadily
improved, for the simple reason that quality inspection, no matter how diligent, simply cannot detect all the defects that can
be assembled into today's complex vehicles. Today, Toyota assembly plants have practically no rework areas and
perform almost no rework. By contrast, as we will show, a number of current-day mass-production plants devote 20 percent of
plant area and 25 percent of their total hours of effort to fixing mistakes. Perhaps the greatest testament to Ohno's ideas lies
in the quality of cars actually delivered to the consumer. American buyers report that Toyota's vehicles have among the lowest
number of defects of any in the world, comparable to the very best of the German luxury car producers, who devote many hours of
assembly-plant effort to rectification.
Lean production: The Supply Chain. Assembling the major components into a complete vehicle, the task of the final assembly plant,
accounts for only 15 percent or so of the total manufacturing process. The bulk of the process involves engineering and fabricating more than
10,000 discrete parts and assembling these into perhaps 100 major components: engines, transmissions, steering gears, suspensions
and so forth. Co-ordinating this process so that everything comes together at the right time with high quality and low cost has been a
continuing challenge to the final assembler firms in the auto industry. Under mass production, as we noted earlier, the initial answer was to
integrate the entire production system into one huge, bureaucratic command structure with orders coming
down from the top. However, even Alfred Sloan's managerial innovations were unequal to this task.
Figure 14: Lean Policy Deployment matrix (Womack, Jones, 2003)
The world's mass-production assemblers ended up adopting widely varying degrees of formal integration, ranging from about 25
percent in-house production at small specialist firms such as Porsche and Saab to about 70 percent at General Motors. Ford, the early leader
in vertical integration, which actually did approach 100 percent at the Rouge, de-integrated after World War II to about 50 percent.
However, the make-or-buy decisions that occasioned so much debate in mass-production firms struck Ohno and others at Toyota as
largely irrelevant as they began to consider obtaining components for cars and trucks. The real question was how the assembler and the
suppliers could work smoothly together to reduce cost and improve quality, whatever formal, legal relationship they might have. And
here the mass-production approach, whether to make or buy, seemed broadly unsatisfactory. At Ford and GM, the central engineering
staffs designed most of the 10,000-plus parts in vehicles and the component systems they comprised. The firms then gave the drawings to
their suppliers, whether formally part of the assembler firm or independent businesses, and asked them for bids on a given number of
parts of a given quality (usually expressed as a maximum number of defective parts per 1,000) delivered at a given time. Among all the
outside firms and internal divisions that were asked to bid, the low bidder got the business.
For certain categories of parts, typically those shared by many vehicles (tires, batteries, alternators) or involving some specialized
technology that the assembler firm did not have (engine computers, for example), independent supplier firms competed to supply the
parts, usually by modifying existing standard designs to meet the specifications of a particular vehicle. Again, success depended upon
price, quality and delivery reliability, and the car makers often switched between firms on relatively short notice.
In both cases, corporate managers and small-business owners alike understood that it was every firm for itself when sales
declined in the cyclical auto industry. Everyone thought of their business relationships as characteristically short term.
As the growing Toyota firm considered this approach to component supply, Ohno and others saw many problems. Suppliers,
working to blueprints, had little opportunity or incentive to suggest improvements in the production design based
on their own manufacturing experience. Like employees in the mass-production assembly plant, they were told, in effect, to
keep their heads down and keep working. Alternatively, suppliers offering standardized designs of their own, modified to specific
vehicles, had no practical way of optimizing these parts, because they were given practically no information about the rest of
the vehicle. Assemblers treated this information as proprietary.
Fig 15 : Lean Manufacturing Principles
And there were other difficulties. Organizing suppliers in vertical chains and playing them against each other in search of the lowest
short-term cost blocked the flow of information horizontally between suppliers, particularly on advances in manufacturing
techniques. The assembler might ensure that suppliers had low profit margins, but not that they steadily decreased the cost of
production through improved organization and process innovations.
Finally, there was the problem of co-ordinating the flow of parts within the supply system on a day-to-day basis. The inflexibility of tools
in supplier plants (analogous to the inflexibility of stamping presses in the assembler plants) and the erratic nature of orders
from assemblers responding to shifting market demand caused suppliers to build large volumes of one type of part before
changing over machinery to the next, and to maintain large stocks of finished parts in a warehouse so that the assembler would
never have cause to complain (or worse, to cancel a contract) because of a delay in delivery. The result was high inventory cost
and the routine production of thousands of parts that were later found to be defective when installed at the assembly plants.
To counteract these problems, and to respond to a surge in demand in the 1950s, Toyota began to establish a new, lean
production approach to component supply. The first step was to organize suppliers into functional tiers, whatever the formal,
legal relation of the supplier to the assembler. Different responsibilities were assigned to firms in each tier. First-tier
suppliers were responsible for working as an integral part of the product development team in developing a new product. Toyota
told them to develop, for example, a steering, braking or electrical system that would work in harmony with the other systems.
First they were given a performance specification. For example, they were told to design a set of brakes that could stop a 2,200-
pound car from 60 miles per hour in 200 feet ten times in succession without fading. The brakes should fit into a space 6" x 8" x
10" at the end of each axle and be delivered to the assembly plant for $40 per set. The suppliers were then told to deliver a prototype
for testing. If the prototype worked, they got a production order. Toyota did not specify what the brakes were made of or how
they were to work; these were engineering decisions for the suppliers to make.
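The quoted brake specification can be sanity-checked with basic kinematics. The short calculation below is our own illustration, not part of Toyota's specification; it assumes constant deceleration and standard unit conversions.

```python
# What deceleration must the brakes sustain to stop a 2,200 lb car
# from 60 mph in 200 ft, and what mean braking force does that imply?
G = 32.174             # ft/s^2, standard gravity
v = 60 * 5280 / 3600   # 60 mph expressed in ft/s (= 88.0)
d = 200.0              # stopping distance, ft
weight = 2200.0        # vehicle weight, lb

decel = v ** 2 / (2 * d)            # from v^2 = 2*a*d (constant deceleration)
brake_force = weight * decel / G    # mean retarding force at the tires, lbf

print(f"deceleration: {decel:.2f} ft/s^2 ({decel / G:.2f} g)")
print(f"mean braking force: {brake_force:.0f} lbf")
```

The spec works out to roughly 0.6 g of sustained deceleration, repeated ten times without fading: a demanding but physically reasonable target, which is the point of stating a performance specification rather than a design.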
Toyota encouraged its first-tier suppliers to talk among themselves about ways to improve the design process. Because each
supplier, for the most part, specialized in one type of component and did not compete in that respect with other suppliers in the
group, sharing this information was comfortable and mutually beneficial. For example, a first-tier supplier might be responsible
for manufacturing alternators. Each alternator has around 100 parts, and the first-tier supplier would obtain all of these
parts from second-tier suppliers.
Because second-tier suppliers were all specialists in manufacturing processes and not competitors in a specific type of component,
it was easy to group them into supplier associations so that they, too, could exchange information on advances in manufacturing
techniques.
Finally, Ohno developed a new way to co-ordinate the flow of parts within the supply system on a day-to-day basis: the famous just-in-
time system, called kanban at Toyota. Ohno's idea was simply to convert a vast group of suppliers and parts plants into one large machine, like
Henry Ford's Highland Park plant, by dictating that parts would only be produced at each previous step to supply the immediate
demand of the next step. The mechanism was containers carrying parts to the next step. As each container was used up, it was sent back
to the previous step, and this became the automatic signal to make more parts. The simple idea was enormously difficult to
implement in practice, because it eliminated practically all inventories and meant that when one small part of the vast production
system failed, the whole system came to a stop. In Ohno's view, this was precisely the power of the idea: it removed all safety
nets and focused every member of the vast production process on anticipating problems before they became serious enough to
stop everything. It took Eiji Toyoda and Ohno more than twenty years of relentless effort to fully implement this full set of ideas,
including just-in-time, within the Toyota supply chain. In the end they succeeded, with extraordinary consequences for
productivity, product quality and responsiveness to changing market demand.
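The container loop described above can be illustrated with a small simulation. Container size, container count and demand below are arbitrary assumptions; the point the sketch makes is that production happens only when an empty container returns, so inventory can never exceed the containers in circulation.

```python
# Toy kanban pull loop: a downstream station consumes parts from containers;
# an empty container returned upstream is the only signal to produce more.
from collections import deque

CONTAINER_SIZE = 5
NUM_CONTAINERS = 2   # total inventory is capped at 2 containers' worth

full = deque([CONTAINER_SIZE] * NUM_CONTAINERS)  # containers at assembly
empty = deque()                                   # containers sent back upstream

produced = consumed = 0
for _ in range(30):                 # 30 time steps of steady demand
    # Upstream: producing is allowed only when an empty container came back.
    if empty:
        empty.popleft()
        full.append(CONTAINER_SIZE)
        produced += CONTAINER_SIZE
    # Downstream: consume one part per step from the open container.
    if full:
        full[0] -= 1
        consumed += 1
        if full[0] == 0:            # container drained: return it upstream
            full.popleft()
            empty.append(True)      # the empty container IS the replenish order
    # Invariant: on-hand stock never exceeds the containers in circulation.
    assert sum(full) <= CONTAINER_SIZE * NUM_CONTAINERS

print(f"consumed {consumed}, produced {produced}, on hand {sum(full)}")
```

Note what the loop does not contain: a forecast, a schedule, or a warehouse. Upstream output is driven entirely by downstream consumption, which is the sense in which Ohno's system "removed all safety nets".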
THE FUTURE OF LEAN PRODUCTION
Toyota had fully worked out the principles of lean production by the early 1960s. The other Japanese auto firms adopted
most of them as well, although it took many years. For example, Mazda did not fully embrace Ohno's ideas for running factories
and the supplier system until it encountered a crisis in 1973, when export demand for its fuel-hungry Wankel-engined cars
collapsed. The first step of the Sumitomo group in offering help to Mazda was to insist that the company's Hiroshima production
complex rapidly remake itself in the image of Toyota City at Nagoya.
What's more, not all firms became equally adept at operating the system. (One of our most important objectives in this volume is
to educate the public to the fact that some Japanese firms are leaner than others, and that several of the old-fashioned mass-
production firms in the West are rapidly becoming lean as well.) Nevertheless, by the 1960s the Japanese firms on average had
gained an enormous advantage over mass-producers elsewhere and were able, for a period of twenty years, to boost their share of
world motor vehicle production steadily by exporting from their highly focused production complexes in Japan.
This process is enormously exciting, but it also produces enormous tensions. There will be real losers (including some of the
smaller and less accomplished Japanese firms) as well as winners, and the public everywhere tends all too readily to interpret the
contest in simple nationalistic terms: "us" versus "them", "our" country versus "theirs".
CONCLUSION
The implementation effort of lean manufacturing in a bulk apparel manufacturer was extensively studied to
understand its impact on performance and organisational culture. A number of industry-specific techniques were used to
evaluate performance over the period, and qualitative analysis was carried out to find the impact on the organisational
culture. The problems faced with mass production, and how lean manufacturing overcomes them, were clearly
discussed. A well-planned approach was taken to smoothly tailor the culture towards a lean one, using numerous employee-
interaction activities. This new culture has initiated a multiplier effect, improving performance and sustaining the
introduced lean practices on a regular basis. It is acknowledged that cultural influence plays a vital role in improving all the
other performance measures. Future lean studies need in particular to include more detailed descriptions of the investigated
lean interventions and key mediating factors, in order to enable subsequent scientific meta-analyses regarding impact on
different organizational stakeholders.
REFERENCES
1. About the Lean Process, Lean Thinking (n.d.). Applying Lean Thinking in Different Sectors. Retrieved February 6, 2005, from www.leanaust.com/about.htm.
2. Womack, J. P. & Jones, D. T. (2003). Lean Thinking: Banish Waste and Create Wealth in Your Corporation (2nd ed.). New York: Simon & Schuster.
3. Shook, J. Y. (Ed.) (1998). Bringing the Toyota Production System to the United States: A Personal Perspective. Portland: Productivity Press.
4. Wright, M. (2001). "What Should Students Learn about Manufacturing?" Technology and Children, 5(3), pp. 2-3.
5. Murman, E., et al. (2002). Lean Enterprise Value: Insights from MIT's Lean Aerospace Initiative. Palgrave.
6. Womack, J. & Jones, D. (1996). Lean Thinking. Simon and Schuster.
7. Dar-El, E. (2000). Human Learning: From Learning Curves to Learning Organisations. Kluwer Academic Publishers.
8. Gati-Wechsler, A. M. & Torres, A. S. (2008). "The influence of lean concepts on the product innovation process of a Brazilian shoe manufacturer." PICMET '08 - 2008 Portland International Conference on Management of Engineering & Technology, pp. 1137-1144.
9. Landsbergis, P. A., Cahill, J. & Schnall, P. (1999). "The impact of lean production and related new systems of work organization on worker health." Journal of Occupational Health Psychology, 4(2), p. 108.
10. Forza, C. (1996). "Work organization in lean production and traditional plants: What are the differences?" International Journal of Operations & Production Management, 16(2), pp. 42-62.
PERFORMANCE OF I.C.ENGINE USING
VEGETABLE OILS
B.T. Naik1, J.SandeepKumar2
1Associate Professor, Global Institute of Engineering & Technology
2Assistant Professor, Abhinav Hi-Tech College of Engineering, Hyderabad
ABSTRACT
Bio-diesel fuel for diesel engines is produced from vegetable oil or animal fat by the chemical process of transesterification. This
paper presents a brief history of diesel engine technology and an overview of biodiesel, including performance characteristics,
economics, and potential demand. The performance and economics of biodiesel are compared with those of petroleum diesel.
The term "biodiesel" means the monoalkyl esters of long-chain fatty acids derived from plant or animal matter which meet (A)
the registration requirements for fuels and fuel additives established by the Environmental Protection Agency under section
211 of the Clean Air Act (42 U.S.C. 7545), and (B) the requirements of the American Society for Testing and Materials standard D6751.
There is therefore little reason to see vegetable oil as the primary fuel of the future. On the other hand, PVO (pure vegetable oil) is a fuel
which does have its benefits, and therefore should be given equal treatment compared with other CO2-neutral fuels.
1. INTRODUCTION
The idea of using vegetable oil for fuel has been around as long as the diesel engine. Rudolph Diesel, the inventor of the
engine that bears his name, experimented with fuels ranging from powdered coal to peanut oil. In the early 20th
century, however, diesel engines were adapted to burn petroleum distillate, which was cheap and plentiful. In the late
20th century, however, the cost of petroleum distillate rose, and by the late 1970s there was renewed interest in
biodiesel. Commercial production of biodiesel in the United States began in the 1990s.
The most common sources of oil for biodiesel production in the United States are soybean oil and yellow grease
(primarily recycled cooking oil from restaurants). Blends of biodiesel and petroleum diesel are designated with the letter
"B," followed by the volumetric percentage of biodiesel in the blend: B20, the blend most often evaluated, contains 20
percent biodiesel and 80 percent petroleum diesel; B100 is pure biodiesel. By several important measures biodiesel
blends perform better than petroleum diesel, but their relatively high production costs and the limited availability of some
of the raw materials used in their production continue to limit their commercial application. Bio-diesel has gained much
attention in recent years due to increasing environmental awareness. It is produced from renewable resources and,
more importantly, is a clean burning fuel that does not contribute to the net increase of carbon dioxide.
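The "B" designation described above is simple volumetric arithmetic; as a minimal sketch (the helper name is mine, not from the paper):

```python
def blend_fractions(designation: str) -> tuple[float, float]:
    """Return (biodiesel, petroleum diesel) volume fractions for a blend
    designation such as 'B20' or 'B100', per the naming rule above."""
    pct = float(designation.lstrip("Bb"))  # number after 'B' is vol-% biodiesel
    if not 0.0 <= pct <= 100.0:
        raise ValueError("blend percentage must lie between 0 and 100")
    return pct / 100.0, 1.0 - pct / 100.0

# B20 -> 20% biodiesel, 80% petroleum diesel; B100 -> pure biodiesel.
```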
1.1 Pure Vegetable oil fuel characterization
The interest in plant or vegetable oils originated in the late 70's and came from the agrarian sector, which is still one of
its main drivers. Initially, it was believed to be possible to use these oils directly with a low processing level. Extensive
testing by the engine industry has shown that unmodified engines, while operating satisfactorily at first, would quickly develop
durability problems due to problems with fuel injectors, piston rings and lubricating oil stability. For this reason the
engine must be modified. Such modifications can at present be made by a number of facilities, mainly in Germany. More
than 5000 vehicles are presently using pure vegetable oil in Germany [ELS]. Nevertheless one can still find examples of
claims that PVO can be used in any unmodified engine. As an example, the TV program Top Gear on BBC presented this
claim in November 2002, but without showing any durability test of the concept.
1.2 The major advantages of natural vegetable oil
High calorific value: high energy density
Liquid in form and thus easy to handle
When burned it emits less soot
When burned it has high energy efficiency
It is neither harmful nor toxic to humans, animals, soil or water
It is neither flammable nor explosive, and does not release toxic gases
It is easy to store, transport and handle
It does not cause damage if accidentally spilt
Its handling does not require special care to be taken
It is produced directly by nature: it does not have to be transformed
It is a recyclable form of energy
It does not have adverse ecological effects when used
It does not contain sulphur, so it does not cause acid rain when used
Typical applications:
All types of forestry machinery (preservation of ground water)
Lorries, vans, pick-ups, etc. (fuel efficient)
Private cars (no CO2 increase, safe, non-inflammable fuel)
Mixers, mills, pumps, ventilators, and other stationary industrial and agricultural machinery (no toxic gases or inflammable liquids)
1.3 Market Position
PVO today represents a marginal niche in the transport fuel market. The majority of vehicles running on PVO are
converted regular vehicles, and conversion equipment sets are available for many common engine models [VW]. Thus in
theory most diesel engines can be converted to pure PVO operation, including advanced TDI versions, and as such the
technology must be counted as available on a broad basis. One of the main suppliers of conversion equipment (Elsbett
in Germany) also sells an engine specifically designed for PVO operation. Additionally, the tractor manufacturer Deutz-
Fahr markets a tractor specifically adapted for PVO operation as part of a market introduction program.
2. VEGETABLE OIL TO BIODIESEL – PROCESS
The process of converting vegetable oil to biodiesel fuel is called transesterification. Chemically, transesterification
means taking a triglyceride molecule, or a complex fatty acid, neutralizing the free fatty acids, removing the glycerin and
creating an alcohol ester. This is accomplished by mixing methanol with sodium hydroxide to make sodium methoxide.
This liquid is then mixed into the vegetable oil. After the mixture has settled, glycerin is left on the bottom and methyl
esters, or biodiesel, are left on top to be washed and filtered. The final product, biodiesel fuel, when used directly in a diesel
engine will burn up to 75% cleaner than mineral-oil diesel fuel.
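As a rough illustration of the mass balance implied by the reaction above, the following sketch uses triolein as a representative triglyceride and rounded molar masses; these are my own simplifying assumptions, not figures from the paper:

```python
# Triglyceride + 3 CH3OH -> 3 methyl esters (biodiesel) + glycerin
M_TRIGLYCERIDE = 885.4    # g/mol, triolein as a representative oil molecule
M_METHANOL = 32.04        # g/mol
M_METHYL_ESTER = 296.5    # g/mol, methyl oleate
M_GLYCERIN = 92.09        # g/mol

def products_per_kg_oil() -> tuple[float, float]:
    """Approximate kg of methyl ester and glycerin obtained from 1 kg of oil."""
    mol_oil = 1000.0 / M_TRIGLYCERIDE          # moles of triglyceride in 1 kg
    ester = 3 * mol_oil * M_METHYL_ESTER / 1000.0
    glycerin = mol_oil * M_GLYCERIN / 1000.0
    return ester, glycerin

ester, glycerin = products_per_kg_oil()
# Roughly 1 kg of biodiesel and about 0.1 kg of glycerin per kg of oil.
```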
2.1 Plant oils used for Biodiesel:
A variety of bio-liquids can be used to produce biodiesel. The main plants whose oils have been considered as
feedstocks for biofuel are soybean oil, rapeseed oil, palm oil, sunflower oil, safflower oil and jatropha oil. Others in
contention are mustard, hemp, castor oil, waste vegetable oil, and in some cases even algae. Research is ongoing
into finding more suitable crops. A list of oils that appear to have potential for biodiesel is provided below in
alphabetical order of the plant name:
1) Algae
2) Artichoke oil
3) Canola oil
4) Castor oil
5) Coconut oil
6) Corn oil
7) Cottonseed oil
8) Flax oil
9) Hemp oil
10) Jatropha oil
11) Jojoba oil
12) Karanjit oil
13) Kukui nut oil
14) Milk bush shrub
15) Mustard oil
3. JATROPHA OIL
Considering all options in India, Jatropha Curcas has been identified as the most suitable source. The key features of
Jatropha are:
Low cost seeds
High oil content
Small gestation period
Growth on good and degraded soil
Growth in low and high rainfall areas
Seeds can be harvested in the non-rainy season
The plant size makes collection of seeds convenient
Effective yield: The jatropha plant bears fruit from the second year after its plantation and the economic yield stabilizes from
the fourth or fifth year onwards. The plant may live for more than 50 years, with an average effective yielding time of 50 years.
The economic yield can be considered as 0.75-2.0 kg/plant and 4.0-6.0 tonnes per hectare per year depending on the agro-
climatic zone and agricultural practices. The cost of plantation has been estimated at Rs.20,000 a hectare, inclusive of plant
material, maintenance for one year, training, overheads and the like. A selling price of Jatropha seeds at Rs.12 a kg would be
an economically attractive proposition for farmers. 12 million jobs: India has vast stretches of degraded land, mostly in areas
with adverse agro-climatic conditions, where species of Jatropha can be grown easily. Use of 11 million hectares of wasteland
for Jatropha cultivation can lead to the generation of a minimum of 12 million jobs. India, with its huge
waste/non-fertile lands, has taken a well-noted lead in Jatropha cultivation, and commercial production is what the
industries have to focus on for sustainable development.
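Taking the yield and price figures above at face value, the per-hectare arithmetic works out as follows (a sketch; the variable names and the cost comparison are mine):

```python
SEED_PRICE_RS_PER_KG = 12.0          # quoted selling price of jatropha seed
PLANTATION_COST_RS_PER_HA = 20000.0  # quoted establishment cost per hectare

def annual_revenue_rs_per_ha(yield_tonnes: float) -> float:
    """Gross seed revenue per hectare per year at the quoted price."""
    return yield_tonnes * 1000.0 * SEED_PRICE_RS_PER_KG

low = annual_revenue_rs_per_ha(4.0)   # low end of the 4.0-6.0 t/ha range
high = annual_revenue_rs_per_ha(6.0)  # high end
# Rs 48,000-72,000 gross per hectare per year, against a one-time
# plantation cost of Rs 20,000, once the yield stabilizes.
```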
Fig. No: 3.1 Plantation of jatropha and jatropha seeds
4. PALM OIL
The oil palm, Elaeis guineensis, is native to Africa. Its commercial value lies mainly in the oil that can be obtained from
the mesocarp of the fruit - palm oil - and the kernel of the nut - palm kernel oil. Palm oil is used mainly for cooking
(cooking oil, margarine, shortening, etc.) and has non-food applications (soap, detergent, cosmetics, etc.).
Palm oil is the highest yielding oil crop, producing on average about 4-5 tonnes of oil per hectare per year, about ten
times the yield of soybean oil. It is already very profitable to invest in the industry, even using existing technology. The
price of palm oil is consequently high - above 24,000 Rs per tonne - and the cost of production relatively low - about
7,200-9,600 Rs per tonne - so investors do not see the need for R&D. There may also be reluctance to embark on R&D,
since its results often filter down to end-users eventually, inducing the latter to wait for others to cover the costs.
Fig. No: 4.1 Plantation of palm and palm seeds
5. SUMMARY
Two identical diesel engines are run, one on diesel oil and the other on palm oil. This method makes it possible to have a
continuous comparison of the operation of the two engines through a series of regular measurements. This analysis
focuses on the evaluation of the durability of the engine running on palm oil.
5.1 Testing Conditions
Description of the engine used:
Copy of a LISTER engine 8/1 manufactured in India and marketed by the company FLAMINGO OVERSEAS PVT. LTD,
RAJKOT - 360.001, GUJARAT, INDIA
Table No: 5.1 Durability of the engine running on palm oil.
5.2 Appreciation of the service life of the palm oil engine
Evolution of the quantity of oil necessary to provide 1 mechanical kWh:
From the mass consumption of the engine per unit of time and the measured electric output of the generator, we
deduce the quantity of oil necessary for the mechanical production of one kWh. The evolution over time of this value,
compared with the engine running on diesel oil, will enable us to assess the service life of the engine running on
palm oil.
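The deduction described above amounts to a specific-fuel-consumption calculation; a sketch follows, in which the generator efficiency and the sample numbers are illustrative assumptions, not measured values from the test:

```python
def oil_per_mech_kwh(fuel_g_per_h: float, electric_kw: float,
                     generator_eff: float = 0.9) -> float:
    """Grams of oil consumed per mechanical kWh, inferred from the mass
    consumption rate and the measured electrical output."""
    mech_kw = electric_kw / generator_eff  # back out shaft power from electrical
    return fuel_g_per_h / mech_kw

# Example: 1500 g/h of palm oil while the generator delivers 4.5 kW:
sfc = oil_per_mech_kwh(1500.0, 4.5)  # about 300 g per mechanical kWh
```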
Wear of the injection elements:
The acidity or the lack of viscosity of the palm oil is likely to cause premature wear of certain parts of the injection pump
or injector. Disassembling and comparing these elements on engines A and B will allow us to assess whether
the use of palm oil as a fuel really is a problem at this level.
Quality of the combustion in the engine:
The quality of combustion is assessed mainly from the opacity of the exhaust fumes, which must be colorless. The
temperature of these fumes also gives interesting information. Then, disassembling the cylinder head and the
injector on the two engines and comparing their carbon deposits makes it possible to determine whether palm oil causes
any problems at the combustion level.
Wear of the engine:
The ovalisation of the cylinder is the most important indicator. By measuring it before and after the tests and by
comparing the values measured on engines A and B, one can determine whether the use of palm oil produces faster wear
in the engine. Analyses of lubricating oil samples taken from the two engines at the end of the tests will then
complete this comparison.
6. CONCLUSIONS
1. The biodiesel fuels produced less smoke than diesel under similar engine operating conditions, probably
because palm oil contains oxygen which helps the combustion in the cylinder.
2. The biodiesel and reference fuels provided similar combustion pressure patterns at low and medium engine
loads, suggesting that the biodiesels had no adverse effect in terms of knocking.
3. The biodiesel fuels lowered the premixed heat release because of their lower volatility.
7. DISADVANTAGES
1) The viscosity of vegetable oils is much higher than that of diesel. It can cause problems in fuel handling,
pumping, atomization and fuel-jet penetration.
2) This would require modifications in the engine fuel system.
3) Vegetable oils are slower burning.
4) They can give rise to exhaust smoke, fuel impingement on cylinder walls and lubricating oil contamination.
5) To overcome this, the combustion system must be modified to speed up air-fuel mixing.
6) Indirect injection (IDI) engines are more suitable than direct injection (DI) engines for vegetable oils
because of their single, relatively large nozzle hole.
8. REFERENCES
1. Azhar Abdul Aziz, Mohd Farid Said and Mohamad Afiq Awang, "Performance of Palm Oil-Based Biodiesel Fuels in a Single Cylinder Direct Injection Engine"
2. http://www.plantoils.in/
3. [email protected]
4. http://www.in.gov/energy/pdfs/2006%20May%20Biodiesel%20Fact%20Sheet.pf
5. www.resourceinvestor.com/.../Energy/Monty2.png
6. www.biodiesel.lorg.com
7. www.card.iastate.edu/.../images/4-1_small.gif
8. www.ambientediritto.it/.../img25.jpg
9. www.jatrophaworld.org/9.html
Recent Developments in HCCI Engine Technology
with Alternative Fuels
P.V. Ramana, Dr. D. Maheshwar, Dr. B. Uma Maheswar Gowd
Assoc. Professor, Abhinav Hi-Tech College of Engineering, Hyderabad
Principal, MNR College of Engg & Technology, Sanga Reddy
Professor, JNTU College of Engineering, Anantapur
ABSTRACT
When gas prices rise, people's thoughts naturally jump to alternative fuel sources. Stringent emissions standards that are being
enforced in many countries around the world are encouraging vehicle manufacturers to meet such emissions norms as EURO 5,
the ACEA agreement and EURO 6. Due to the stringent emission norms, research in the field of internal combustion engines
in general, and diesel engines in particular, has gathered huge importance, along with increasing demands on fuel consumption.
High demands are therefore placed on large gas engines in the areas of performance, fuel consumption and emissions. One way
to reach this goal is to use new combustion concepts, such as Homogeneous Charge Compression Ignition. Homogeneous Charge
Compression Ignition (HCCI) engines promise high thermal efficiency combined with low levels of nitric oxide and particulate
matter emissions. However, due to the absence of an immediate means of triggering ignition, stable operation over a wide
range of conditions and transient control have proven most challenging and have so far prevented commercialization,
prompting exploration of new technical avenues such as micro-hybridization and biofuels. Most alternative fuel conversions involve
reconfiguring a gasoline or diesel vehicle or engine to operate on natural gas, propane, alcohols, or on a blend of conventional
and alternative fuels. Use of clean alternative fuels opens new fuel supply choices and can help consumers address concerns
about fuel costs, energy security, and emissions. HCCI engines can operate on gasoline, diesel fuel, and most alternative fuels.
HCCI combustion is achieved by controlling the temperature, pressure, and composition of the fuel and air mixture so that it
spontaneously ignites in the engine. This control system is fundamentally more challenging than using a spark plug or fuel
injector to determine ignition timing, as used in SI and DI engines, respectively. The purpose of this study is to summarize the
effect of alternative fuels on HCCI engine combustion.
Keywords: HCCI, combustion, fuel efficiency, pollutant emission, alternate fuels
1. INTRODUCTION
2. ALTERNATIVE FUELS
Alternative fuels are derived from resources other than petroleum. Some are produced domestically, reducing our
dependence on imported oil, and some are derived from renewable sources. Often, they produce less pollution than
gasoline or diesel.
2.1 Biodiesel: An alternative fuel formulated exclusively for diesel engines, biodiesel is made from vegetable oil or animal
fats. Biodiesel is to petroleum diesel fuel what ethanol (E85) is to gasoline: a substitute fuel made from biomass, which
means that it is inherently renewable and, in itself, contributes nothing to carbon-dioxide loading of the atmosphere.
Biodiesel commonly uses soybean or canola oil as its base, but animal fat or recycled cooking oil can also be used. To
speed its market introduction, and to dilute its additional cost over petroleum diesel fuel, the initial commercial product
being studied is a blend of 20% biodiesel and 80% petroleum diesel fuel, whence B20. As noted above, B20 can be stored
and dispensed in exactly the same manner as petroleum diesel fuel; in addition, diesel-powered vehicles require no
modification at all to run on B20 or even higher blends. Thus any diesel-powered truck or bus is, potentially, already an
alternative-fueled vehicle. For example, an ordinary used Winnebago was "converted" into the Veggie Van simply by
pouring homemade biodiesel into its tank. Since biodiesel is not a fossil fuel, as noted above, it can cut greenhouse-gas
emissions as well as ordinary pollutants (particularly soot) by displacing petroleum diesel fuel.
2.2 Biomass: Biomass is plant-derived material that harnesses the energy captured by photosynthesis. It is a renewable
energy resource derived from the carbonaceous waste of various human and natural activities, drawing on
numerous sources, including by-products from the timber industry, agricultural crops, raw material from the forest, major
parts of household waste, and wood. At present, biogas technology provides an alternative source of energy in rural India for
cooking. It is particularly useful for village households that have their own cattle. Through a simple process, cattle dung is used
to produce a gas which serves as fuel for cooking. The residual dung is used as manure.
2.3 Blends: Blends are mixtures of traditional and alternative fuels in varying percentages, like biodiesel blends of B5 or
B20. Blends can be thought of as transitional fuels since they work with current technologies while paving the way for
future integration. Blending amounts of alternative fuel with conventional fuel is an important option for reducing
petroleum consumption. Examples of low-level fuel blends include E10 (10% ethanol/90% gasoline), B5 (5%
biodiesel/95% diesel), and B2 (2% biodiesel/98% diesel). Blends can also consist of two types of alternative fuels, such as
hydrogen and compressed natural gas (HCNG), which can be a combination of 20% hydrogen/80% CNG. B20 (20%
biodiesel/80% diesel) and E85 (85% ethanol/15% gasoline) are not considered low-level blends. The main categories are:
1. Ethanol blends, 2. Low-level biodiesel blends, 3. Biodiesel (B20 and above), 4. Hydrogen/natural gas fuel blends.
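The petroleum-reduction logic behind blending is simple volume-weighted mixing. As a sketch (the heating values below are rough literature numbers I am assuming, not figures from this paper):

```python
# Approximate volumetric lower heating values, MJ per litre (assumed figures).
LHV_MJ_PER_L = {"gasoline": 32.0, "ethanol": 21.1,
                "diesel": 35.8, "biodiesel": 33.0}

def blend_energy(alt_fuel: str, base_fuel: str, alt_vol_frac: float) -> float:
    """Volume-weighted energy content (MJ/L) of a two-component blend."""
    return (alt_vol_frac * LHV_MJ_PER_L[alt_fuel]
            + (1.0 - alt_vol_frac) * LHV_MJ_PER_L[base_fuel])

e10 = blend_energy("ethanol", "gasoline", 0.10)   # ~30.9 MJ/L
b20 = blend_energy("biodiesel", "diesel", 0.20)   # ~35.2 MJ/L
```

A low-level blend such as E10 thus gives up only a few percent of volumetric energy content while displacing the corresponding volume of petroleum.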
2.4 Ethanol: Ethanol (ethyl alcohol) is an alternative fuel made from corn, grains or agricultural waste and is used
primarily as a supplement to gasoline. Ethanol, or grain alcohol, is produced by fermenting biomass, commonly corn
(though other, lower-value feedstocks, like brewery waste and cheese-factory effluent, have been tested in an effort to
reduce costs). It is thus inherently a renewable resource, and contributes nothing in itself to greenhouse-gas
loading of the atmosphere (and with efficient modern farming techniques, there is still an improvement even when you
add in the petroleum-based fuel burned to plow the fields, make the fertilizer, etc.). As an alternative motor vehicle fuel,
it is usually blended in a mixture of 85% ethanol and 15% unleaded gasoline, whence E85. (It is also used in up to 10%
blends with gasoline (gasohol) to oxygenate the gasoline, and this mixture can be used by most modern gasoline
vehicles.) Ethanol, as noted above, is a renewable resource that contributes nothing in itself to global warming concerns.
Like methanol, it can be blended with any amount of gasoline in the tank of a flex-fuel vehicle, which is what automakers
are selling these days. In fact, starting with the 1999 model year, some automakers made every one of certain
vehicle models capable of using E85 in any mixture with gasoline, at no extra charge.
2.5 Hydrogen: Hydrogen is an elemental gas that is extracted from other compounds, not manufactured in the traditional
sense like other fuels. Hydrogen does not occur free in nature; it can be made by "re-forming" natural gas or another fossil fuel,
or by using electricity to split ("electrolyze") water into its components of oxygen and hydrogen. In this sense, hydrogen is like
electricity: the energy to generate it can be obtained from sources ranging from the burning of high-sulfur
coal to pollution-free photovoltaic cells (solar cells). Hydrogen has been called the "most alternative" of the alternative
fuels: if it is made by electrolysis of water using electricity from a nonpolluting source like wind or solar power, then no
pollutants of any kind are generated by burning it in an internal combustion engine except for trace amounts of nitrogen
oxides, and if it is used in a fuel cell then even these disappear. Furthermore, no greenhouse gases are generated
because there's no carbon in the fuel. All that comes out of the vehicle's exhaust is drinkable water. Using hydrogen as the
"battery" to store energy from a nonpolluting, renewable source would result in a truly unlimited supply of clean fuel.
The advantage of using hydrogen to store energy rather than a battery pack is that a hydrogen tank can be refilled in
minutes rather than recharged in hours, and it takes less space and weight to store enough hydrogen to drive a given
distance on a single refueling than it does to carry enough battery capacity to go the same distance on a single
recharging. The battery-electric drivetrain uses energy more efficiently, and can handle the vast majority of daily
commute-and-errands driving that people do, but for long trips hydrogen could prove to be a lot more convenient.
2.6 Methanol: Methanol (methyl alcohol) is an alternative fuel made from woody plant fiber, coal or natural gas and is
used primarily as a supplement to gasoline. Methanol is typically made from natural gas; though it is possible to produce
it by fermenting biomass (this is why it is sometimes called "wood alcohol"), this is not economically competitive yet.
Because it is easier to transport natural gas to a distant market by converting it to methanol, which is a liquid at ordinary
temperatures and pressures, than by chilling and liquefying it or by building a long pipeline, some petroleum-exporting
countries are looking at exporting their "waste" natural gas (which they currently "flare off" in huge flames visible from
the Space Shuttle) by converting it to methanol; however, most of the natural gas that goes into methanol in the United
States is still domestically produced. For reasons explained below, most fuel methanol in this country is sold as a
blend of 85% methanol with 15% unleaded premium gasoline, whence "M85". In the not-too-distant future, "neat"
(100%) methanol may be the preferred means of storing hydrogen for fuel-cell electric vehicles.
Alcohol fuels like M85 are perhaps the most "transparent" alternative fuels to the user, i.e., they are the least distinguishable
from gasoline in how you buy and use them, which should ease acceptance. The fuel system of a car or truck only needs to be
slightly changed (somewhat different materials, bigger fuel injectors, and a fuel composition sensor) in order for it to run on
M85, and recently automakers have been offering M85 vehicles at no extra cost over their gasoline counterparts (or even for
slightly less money), though at present automakers seem to be more interested in ethanol (E85).
2.7 Natural Gas: Natural gas can be used as a motor fuel in two forms: compressed natural gas (CNG) and liquefied
natural gas (LNG). Compressed natural gas is like liquefied petroleum gas (LPG) in many ways, only more so. It is very
easy on the engine, giving longer service life and lower maintenance costs. CNG is the least expensive alternative fuel
(except electricity) when you compare equal amounts of fuel energy, and its price has been relatively steady. A
gasoline-gallon-equivalent of 130-octane natural gas has often cost less than a gallon of 92-octane unleaded
gasoline. Even with the natural-gas price spikes of the last few years, the price of CNG has been less volatile, and on
average lower, than that of gasoline.
2.8 Propane: Most people are familiar with it as a fuel for gas barbecue grills and home appliances; propane is also known as
liquefied petroleum gas (LPG) and is a by-product of natural gas processing and crude oil refining.
Liquefied petroleum gas, as the name suggests, is partly a byproduct of petroleum refining. It consists of hydrocarbons
that are vapors, rather than liquids, at normal temperatures and pressures, but which turn liquid at moderate pressures;
its main constituent is propane, and it is sometimes referred to by that name.
Because it is so widely available, LPG is the least "alternative" of alternative fuels if "alternative" equates to inconvenience,
and most of the alternative fuel used in the United States is LPG. (One might also say, given LPG's dominance of the
alternative-fuel market, that it is the most alternative fuel...) In order to liquefy the fuel, it is stored in sturdy tanks at
about 20 times atmospheric pressure; since these are much tougher than typical sheet-metal or plastic gasoline tanks,
and since they have a built-in shutoff valve to seal the tank if the fuel lines start leaking, LPG is safer than gasoline. (The
tanks are a permanent part of the vehicle, unlike barbecue-grill tanks, so they are immune to the usual cause of LPG
fires, which is leakage due to the operator's failure to hook the tank up properly.) It is also somewhat cheaper than
gasoline in most places at most times, when you compare the price of a gallon of gasoline with the price of the
somewhat larger volume of LPG needed to drive the same distance.
Because LPG enters the engine as a vapor, it doesn't wash oil off cylinder walls or dilute the oil when the engine is cold,
and it also doesn't put carbon particles and sulfuric acid into the oil. Thus an engine that runs on propane can expect a
longer service life and reduced maintenance costs. (Incoming liquid gasoline cools the combustion chamber and a valve
as it vaporizes, so you might expect, for example, that you'd need a valve job more often on an LPG-burning engine
because the gaseous fuel doesn't give this cooling effect. However, modern valve and valve-seat materials, designed for
unleaded gasoline, don't have problems with the "dry" fuel. More recently, direct injection of LPG in the liquid state,
with attendant cooling effect as well as improved emissions control, is being tested.) Its high octane rating (around 105)
means that power output and/or fuel efficiency can be increased, without causing detonation ("knocking"), in a vehicle
that isn't required to run on gasoline as well.
2.9 P-series Fuels: Although they are not yet widely used or manufactured, P-series fuels were added to the list of
Energy Policy Act (EPAct) recognized alternative fuels in 1999.
3. HCCI IS A PROMISING TECHNOLOGY
Considering engine type, the gasoline engine can operate cleaner than the diesel engine, but the diesel engine shows
higher thermal efficiency. This inspires the idea of a hybrid of the two common engine types: the Homogeneous Charge
Compression Ignition, or "HCCI", concept. HCCI combustion works with gasoline, diesel and
most alternative fuels, giving it a major advantage for future developments.
In HCCI engines, the fuel and air are premixed to form a homogeneous mixture before the compression stroke.
As a result, the mixture ignites throughout the bulk without discernible flame propagation, due to the occurrence of auto-ignition
at various locations in the combustion chamber (multi-point ignition). This may cause extremely high rates of
heat release, and consequently, high rates of pressurization [3-5]. In HCCI engines, auto-ignition and combustion rate are
mainly controlled by the fuel chemical kinetics, which are extremely sensitive to the charge composition and to the
pressure and temperature evolution during the compression stroke; therefore HCCI combustion is widely assumed to be
kinetically controlled [3, 6, 7]. The main objective of HCCI combustion is to reduce soot and NOX emissions while
maintaining high fuel efficiency at part load conditions [2, 8]. In some regards, HCCI combustion combines the
advantages of both spark ignition (SI) engines and compression ignition (CI) engines [8, 9]. The results from experiment
and simulation show that the HCCI combustion has a low temperature heat release and a high temperature heat release,
and both heat releases occur within certain temperature ranges. The low temperature heat release is one of the most
important phenomena for HCCI engine operation and the occurrence of it depends chemically on the fuel type [10-12].
However, a number of obstacles and problems in its application have not been resolved. These
problems are the control of ignition and combustion, difficulty in operation at higher loads, higher rates of heat release,
higher CO and HC emissions particularly at light loads, difficulty with cold start, increased NOX emissions at high loads
and formation of a completely homogeneous mixture [13-15]. The lack of well-defined ignition timing control has led to a
range of control strategies being explored. Numerous studies have been conducted to investigate HCCI combustion
control methods such as intake air preheating [14, 16, 17], Variable Valve Actuation (VVA) [4], Variable Valve Timing
(VVT) [1], Variable Compression Ratio (VCR) [18] and EGR rate [10]. Moreover many studies also focused on the effects of
different fuel physical and chemical properties to gain control of HCCI combustion [9, 19- 21].
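The kinetically controlled auto-ignition described above is commonly estimated with a Livengood-Wu integral, which accumulates reaction progress along the compression temperature/pressure history until it reaches unity. The sketch below is a standard illustration, not taken from this paper, and the Arrhenius constants in it are purely illustrative:

```python
import math

def ignition_delay(T_kelvin: float, p_bar: float,
                   A=1e-6, n=-1.7, Ea_over_R=15000.0) -> float:
    """Illustrative Arrhenius-type ignition delay tau in seconds.
    The constants A, n, Ea/R are made-up placeholders, not fitted values."""
    return A * p_bar ** n * math.exp(Ea_over_R / T_kelvin)

def livengood_wu(history, dt: float):
    """history: sequence of (T, p) samples spaced dt seconds apart.
    Returns the sample index where the knock integral first reaches 1,
    or None if the charge never auto-ignites in the sampled window."""
    acc = 0.0
    for i, (T, p) in enumerate(history):
        acc += dt / ignition_delay(T, p)  # integrate dt / tau(T, p)
        if acc >= 1.0:
            return i
    return None

# A hot, high-pressure charge auto-ignites within the window;
# a cool, low-pressure one does not.
```

This is why HCCI ignition timing is so sensitive to the whole compression history rather than to a single spark or injection event.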
HCCI engines can be considered newcomers even though the research was initiated by Onishi et al. in 1979, as
reported in [22]. Investigators worldwide are developing HCCI engines, as this technology has not matured sufficiently.
They can be used in either SI or CI engine configurations with a high compression ratio (CR). HCCI engines work without
the help of diesel injectors or spark plugs and can achieve high engine efficiency with low emission levels. General
Motors Corporation (GM) has unveiled a prototype car with a gasoline HCCI engine, and it was claimed that it could cut
fuel consumption by 15% [23]. The engine is able to virtually eliminate NOx emissions and lowers throttling losses, which
assists better fuel economy [24].
A great deal of work has been done in recent years, and the research area has extended to all aspects of the
combustion process, gradually presenting a picture of energy saving and cleaner exhaust emissions.
Increasing environmental concerns regarding the use of fossil fuels and global warming have prompted researchers to
investigate alternative fuels.
HCCI has high fuel flexibility and can be applied to a wide range of fuels with different octane/cetane numbers. The
combustion process of an HCCI engine has little sensitivity to fuel characteristics such as lubricity and laminar flame speed.
Fuels with any octane or cetane number can be burned, although the operating conditions must be adjusted to accommodate
different fuels, which can impact efficiency. An HCCI engine with variable compression ratio or variable valve timing could, in
principle, operate on any hydrocarbon or alcohol liquid fuel, as long as the fuel is vaporized and mixed with air before ignition.
Besides gasoline [25] and diesel fuel [26], a variety of alternative fuels, such as methanol [27], ethanol
[28, 29], natural gas [30], biogas [31], hydrogen [32], DME [27] and their mixtures [33-35], including gasoline and
diesel mixtures and different mixtures of iso-octane with heptane [36], have been experimentally proven as possible
fuels for HCCI combustion in both two-stroke and four-stroke engines.
4. ALTERNATIVE FUELS FOR HCCI ENGINE – AN ANALYSIS
Extensive experimental research shows that the exhaust emissions and fuel efficiency of modern diesel engines present several unfavorable conditions for biodiesel fuels when the engines are operated in conventional high-temperature combustion cycles. Homogeneous charge compression ignition (HCCI) is an alternative combustion concept for internal combustion engines, offering significant benefits in terms of high thermal efficiency and ultra-low emissions (NOx and PM). Fuels can be described by various combinations of chemistry, boiling points, or physical properties. The significance of kinetics in modelling advanced combustion modes like HCCI has been well recognized. Overall, HCCI engines generally respond well to fuels of lower octane, higher sensitivity, lower aromatics and higher olefins, with boiling points in the lower range of those evaluated. One of the advantages of HCCI combustion is its intrinsic fuel flexibility: it has little sensitivity to fuel characteristics such as lubricity and laminar flame speed, and fuels with any octane or cetane number can be burned, although the operating conditions must be adjusted to accommodate different fuels, which can impact efficiency. This study investigates the effect of different fuels used in HCCI on the combustion characteristics. To study the fuel effect, a comparative study [37] was carried out with four types of fuel combinations to control the combustion process of an HCCI engine. The fuels used were gasoline (A-92, A-95, A-98), diesel fuel (Diesel-45, Diesel-50, Diesel-55), natural gas (NG), and single- and dual-component mixtures of the gasoline and diesel primary reference fuels (iso-octane and n-heptane). Combinations of these fuels were used, such as natural gas with DME (dimethyl ether), gasoline with DME, and diesel fuel or paraffin hydrocarbons with natural gas.
4.1 Effect of using different fuel combinations on the performance of HCCI
The fuel mixture combinations and fuel properties are summarized in Table 1.
Table 1. Properties of Fuels
Homogeneous mixtures of two different fuels, which have different ignition characteristics, were used in a compression
ignition engine to control the ignition and to improve the thermal efficiency. By varying the composition of the fuel
mixture, the ignition timing can be controlled as shown in the following Figures.
Fig 1: Relation between CN and λtotal at Ta = 320 K, ε = 17.7 for maximum brake thermal efficiency (BTE).
Fig. 1 shows that the cetane number of the mixture increases with increasing total excess air ratio of natural gas and DME. Therefore, by controlling the fuel cetane number through a combination of two different fuels, the combustion process of an HCCI engine can be controlled. The combustion process of a homogeneous charge compression ignition engine is very sensitive to the fuel cetane number, which strongly influences the cycle indication parameters. Controlling the physical and chemical composition of the fuel, i.e. using a mixed two-component fuel whose component fractions are changed according to a known relationship (for example, as a function of engine operating mode parameters), was chosen as the basic method for operating the HCCI working process.
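A rough numerical illustration of this two-component control idea: under an idealized linear blending rule (an assumption for illustration only; reference [37] does not give this correlation, and the component cetane numbers below are typical literature values, taking natural gas as CN ≈ 0 and DME as CN ≈ 55), the DME fraction needed to keep the mixture inside the 24-31 cetane window quoted below can be computed directly:

```python
def blend_cetane(cn_low: float, cn_high: float, x_high: float) -> float:
    """Idealized linear blending rule: cetane number of a two-component
    fuel as a mass-fraction-weighted average (real blends deviate)."""
    if not 0.0 <= x_high <= 1.0:
        raise ValueError("mass fraction must lie in [0, 1]")
    return (1.0 - x_high) * cn_low + x_high * cn_high

# Assumed component values: natural gas CN ~ 0, DME CN ~ 55.
CN_NG, CN_DME = 0.0, 55.0

# Sweep the DME mass fraction to find blends inside the 24-31 CN window.
window = [x / 100 for x in range(0, 101)
          if 24.0 <= blend_cetane(CN_NG, CN_DME, x / 100) <= 31.0]
print(f"DME mass fraction for CN 24-31: {window[0]:.2f}-{window[-1]:.2f}")
```

Under these assumed values, only a fairly narrow band of DME fractions lands in the window, which is consistent with the idea that mode-dependent regulation of the component fractions is required.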
Natural-Gas (NG) with Dimethyl Ether (DME)
Fig 2: Relation between total excess air ratio (λtotal) and excess air ratio of NG + DME
A-98 with DME
Fig 3: Relation between total excess air ratio (λtotal) and excess air ratio of NG + DME & A-98 + DME
CnH2n+2 with DME, [n = 1…4]
Fig 4: Relation between total excess air ratio (λtotal) and excess air ratio of NG + DME & CnH2n+2 + DME
Diesel-45 with Natural-Gas
Fig 5: Relation between total excess air ratio (λtotal) and excess air ratio of NG + DME & Diesel-45 with Natural-Gas
Diesel-55 with Natural-Gas
Fig 6: Relation between total excess air ratio (λtotal) and excess air ratio of NG + DME & Diesel-55 with Natural-Gas
CnH2n+2 with Natural-Gas, [n = 5…10]
Fig 7: Relation between total excess air ratio (λtotal) and excess air ratio of NG + DME & CnH2n+2 with Natural-Gas
As shown in the figures above, HCCI has been achieved with multiple fuels; fuels with any octane or cetane number can be burned. The tested engine was shown to realize the HCCI process with different components of a mixed fuel (natural gas, dimethyl ether, gasolines with different octane numbers, diesel fuel, and individual hydrocarbons, providing a range for changing the cetane number of the mixed fuel within limits of 24-31), and regulation relationships for these components providing maximum efficiency were used to improve the combustion behaviour, control the ignition timing and improve the thermal efficiency of the HCCI engine. The results demonstrated the possibility of using fuels with different physical and chemical properties in HCCI engines to control the ignition timing and the combustion process.
In summary, homogeneous mixtures of two fuels with different ignition characteristics, such as natural gas with dimethyl ether (DME), A-98 with DME, CnH2n+2 with DME, or Diesel-45 with natural gas, can be used in a compression ignition engine to control the ignition and improve the thermal efficiency: by varying the composition of the fuel mixture, the ignition timing can be controlled.
HCCI engines can operate on any type of fuel as long as the fuel can be vaporized and mixed with air before ignition [38]. Since HCCI combustion is fully controlled by chemical kinetics, it is important to consider the fuel's auto-ignition point to achieve smooth engine operation, and different fuels have different auto-ignition points. Fig. 7 shows the initial intake temperature required for a fuel to auto-ignite when operating in HCCI mode. Methane clearly requires a high intake temperature and a high compression ratio to auto-ignite, as does natural gas, because its main component (typically 75%-95%) is methane.
Natural gas is easily adapted for use as a fuel due to its wide availability and its economic and environmental benefits [39]. Its high auto-ignition point gives it a significant advantage in dual-fuel diesel-natural gas operation by maintaining the high CR of a diesel engine while lowering emissions at the same time [39-41]. Methane was found suitable for high-CR engine operation [40], and results from a four-stroke HCCI engine simulation have shown that methane did not ignite if the intake temperature was less than 400 K at CR = 15 [42], whereas methane will auto-ignite at intake temperatures below 400 K only when CR > 18.
Increasing the Indicated Mean Effective Pressure (IMEP) can reduce the intake temperature required in an HCCI engine, and increasing the CR has the same effect [43]. However, the intake temperature required for hydrogen is lower than that for natural gas in HCCI engines even without increasing the IMEP or the CR [44], because hydrogen has a lower density than natural gas. Hydrogen can operate as a single fuel in an HCCI engine, but it works in an unstable condition and is prone to knocking [45]. It has the highest diffusivity in air, about 3-8 times faster than other fuels, which leads to fast mixing [41], and the intake charge can be considered homogeneous when mixed with air [47]. Its net heating value is almost 3 times that of diesel (119.93 MJ/kg compared to 42.5 MJ/kg), with a high self-ignition temperature to initiate combustion (858 K) [45]. Hydrogen and
natural gas are mainly used as fuel additives or even as single fuels in IC engines due to their practicality and availability. Car manufacturers are producing cars powered by fuel cells (using hydrogen), as well as engines operated with compressed natural gas (CNG); these are purposely built to reduce emissions and to be more economical than gasoline and diesel vehicles. Iso-octane is used as a surrogate fuel for gasoline in engine experiments, while n-heptane is used for diesel [46,47]. Alcohol-derived fuels, as shown in Fig. 12, are not widely used due to their complexity to produce.
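The per-mass heating-value comparison quoted above for hydrogen and diesel is easy to verify with simple arithmetic:

```python
LHV_H2 = 119.93      # MJ/kg, lower heating value of hydrogen (from the text)
LHV_DIESEL = 42.5    # MJ/kg, lower heating value of diesel (from the text)

# Energy released per kilogram of fuel, hydrogen relative to diesel.
ratio = LHV_H2 / LHV_DIESEL
print(f"hydrogen/diesel energy per kg: {ratio:.2f}")  # ~2.82, i.e. "almost 3 times"

# The same 1 kg of hydrogen expressed in kWh (1 kWh = 3.6 MJ).
kwh_per_kg_h2 = LHV_H2 / 3.6
```

Note that this comparison is per unit mass; per unit volume the picture reverses, since hydrogen's low density was cited above as the reason for its lower required intake temperature.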
HCCI engines share features of both spark-ignited (SI) engines and diesel engines. Like SI engines, HCCI engines are generally premixed and run very lean, at fuel-air equivalence ratios < 0.5; thus they produce very low NOx and particulate matter (PM) emissions. Yet HCCI engines typically have high compression ratios, leading to efficiencies similar to those of diesel engines. In an SI engine, the combustion event is initiated by a spark, and the timing of the spark is routinely adjusted by an onboard computer called an electronic control unit (ECU). Similarly, the combustion event in a diesel engine is initiated by injection of the diesel fuel, and the injection time and duration are variable. The HCCI engine, however, has no spark plug or direct fuel injection; the combustion event occurs when the cylinder contents are hot enough (approximately 1000-1200 K) for a long enough time (on the order of 1 millisecond).
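The "hot enough, long enough" criterion in the last sentence is often formalized with a Livengood-Wu style ignition-delay integral: auto-ignition is predicted at the instant ∫ dt/τ(T) reaches 1, where τ is the ignition delay at the instantaneous charge temperature. The sketch below uses an assumed one-term Arrhenius delay correlation whose constants A and B are purely illustrative, not fitted to any real fuel:

```python
import math

def ignition_time(temp_of_t, t_end, dt=1e-6, A=1e-9, B=15000.0):
    """Livengood-Wu integral: accumulate dt/tau until it reaches 1.
    tau(T) = A * exp(B / T) is an assumed Arrhenius ignition-delay
    correlation (A in seconds, B in kelvin); returns (t_ign, T_ign),
    or None if the integral never reaches 1 before t_end."""
    integral, t = 0.0, 0.0
    while t < t_end:
        T = temp_of_t(t)
        integral += dt / (A * math.exp(B / T))
        if integral >= 1.0:
            return t, T
        t += dt
    return None

# Idealized end-of-compression history: charge heated linearly from
# 800 K to 1250 K over 3 ms (sweeping through the 1000-1200 K window).
ramp = lambda t: 800.0 + 150_000.0 * t
result = ignition_time(ramp, t_end=3e-3)
print(result)
```

With these assumed constants, τ is roughly 1 ms near 1100 K, so a charge ramped through the 1000-1200 K window accumulates almost all of its integral there, consistent with the millisecond residence time quoted above.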
The effect of biodiesel content on HCCI engine performance and emissions has been studied in combustion experiments performed on a two-cylinder engine, in which one cylinder operates in HCCI mode while the other operates on a conventional diesel engine cycle. The basic HCCI requirement of a homogeneous mixture of fuel and air is fulfilled by a port fuel injection strategy, in which an external mixing device, a fuel vaporizer, provides a highly premixed charge of fuel and air. The HCCI engine was operated with various blends of biodiesel (B20, B40, B60 and B80) and 100% biodiesel (B100). Experimental results of the engine tests included combustion and exhaust composition at different engine load and speed conditions. A partial flow dilution tunnel was used for particulate sampling, and the samples were further analyzed for various metal concentrations in biodiesel HCCI particulates vis-à-vis diesel HCCI particulates.
5. RECENT TRENDS IN HCCI TECHNOLOGY
The biggest challenge of HCCI in gasoline engines is controlling the combustion process. With spark ignition, the timing of the combustion can be easily adjusted by the powertrain control module through control of the spark event; that is not possible with HCCI's flameless combustion [48]. The mixture composition and temperature must instead be changed in a complex and timely manner to achieve performance comparable to spark-ignition engines over the full range of operating conditions, including extreme temperatures, both hot and cold, as well as the thin-air effect of high-altitude driving. To overcome this, designers could use an engine that runs on gasoline but switches between spark ignition and diesel-style compression ignition when required. If successful, Bosch estimates that a gasoline engine with HCCI and existing technologies such as turbocharging and stop-start systems could be up to 30 percent more fuel efficient than a conventional engine of the same performance. Its first prototype engine will be a 2.0-liter GM Ecotec unit fitted with a supercharger, a turbocharger, direct fuel injection, a stop-start system, variable valve timing, and HCCI compatibility. Researchers hope their prototype will be as powerful as GM's 3.6-liter V-6 but with the 30 percent target for a reduction in fuel consumption. The technology still appears to be in its early days, as the prototype engine will not be completed until sometime in 2014, meaning any commercial release may not appear until closer to the end of the decade.
The Volkswagen Golf 2012 satisfies the demand for performance and driving stability with built-in technology that addresses a wide range of engine requirements [49]. Power in the 2012 Volkswagen Golf is delivered by a 2.0-litre turbocharged four-cylinder capable of 266 horsepower. Buyers are confronted with the choice of either a conventional six-speed manual transmission or VW's DSG dual-clutch automated manual. The standard all-wheel drive permits sure-footed handling and thus ensures a comfortable drive; it has also been reworked for livelier response and to send power rearward more readily. With sharper handling and quicker acceleration, the Volkswagen Golf 2012 is a much-desired machine that is well worth the investment. The 2012 Golf will also feature an HCCI engine: a petrol engine that behaves much like a diesel engine, its behavior depending primarily on the compression required for given conditions. The 7-speed DSG will mark the drivetrain of the 2012 Golf. The car has three wheelbases: short for the hatchbacks, midsize for the Jetta, Tiguan and Golf Variant, and long for the Touran and Tiguan XL. Luggage capacity is likely to grow accordingly; the 2012 Golf will have a luggage capacity of 405 litres.
Surprisingly, Mazda is passing on today's popular trend of downsized, turbocharged engines, say a 1.4-liter turbo instead of its 2.0-liter [50]. The company says the next generation of gasoline engines, which will employ HCCI (Homogeneous Charge Compression Ignition), essentially firing a gasoline engine like a diesel without using the spark plugs, will erode the benefits of downsized engines. Smaller engines reduce pumping losses by operating at a higher load (the throttle is open further) more often. In the same way, HCCI engines will have to flow more air to realize the fuel-saving, lean-combustion benefits of that cycle. Mazda claims that if it downsized the Sky family of engines, they would not be able to flow enough air for HCCI without upsizing once again. Plus, as Mazda rightly points out, adding a turbocharger and an intercooler is quite a pricey proposition.
6. SUMMARY & FUTURE DIRECTION
It is expected that the vehicle density will increase significantly in coming future, therefore more strict emission regulations
has to come. It is very important to make the compulsory use of the control techniques in the vehicles to meet the emission
standards. The future study to be focused be on the development of alternative diesel emission control techniques based on
the future Indian emission standards. Many challenges remain before HCCI engines are practical. One of the major challenges
of HCCI is controlling the combustion timing. Combustion timing is defined as the crank angle at which 50% heat release occurs,
often called CA50. Each stroke of the piston in the cylinder occurs over 360 crank angle degrees. The point of peak compression
is called top dead centre (TDC) and engine timings are referred to in times of degrees after top dead centre (ATDC). Another
issue for HCCI engines is that the pressure rise occurs very rapidly, because auto-ignition occurs nearly simultaneously throughout the combustion chamber. This rapid pressure rise can lead to noise and potentially damaging knocking conditions within the engine. By avoiding the detrimental effects of rapid pressure rise, an increase in the power output of HCCI engines can be achieved. Understanding of the HCCI combustion process can be greatly aided by exploration of the chemical processes occurring during combustion, such as the effect of fuel structure on combustion timing. It is possible to observe the combustion characteristics of the fuel-in-air charge by collecting exhaust samples at different combustion timings. Combustion timing is determined by a number of parameters, such as equivalence ratio, intake manifold pressure, and intake manifold temperature; the primary influence is the intake manifold temperature of the fuel-in-air mixture inducted into the engine combustion chamber. HCCI research has continued over the past 15 years. Experiments have been conducted in four-stroke engines operating on fuels as diverse as gasoline, diesel, methanol, ethanol, LPG and natural gas, with and without fuel additives such as iso-propyl nitrate, dimethyl ether (DME) and di-tertiary butyl peroxide (DTBP).
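The CA50 metric defined earlier in this section is extracted from a cumulative heat-release curve by interpolating for the crank angle at which half of the total heat release has occurred. A minimal sketch on synthetic data (the heat-release profile below is invented purely to exercise the function, not taken from any measurement):

```python
def ca50(angles, cum_heat):
    """Crank angle (deg ATDC) at which cumulative heat release first
    reaches 50% of its final value, with linear interpolation."""
    target = 0.5 * cum_heat[-1]
    for i in range(1, len(cum_heat)):
        if cum_heat[i] >= target:
            # Linear interpolation between samples i-1 and i.
            frac = (target - cum_heat[i - 1]) / (cum_heat[i] - cum_heat[i - 1])
            return angles[i - 1] + frac * (angles[i] - angles[i - 1])
    raise ValueError("cumulative heat release never reaches 50%")

# Synthetic cumulative heat release (J) sampled every 1 deg crank angle:
# flat before TDC, then a linear ramp releasing 400 J over 20 degrees.
angles = list(range(-10, 21))                           # -10 to 20 deg ATDC
cum_heat = [0.0] * 10 + [20.0 * k for k in range(21)]   # ramp after TDC
print(ca50(angles, cum_heat))  # -> 10.0
```

In practice the cumulative heat release would itself be computed from measured cylinder pressure, but the interpolation step shown here is the same.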
From these investigations and many others in the past five years, it appears that the key to implementing HCCI is controlling the charge auto-ignition behaviour, which is driven by the combustion chemistry. Even more than in conventional IC engines, compression ratio is a critical parameter for HCCI engines. With high-octane fuels, the higher the compression ratio the better, in order to ignite the mixture at idle or near-idle conditions. However, compression ratios beyond 12 are likely to produce severe knock problems for the richer mixtures used at high load. The best compromise appears to be the highest possible CR that still gives satisfactory full-load performance; the choice of optimum compression ratio is not clear-cut and may have to be tailored to the fuel and to the other techniques used for HCCI control. An HCCI engine with VCR or VVT could, in principle, operate on any hydrocarbon or alcohol liquid fuel, as long as the fuel is vaporized and mixed with the air before ignition. The importance of
turbulence/chemistry interactions on the global ignition event and on emissions in HCCI engines has been demonstrated using multidimensional simulations. For lean, low-temperature operating conditions, engine-out NOx levels are low, NOx pathways other than thermal NO are dominant, engine-out NO2/NO ratios are high, and in-cylinder inhomogeneity and unmixedness must be considered for accurate emissions predictions. Devices such as electrical heaters, heat exchangers, and exhaust gas recirculation (EGR) control the
intake manifold temperature (Tin). The composition of the fuel also plays a major part in the ignition process, as different fuels possess different auto-ignition characteristics. Future work should focus more on the technical feasibility of burning blends of natural gas, hydrogen, and DME to improve engine performance, efficiency, and emissions (NOx, CO, CO2, THC, and PM are of interest). Cold-start of the HCCI engine can be handled with micro-pilot F-T synthetic diesel fuel injection, with all engine performance parameters (indicated mean effective pressure, indicated specific fuel consumption, temperatures, pressures, flow rates, etc.) and emissions (NOx, CO, CO2, THC, CH4, and O2) data to be analyzed. Despite the advantages in terms of higher overall fuel efficiency and lower emissions compared to conventional IC engines, the HCCI engine suffers from several drawbacks, such as the difficulty of controlling ignition timing, low power density, poor performance at high loads, and high unburnt hydrocarbon emissions [51]. These technological difficulties are a considerable impediment to widespread adoption of HCCI technology and are topics of current research. Charge stratification, i.e. the introduction of controlled inhomogeneities in the temperature and composition of the HCCI charge, is considered a viable solution to the difficulties mentioned above. The presence of inhomogeneities in the charge (stratification) is a challenge to current HCCI modelling methods. The Reactive Flow Modelling Laboratory at KAUST is developing computationally affordable methodologies to numerically predict ignition in an HCCI engine with a thermally and compositionally stratified charge, and also performs research into the fundamentals of the ignition process in HCCI engines under stratification.
REFERENCES
[1] K. Yeom, J. Jang, Ch. Bae, “Homogeneous charge compression ignition of LPG and gasoline using variable valve timing in an engine,’’
International Journal of Fuel, 86, 494-503, 2007.
[2] L. Xingcai, H. Yuchun, Z. Linlin, H. Zhen, “Experimental study on the auto-ignition and combustion characteristics in the homogeneous charge
compression ignition (HCCI) combustion operation with ethanol/n-heptane blend fuels by port injection,’’ Fuel, 85, 2622–2631, 2006.
[3] A. C. Alkidas, “Combustion advancements in gasoline engines,’’ International Journal of Energy Conversion and Management, 48, 2751-2761,
2007.
[4] C. S. Daw, R. M. Wagner , K. D. Edwards, J. B. Green, “Understanding the transition between conventional spark-ignited combustion and HCCI
in a gasoline engine,’’ Proceedings of the Combustion Institute, 31, 2887–2894, 2007.
[5] X. Lü, L. Ji, L. Zu, Y. Hou, C. Huang, Z. Huang, “Experimental study and chemical analysis of n-heptane homogeneous charge compression
ignition combustion with port injection of reaction inhibitors,’’ International Journal of Combustion and Flame, 149, 261-270, 2007.
[6] J.J. Hernandez, J. Sanz-Argent, J. Benajes, S. Molina, “selection of a diesel fuel surrogate for the prediction of auto-ignition under HCCI engine
conditions,’’ International Journal of Fuel,87, 655–665, 2008.
[7] X. C. Lü, W. Chen, Z. Huang, “A fundamental study on the control of the HCCI combustion and emissions by fuel design concept combined with controllable EGR. Part 2. Effect of operating conditions and EGR on HCCI combustion,’’ International Journal of Fuel, 84, 1084-1092, 2005.
[8] M. Yao, Z. Chen, Z. Zheng, B. Zhang, Y. Xing, “Study on the controlling strategies of homogeneous charge compression ignition combustion with fuel of dimethyl ether and methanol,’’ International Journal of Fuel, 85, 2046-2056, 2006.
[9] J. Ma, X. Lü, L. Ji, Z. Huang, “An experimental study of HCCI-DI combustion and emissions in a diesel engine with dual fuel,’’ International Journal of Thermal Sciences, 47, 1235-1242, 2008.
[10] L. Shi, Y. Cui, K. Deng, H. Peng, Y. Chen, “Study of low emission homogeneous charge compression ignition (HCCI) engine using combined internal and external exhaust gas recirculation (EGR),’’ International Journal of Energy, 31, 2665–2676, 2006.
[11] A. Dubreuil, F. Foucher, Ch. Mounaı¨m-Rousselle, G. Dayma ,Ph. Dagaut, “HCCI combustion: Effect of NO in EGR,’’ Proceedings of the
Combustion Institute, 31, 2879-2886, 2007.
[12] M. Sjoberg, J. E. Dec, “Comparing late-cycle autoignition stability for single- and two-stage ignition fuels in HCCI engines,’’ Proceedings of the
Combustion Institute, 31 , 2895–2902, 2007.
[13] N.K. Miller Jothi, G. Nagarajan, S. Renganarayanan, “LPG fueled diesel engine using diethyl ether with exhaust gas recirculation,’’
International Journal of Thermal Sciences, 47, 450-457, 2008.
[14] D. S. Kim, Ch. S. Lee, “Improved emission characteristics of HCCI engine by various premixed fuels and cooled EGR,’’ International Journal of Fuel, 85, 695–704, 2006.
[15] X. C. Lü, W. Chen, Z. Huang, “A fundamental study on the control of the HCCI combustion and emissions by fuel design concept combined with controllable EGR. Part 1. The basic characteristics of HCCI combustion,’’ International Journal of Fuel, 84, 1074-1083, 2005.
[16] J. Chang, Z. Filipi, D. Assanis, T-W Kuo, P. Najt, R. Rask., “Characterizing the thermal sensitivity of a gasoline homogeneous charge
compression ignition engine with measurements of instantaneous wall temperature and heat flux,’’ International Journal of IMechE, 2005.
[17] P. Yaping, T. Manzhi, G. Liang, L. Fafa, L. Hua , G. Yingnan, “Study the ethanol SI/HCCI combustion mode Transition by using the fast
thermal management system, ’’Chinese Science Bulletin, vol. 52, no. 19 , 2731-2736, 2007.
[18] J. Olsson, P. Tunestal, B. Johansson, S. Fiveland, R. Agama, M. Willi , D. Assanis,“Compression Ratio Influence on Maximum Load of a
Natural Gas Fueled HCCI Engine,’’ SAE, 02P-147, 2002.
[19] M. Canakci, “An experimental study for the effects of boost pressure on the performance and exhaust emissions of a DI-HCCI gasoline engine,’’ International Journal of Fuel, 87, 1503-1514, 2008.
[20] K. Inagaki, T. Fuyuto, K. Nishikawa, K. Nakakita, I. Sakata, “combustion system with premixture-controlled compression ignition,’’R&D
Review of Toyota CRDL Vol.41 No.3, 2006.
[21] A report to the U.S. Congress, “Homogeneous Charge Compression Ignition (HCCI) Technology ’’, 2001.
[22] M. F. Yao, Z. L. Zheng and H. F. Liu, "Progress and recent trends in homogeneous charge compression ignition (HCCI) engines," Prog Energy
Combust Sci, vol. 35, pp. 398-437, Oct 2009.
[23] A. S. Premier. (2007, May 17, 2010). HCCI could cut fuel consumption by 15%. Advanced Materials and Processes, p. 27.
[24] A. S. Premier. (2004, May 17, 2010). And they're not hybrids: HCCI engines could bring breakthrough fuel efficiency. Machine Design (21),43-
44.
[25] Jacques L., Dabadie J., Angelberger C., Duret P., Willand J., Juretzka A., Schaflein J., Ma T., Lendresse Y., Satre A., Schulz C., Kramer H., Zhao
H., Damiano L., Innovative Ultra-low NOx Controlled Auto-Ignition Combustion Process for Gasoline Engine: the 4-SPACE Project. SAE Technical
Paper Series 2000, SAE 2000-01-1837.
[26] Ryo H., Hiromichi Y., HCCI Combustion in DI Diesel Engine. SAE Technical Paper Series 2003, SAE 2003-01-0745.
[27] Zheng Z., Yao M., Chen Z., Zhang B., Experimental Study on HCCI Combustion of Dimethyl Ether (DME) / Methanol Dual Fuel. SAE
Technical Paper Series 2004, SAE 2004-01-2993.
[28]Yap D., Megaritis A., Wyszynski M. L., An Investigation into Bioethanol Homogeneous Charge Compression Ignition (HCCI) Engine
Operation with Residual Gas Trapping. Energy & Fuels 2004, 18, 1315-1323.
[29] Yap D., Megaritis A., Wyszynski M. L., An Experimental Study of Bioethanol HCCI. Combustion Science and Technology 2005 (subm Feb).
[30] Jun D., Iida N., A Study of High Combustion Efficiency and Low CO Emission in a Natural Gas HCCI Engine. SAE Technical Paper Series
2004, SAE 2004-01-1974.
Analysis and optimization of surface roughness in the
dry Machining of Titanium Alloy Ti-6Al-4V using PVD
coated carbide insert using response surface
methodology and desirability function
K. Saraswathamma, S. Venkatarami Reddy
K. Saraswathamma is currently Assistant Professor in the Mechanical Engineering Department, University College of Engineering, Osmania University, Hyderabad, India. [email protected]
S. Venkatarami Reddy is currently pursuing a master’s degree in the Mechanical Engineering Department, University College of Engineering, Osmania University, Hyderabad, India.
Keywords
Ti alloy Ti-6Al-4V, PVD coated tool, Response surface methodology, surface roughness, speed, feed, depth of cut.
ABSTRACT
In the present study, an optimization strategy based on the desirability function approach (DFA) together with response surface methodology (RSM) has been used to optimize the dry machining of titanium alloy Ti-6Al-4V using a PVD coated carbide insert. A quadratic regression model was developed to predict surface roughness using RSM with a Box-Behnken design. In the development of the predictive models, cutting speed, feed and depth of cut were considered as model variables. The results indicated that feed and cutting speed were the significant factors affecting surface roughness.
1. INTRODUCTION
In today’s manufacturing industry, special attention is given to dimensional accuracy and surface finish. Measuring and characterizing surface roughness can therefore be regarded as a predictor of machining performance. In CNC turning, material is removed from the rotating workpiece by a single-point cutting tool, and the dimensional accuracy and surface roughness obtained depend strongly on the selected cutting parameters. Besides producing a good surface finish, careful selection of the turning parameters offers additional advantages such as increased hardness, corrosion resistance and fatigue life as a result of the induced compressive residual stress, together with short lead times [1].
The use of titanium alloys, especially Ti-6Al-4V, in industrial applications is increasing rapidly due to their high strength-to-weight ratio, biocompatibility and robust mechanical properties at high temperatures. The aerospace and marine industries, gas turbines and biomedical implants are some of the areas of application of this alloy. However, poor machinability caused by poor thermal properties and high reactivity at high temperatures hinders the manufacturing of Ti-6Al-4V parts [2].
The literature review indicates that earlier investigations concentrated on experimental studies of the effect of cutting parameters on surface roughness in the turning of titanium alloy using response surface methodology. Very few studies have focused on experimental analysis at low speeds using CNC turning in a dry environment with a PVD coated insert. Rajendra Pawar and Raju Pawade [3] investigated the effect of machining parameters in CNC turning of Ti-6Al-4V in a dry environment. Mohsen Ghahramani Nik, Mohammad R. Movahhedy and Javad Akbari [4] investigated the effect of the imposition of ultrasonic vibration on the grinding of Ti-6Al-4V alloy. Narasimhulu Andriya, P. Venkateswara Rao and Sudharshan Ghosh [5] conducted experiments under dry conditions using PVD TiAlN inserts in Ti-6Al-4V turning. Syed Imran Jaffery and Paul T. Mativenga [6] studied the wear mechanisms associated with tool deterioration across different regions of the wear map. Anil K. Srivatsava, Xueping Zhang, Tim Bell and Steve Cadigan [7] investigated the turning of Ti-6Al-4V titanium alloy using super-finished tool edge geometry generated by a micro-machining process. M. Venkatramana, K. Srinivasulu and G. Krishna Mohan Rao [8] studied the performance evaluation and optimization of process parameters in the turning of Ti-6Al-4V alloy under different cooling conditions with an uncoated carbide tool, using Taguchi's design of experiments methodology on surface roughness. Satyanarayana Kosaraju, Venugopal Anne and Bangarubabu Popuri [9] investigated the effect of process parameters on machinability characteristics, and thereby the optimization of the turning of a Ti-based alloy, using the Taguchi method. Z. G. Wang et al. [10] investigated the effect of different coolant supply strategies (flood coolant, dry cutting, and minimum quantity lubrication [MQL]) on cutting performance in continuous and interrupted turning of Ti-6Al-4V. L. Karunamoorthy and K. Palanikumar [11] conducted an experimental study of the effect of cutting parameters on surface roughness in the turning of titanium alloy using response surface methodology.
2. EXPERIMENTAL DETAILS
CNC turning experiments were conducted on Ti6Al4V work piece rods of 12 mm diameter and 250 mm length. First, the
work piece was held in the CNC lathe chuck, facing was completed on both sides, and the rod was cut into 15 pieces. A
PVD-coated single-point carbide cutting tool was fixed in the tool post of the CNC lathe, and the work pieces were turned
at various speeds, feeds and depths of cut. Table 1 lists the coded and actual levels of the process parameters used in
machining the Ti6Al4V alloy. The experimental plan is given in Table 2.
TABLE 1: PROCESS PARAMETERS AND THEIR LEVELS

Process parameter   Units     Level -1   Level 0   Level +1
Speed               rpm       40         60        80
Feed                mm/rev    0.04       0.06      0.08
Depth of cut        mm        0.5        1         1.5
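The 17-run plan in Table 2 is a standard three-factor Box-Behnken layout: 12 edge-midpoint runs (each pair of factors at ±1, the third held at its centre) plus 5 centre points. As an illustrative sketch, not part of the original study, the coded design and its mapping to the actual levels of Table 1 can be generated as follows:

```python
from itertools import combinations

# Actual factor levels from Table 1, indexed by coded level -1 / 0 / +1
levels = {
    "speed": (40, 60, 80),       # rpm
    "feed": (0.04, 0.06, 0.08),  # mm/rev
    "doc": (0.5, 1.0, 1.5),      # depth of cut, mm
}

def box_behnken(n_factors=3, n_center=5):
    """Coded Box-Behnken design: +/-1 on each factor pair, 0 elsewhere."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * n_factors
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * n_factors for _ in range(n_center)]
    return runs

def decode(row):
    """Map a coded row (-1/0/+1) to actual factor levels."""
    return [levels[name][c + 1] for name, c in zip(levels, row)]

design = box_behnken()
print(len(design))  # 17 runs, matching Table 2
```

The run order in Table 2 is a randomised permutation of this standard order, which guards against systematic drift in the machine or tool over the experiment.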
TABLE 2: EXPERIMENTAL PLAN AND SUMMARY OF RESPONSES

Std. order   Run order   Speed (rpm)   Feed (mm/rev)   DOC (mm)   Surface roughness Ra (µm)
1            15          40            0.04            1          0.50
2            11          80            0.04            1          0.49
3            12          40            0.08            1          0.74
4            10          80            0.08            1          0.67
5            13          40            0.06            0.5        0.49
6            1           80            0.06            0.5        0.45
7            8           40            0.06            1.5        0.73
8            7           80            0.06            1.5        0.55
9            9           60            0.04            0.5        0.37
10           17          60            0.08            0.5        0.63
11           3           60            0.04            1.5        0.46
12           2           60            0.08            1.5        0.68
13           16          60            0.06            1          0.62
14           4           60            0.06            1          0.60
15           5           60            0.06            1          0.59
16           6           60            0.06            1          0.63
17           14          60            0.06            1          0.62
3. RESULTS AND DISCUSSION
To assess the significance of the regression equation in explaining the relationship between surface roughness and the
controllable process parameters, an F-test from the analysis of variance (ANOVA) was conducted. The percentage
contribution of each model term to the response variable was determined from its sum of squares.
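As a sketch of the computation behind this ANOVA, the reduced model retained later in the paper (terms A, B, C, AC, C²) can be fitted to the Table 2 data by ordinary least squares and its overall F-value computed. This is an illustration only, not the authors' Design-Expert output, and the standard ±1 coding of the factors is assumed; rounding in Table 2 means the values need not match Table 3 exactly.

```python
import numpy as np

# Table 2 data: speed (rpm), feed (mm/rev), depth of cut (mm), Ra (um)
data = np.array([
    [40, 0.04, 1.0, 0.50], [80, 0.04, 1.0, 0.49],
    [40, 0.08, 1.0, 0.74], [80, 0.08, 1.0, 0.67],
    [40, 0.06, 0.5, 0.49], [80, 0.06, 0.5, 0.45],
    [40, 0.06, 1.5, 0.73], [80, 0.06, 1.5, 0.55],
    [60, 0.04, 0.5, 0.37], [60, 0.08, 0.5, 0.63],
    [60, 0.04, 1.5, 0.46], [60, 0.08, 1.5, 0.68],
    [60, 0.06, 1.0, 0.62], [60, 0.06, 1.0, 0.60],
    [60, 0.06, 1.0, 0.59], [60, 0.06, 1.0, 0.63],
    [60, 0.06, 1.0, 0.62],
])
# Coded factors: A = speed, B = feed, C = depth of cut
A = (data[:, 0] - 60) / 20
B = (data[:, 1] - 0.06) / 0.02
C = (data[:, 2] - 1.0) / 0.5
y = data[:, 3]

# Reduced model: intercept + A + B + C + AC + C^2
X = np.column_stack([np.ones_like(y), A, B, C, A * C, C**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

yhat = X @ beta
ss_total = np.sum((y - y.mean()) ** 2)
ss_resid = np.sum((y - yhat) ** 2)
ss_model = ss_total - ss_resid
p = X.shape[1] - 1            # number of model terms (excluding intercept)
n = len(y)
f_model = (ss_model / p) / (ss_resid / (n - p - 1))
r2 = 1 - ss_resid / ss_total
print(f"Model F = {f_model:.2f}, R^2 = {r2:.3f}")
```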
3.1 Response surface regression analysis
The sequential model sum of squares was calculated to select the highest-order polynomial in which the additional terms
are significant and the model is not aliased. The sequential model sum of squares (technically "Type I") shows how terms of
increasing complexity contribute to the total model. Adding the quadratic terms to the two-factor interaction (2FI) and
linear terms gives the most significant improvement, with the highest F-value and the lowest p-value, suggesting its
suitability. A lack-of-fit test was calculated for each model; for the selected model, the lack of fit should be
insignificant (smallest F-value).
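The sequential (Type I) sum of squares simply measures how much the residual sum of squares drops as each block of terms (linear, then 2FI, then quadratic) is added in order. A minimal sketch on synthetic coded data follows; it is illustrative only, since the paper's values come from its own design matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 17-run, 3-factor coded design (illustration, not the paper's data)
X3 = rng.choice([-1.0, 0.0, 1.0], size=(17, 3))
a, b, c = X3.T
y = 0.6 + 0.05*a + 0.1*b + 0.08*c + 0.04*a*c - 0.06*c**2 \
    + rng.normal(0, 0.02, 17)

def sse(cols):
    """Residual sum of squares for an OLS fit on intercept + given columns."""
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

blocks = {
    "linear":    [a, b, c],
    "2FI":       [a*b, a*c, b*c],
    "quadratic": [a**2, b**2, c**2],
}
cols, prev = [], sse([])   # sse([]) is the total SS about the mean
for name, terms in blocks.items():
    cols += terms
    cur = sse(cols)
    print(f"{name:>9}: sequential SS added = {prev - cur:.4f}")
    prev = cur
```

Because the models are nested, each block's sequential SS is non-negative; the block whose SS is large relative to the remaining residual (i.e. has a high F-value) is the one worth keeping, which is the criterion applied above.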
On the basis of the sequential model sum of squares and the lack-of-fit test, a quadratic model was selected. Initially,
all quadratic model terms (A, B, C, AB, AC, BC, A², B² and C²) were included in the response surface model. After dropping
the insignificant terms AB, BC, A² and B², the analysis of variance (ANOVA) is presented in Table 3. The model F-value of
31.322 implies that the model is significant, a clear improvement over the previous model with all interaction terms; there
is only a 0.01% chance that a "Model F-value" this large could occur due to noise. Values of "Prob > F" less than 0.1
indicate that the model terms are significant; in this case A, B, C, AC and C² are significant model terms. The
"Pred R-squared" of 0.73 is in reasonable agreement with the "Adj R-squared" of 0.90.