Journal of Manufacturing Systems 32 (2013) 700–714
Technical Paper
A new optimization approach for nozzle selection and component
allocation in multi-head beam-type SMD placement machines
S.A. Torabi a,∗, M. Hamedi a, J. Ashayeri b
a School of Industrial Engineering, College of Engineering, University of Tehran, Tehran, Iran
b Department of Econometrics & Operations Research, Tilburg University, Tilburg, The Netherlands
Article info
Article history:
Received 3 April 2011
Received in revised form 25 August 2013
Accepted 22 September 2013
Available online 19 October 2013
Keywords:
PCB assembly
Multi-head beam-type SMD machine
Augmented epsilon-constraint method
Particle Swarm Optimization
Taguchi Method
Abstract
This paper addresses a highly challenging scheduling problem arising in multi-head beam-type surface mounting device (SMD) machines. An integrated mathematical model is formulated that aims to balance workloads over multiple heads as well as to improve the traveling speed of the robotic arm, by incorporating appropriateness factors into the model to evaluate the compatibility of component-nozzle pairs.
The proposed model is a bi-objective mixed-integer nonlinear program, which is first linearized and then solved directly by the augmented epsilon-constraint method for small problem instances. As the model turns out to be NP-hard, we also develop a Multi-Objective Particle Swarm Optimization (MOPSO) algorithm to solve medium- and large-sized instances. The parameters of the proposed MOPSO are tuned using the Taguchi method, and corresponding numerical results are provided.
© 2013 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved.
1. Introduction
The growing demand for electronic devices has made the manufacturing of printed circuit boards (PCBs) a promising industry over the last decades. As this demand increases, the industry becomes more dependent on highly automated assembly processes using surface mounting devices (SMD).
Surface mount technology (SMT) has displaced through-hole technology as the primary means of assembling PCBs. It has also made
it easy to automate the PCB assembly process. The component
placement machine is probably the most important piece of manufacturing equipment on a surface mount assembly line (Hardas
et al. [1]). As SMT becomes popular, different types of placement
machines have arisen. For a well-organized classification of placement machines based on their operational methods the reader is
referred to Ayob and Kendall [2]. Among component placement machines, multi-head gantry-type machines are becoming increasingly popular because they provide high mounting speed at relatively low cost. The gantry robot, which moves components between the feeder racks and the PCB, is nowadays equipped with multiple heads to reduce the number of pick-and-place cycles. The heads are arranged sequentially on a beam or a rotating wheel on the gantry robot. The former is called beam-type
∗ Corresponding author. Tel.: +98 21 88335605.
E-mail addresses: satorabi@ut.ac.ir (S.A. Torabi), mhamedy@ut.ac.ir (M. Hamedi),
j.ashayeri@uvt.nl (J. Ashayeri).
while the latter is called collect-and-place type (Sun and Lee [3]). Both machine types can have single or multiple arms.
Production planning and control of SMT facilities can be considered as involving a hierarchy of six fundamental types of decision
problems, which can be grouped into three distinct, yet interdependent, levels. The machine level of the hierarchy deals with decisions
focusing on making the best use of the capabilities of an individual
machine (Rogers and Warrington [4]). McGinnis et al. [5] summarize the optimization problems of an SMD machine (at the machine level) as feeder assignment and placement sequencing. The proper
assignment of component types to feeders in placement machines
and the placement sequence of components on the PCB are the main
factors greatly affecting the production cycle time of each machine
and the whole SMT line (Crama et al. [6]). These problems are highly interrelated and very difficult to solve simultaneously. Therefore, during the last decade, most research on minimizing PCB assembly time has focused on solving these problems separately by decoupling one from the other (Lee et al. [7]). Many research
works have been devoted to these complex problems by developing various mathematical models and solution approaches. For
example, Ball and Magazine [8] modeled the sequencing problem in
a sequential pick-and-place machine as a directed postman problem. They suggested that the balance and connect heuristic can be
applied to this problem. Leipala and Nevalainen [9] dealt with the
placement sequencing sub-problem in the same machine of [8] as a
three dimensional asymmetric traveling salesman problem whilst
the feeder assignment sub-problem was modeled as a quadratic
assignment problem. Khoo and Loh [10] modeled the problem of
http://dx.doi.org/10.1016/j.jmsy.2013.09.005
assembling a printed circuit board with a chip shooter as a multiobjective problem. They applied a genetic algorithm to generate
the placement sequences and the feeder assignment. It should be noted that the chip shooter is a type of SMD machine whose heads are arranged on a revolving turret, with picking and placing operations performed by rotating the turret. Ho and Ji [11] developed a
hybrid genetic algorithm to integrate placement sequencing, feeder
assignment and component retrieval sub-problems in a sequential pick-and-place machine. Their algorithm was found to perform better than conventional genetic algorithms, especially when additional feeders were available so that some component types could be assigned to multiple feeders. Moon [12] developed two different methods that use special features of printed circuit boards to simultaneously improve the component rack assignment and the component mounting sequencing in chip shooter machines such as the Panasert MSH-II, Fuji CP-II and CP-IV. Based on results from field surveys, it was found that identical components are positioned close to each other, or that identical single boards are repeatedly printed on one large board to reach a size suitable for assembly in the machine. These patterns were adopted in the design of the assembly methods to increase productivity. Simulation models were also constructed to evaluate the performance of the developed heuristics.
As the focus of this paper is on multi-head placement machines,
we now take a closer look at modeling and solution approaches
which have been developed to deal with the relevant optimization problems. SMD machines with multiple heads are among the most popular placement machines due to their relatively high speed and low price. However, the complexity of their operation makes the respective optimization problems more difficult to solve. The literature regarding these machines is rather scarce.
Van Laarhoven and Zijm [13] applied a hierarchical procedure for
solving the optimization problems of a set of beam-type multi-head
placement machines with three placement heads. All sub-problems in the hierarchy were solved sequentially by a simulated annealing approach. They stated that their proposed method performs well in
balancing the workload over the machines. Magyar et al. [14] dealt
with the problem of sequencing of pick-and-place cycles; allocation
of nozzles to heads; and feeder assignment using a hierarchical approach. They considered a general surface mounting (GSM)
machine, which is a beam-type multi-head placement machine. Initially, they solved the feeder assignment sub-problem using a greedy local search. The output of the first sub-problem is used as the input to the nozzle optimization sub-problem, whose output in turn serves as the input to the component pick-and-place sub-problem, which is also solved using a greedy local search. Their approach significantly decreased the cycle
time. Lee et al. [7] applied a genetic algorithm for jointly solving the optimization sub-problems of a multi-head beam-type placement machine. They converted the optimization problem of a multi-head machine to a single-head case by grouping feeders and clustering components, and then applied single-head methods to the multi-head case. They also selected the partial-link structure for the
chromosomes. Hong et al. [15] implemented a biological immune algorithm for the optimization problem of a multi-head beam-type placement machine. Jeevan et al. [16] applied a genetic algorithm to minimize the cycle time in a beam-type multi-head machine.
They used the distance of a TSP tour as the fitness function of the genetic algorithm; however, they did not discuss the mathematical modeling or the chromosome definition in the paper. Grunow et al. [17] followed a hierarchical approach for the optimization problem
of a collect-and-place multi-head placement machine. They considered four sub-problems in their proposed hierarchy: (i) feeder assignment; (ii) sub-tour composition; (iii) sequencing the placement of components within a sub-tour; and (iv) sequencing the sub-tours. A three-stage approach is applied for solving
the sub-problems. Sub-problem (i) is solved in stage one using a
greedy algorithm. In the second stage sub-problems (ii), (iii) and
(iv) are solved by modeling them as a vehicle-routing problem.
Given the feeder assignment solution from stage one, the authors
sequence the component pick-and-place operations using a heuristic approach. The final stage of the solution approach improves the feeder assignment and the component pick-and-place sequence using a random descent 2-opt swapping procedure. Sun et al.
[18] considered the optimization performance of a dual-gantry
collect-and-place multi-head placement machine. They proposed
a hybrid genetic algorithm for solving the component allocation
and feeder assignment sub-problems, along with a greedy algorithm for balancing the workload over the placement heads. Raduly-Baka et al. [19] presented different approaches for determining the number of nozzles for populating a PCB type by a multi-head beam-type machine. They assumed that each component type can be handled by one nozzle type. Their nozzle selection problem is solved optimally using a three-phase greedy algorithm. They also investigated nozzle selection in the case of multiple PCB types. Kulak et al.
[20] proposed three different genetic algorithms for scheduling
operations of a collect-and-place placement machine. They considered the case of single and dual-gantry placement machines. They
integrated feeder assignment and placement sequencing using a
genetic algorithm. The authors claimed that their proposed genetic
algorithms are very efficient in terms of computational time, especially if adequate coding schemes are used. Recently, Sun and Lee
[3] developed a branch-and-price procedure for a placement routing problem for a beam-type multi-head placement machine. They
formulated the problem as an integer programming model with a
huge number of variables. They solved the linear relaxation of the
model by a column generation method. Li et al. [21] considered the cycle time minimization of a SONY SE-1000 machine, which belongs to the class of collect-and-place multi-head placement machines. They assumed that the mounting sequence is given in advance and solved the feeder assignment sub-problem using a genetic algorithm in which a uniform order crossover and an exchange mutation are applied. Ashayeri et al. [22] proposed a
hierarchical approach of two stages for solving the PCB component placement problem on a beam-type multi-head placement
machine. In the first stage they formulated a single-objective mixed
integer program (MIP) with the variables based on batches of
components. This MIP is tractable and effective in assigning the
components and nozzles to the placement heads. While the MIP
produces an optimal plan for batches of components, a new sequencing heuristic is developed to determine the final sequence of component placements based on the outputs of the MIP. They showed that this two-stage approach guarantees a good
feasible solution to the multi-head SMD placement optimization
problem.
The literature on multi-head SMD placement machines reveals that little attention has been paid to the optimal utilization of the placement heads. In addition to this gap, the moving speed of the robotic arm is highly dependent on the combination of nozzles and components currently loaded on the heads; proper combinations significantly improve machine utilization, an issue that has also been neglected in the literature. Our focus in this paper is on single-arm beam-type multi-head placement machines. In this regard, we
develop a novel mathematical model to deal with the heads-related
optimization problems in such placement machines by addressing
the existing gaps in the literature. The augmented epsilon constraint method is applied to convert the original bi-objective model
into a single objective one which can be directly used as the solution method in small problem instances. A Multi-Objective Particle
Swarm Optimization (MOPSO) algorithm is also proposed to solve
the medium and large-sized problem instances within a reasonable
amount of time.
The remainder of this paper is organized as follows. A brief description of the placement machine under consideration is provided in Section 2. Problem definition, formulation and a brief discussion about
the complexity of the proposed mathematical model are provided
in Section 3. The application of augmented ε-constraint method
in solving small-sized instances is discussed in Section 4. A detailed explanation of the proposed MOPSO algorithm is provided in Section 5, followed by numerical results in Section 6. Finally,
Section 7 is devoted to concluding remarks and future research
directions.
2. Machine description

Fig. 1 illustrates the schematic view of the multi-head beam-type placement machine considered in this paper. SMD machines such as the Yamaha YV-64/88/100, Samsung CP-40/50, Juki KE-750/760 and Assembléon AX 2.01 belong to this type of placement machine. The machine has a fixed PCB table, a feeder bank, an arm equipped with a number of placement heads, and an Automatic Nozzle Changer (ANC). The PCB remains fixed on the PCB table during the placement process. A fixed feeder bank is located on one side of the PCB table and consists of a number of slots for positioning the feeders. Electronic components are supplied to the machine by feeders. Multiple heads are located on the arm and move together with it in both the X and Y directions. The assembly process starts by moving the arm toward the feeder bank and picking up at most H (the number of heads) components, either simultaneously or one by one while moving along the feeder bank. Notably, it is possible to pick fewer than H components in one pick-and-place tour. The arm then moves to the PCB to place the picked components at their specific locations. When a head is positioned exactly above a placement location, it moves down in the Z direction and mounts its component on the board. After leaving the PCB table, the robotic arm first visits the ANC to exchange nozzles before going back to the feeder bank whenever one or more components in the next pick-and-place tour require a nozzle different from those currently in use. When the robotic arm is positioned at the ANC, the unnecessary nozzle is stored in an empty slot and a new nozzle is loaded on the head. After the nozzle exchange, the robotic arm moves toward the feeder bank, and these processes are repeated until all components are mounted on the PCB. Noteworthy, each head can use various types of nozzles for picking and placing components. Not every nozzle is suitable for handling every component type: large nozzles cannot pick small components and small nozzles cannot pick large components. Therefore, it is sometimes necessary to exchange nozzles. While a proper nozzle speeds up the placement process, the nozzle exchange is time-consuming and should be avoided as much as possible.

Fig. 1. The schematic view of a multi-head beam-type placement machine.

3. Problem definition and formulation

3.1. Problem hierarchy

Given a PCB type to be mounted with N components divided into T types using a multi-head beam-type placement machine equipped with H placement heads, the main problem can be described through the following sub-problems:

(1) Assignment of feeders to feeder slots.
(2) Partitioning the N components into a number of clusters, each consisting of at most H components (pertaining to a pick-and-place tour).
(3) Sequencing the component clusters and, within each cluster, sequencing the placement of its components, so that the cycle time of the machine (i.e., the time necessary to mount all components on the PCB) is minimized.

Obviously this problem is extremely complex. In order to reduce this complexity, a hierarchical decomposition approach is often applied (see for example Ayob and Kendall [23]). In the hierarchical approach, the main problem is decomposed into a series of sub-problems in such a way that the solution of each sub-problem generates the required input data for the next one. The following sub-problems give a more detailed description of the above decomposition:

i. For each pair of component type t (t = 1,...,T) and head h (h = 1,...,H), determine the number of components of type t to be handled by head h.
ii. For each pair of component type and head, determine the most appropriate nozzle type for handling the component on the head.
iii. For each pair of component type and head, determine which components of type t are to be mounted by head h.
iv. Determine the component clusters.
v. Sequence the component clusters.
vi. Sequence the placement of components within each cluster.

3.2. Problem statement and assumptions
This problem was inspired by a real case at Assembléon during consulting work done by one of the authors with the company. Assembléon, formerly known as Philips Electronic Manufacturing Technology, develops, assembles, and distributes a
diverse range of SMD machines (especially, single arm beam-type
multi-head placement machines) and provides a broad range of
related services.
In this paper, the best distribution (assignment) of components over the heads of a single-arm beam-type multi-head placement machine is considered as a separate objective in the proposed mathematical model: minimizing the load of the bottleneck head (i.e., the head with the maximum load). Furthermore, vacuum nozzles are used to handle the components on the heads during pick-and-place operations. Throughout the literature on SMD machines, the compatibility of each nozzle-component type pair has always been treated in a 0-1 manner, i.e., it is assumed that a nozzle either is or is not capable of handling a component type. In the real world, however, the situation is different: each component type can be handled by different nozzle types with different degrees of compatibility, which affects the traveling speed of the robotic arm.
Therefore, for the first time in the literature of SMD machines,
we introduce the appropriateness factors to evaluate the compatibility of each pair of nozzle-component type. Practically, we should
try to choose the most suitable nozzles for handling the components on the heads because the speed of the robotic arm depends
on the combination of nozzles and components currently loaded
on the heads. That is, large components picked with a small nozzle
cannot be moved as fast as smaller components picked with the
same nozzle. However, if the most suitable nozzles are applied for
handling the components, the arm can move faster. Accordingly,
maximizing the nozzles’ appropriateness function as the summation of all corresponding appropriateness factors in handling the
components is introduced as the second objective. Notably, these two objectives are partially conflicting: in most cases (though not always), adopting the best nozzles for handling the components on the heads may result in unwanted nozzle exchanges, which directly affect the workload of the heads. Therefore, finding a trade-off between the workload of the bottleneck head and the total appropriateness is of particular interest. This trade-off is illustrated by the following example: suppose that at the end of an assembly operation, one component remains and all heads are equipped with the least appropriate nozzles. The trade-off is then between exchanging the nozzle on a head (replacing the current nozzle with a more appropriate one) and handling the component with the current nozzle. In the former case, the workload increases as the result of an extra nozzle exchange, but the robotic arm can move faster. In the latter case, the workload does not change, but the robotic arm moves more slowly than in the former case. It should be noted that since the number of nozzle exchanges directly affects the workload of the heads, minimizing the workload of the bottleneck head does not necessarily imply minimizing the diversity of nozzle types.
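The arithmetic behind this trade-off can be sketched with a few invented figures; none of the numbers below come from the paper, and the two speed values merely stand in for the effect of nozzle appropriateness on arm velocity:

```python
# Hypothetical illustration of the exchange-vs-keep trade-off (all figures invented):
# one component remains; the head either keeps its poorly matched nozzle or
# visits the ANC to pick up the best-matched one.
exchange_time = 1.5          # time added to the head's workload by a nozzle exchange
dist = 30.0                  # remaining travel distance for the final pick-and-place
v_slow, v_fast = 20.0, 60.0  # assumed arm speeds with poor vs. well-matched nozzle

keep_nozzle = dist / v_slow                   # slower arm, no exchange
swap_nozzle = exchange_time + dist / v_fast   # faster arm, plus exchange time

print(keep_nozzle, swap_nozzle)
```

With these particular figures keeping the current nozzle is faster (1.5 vs. 2.0 time units), but a larger remaining distance or a cheaper exchange would reverse the conclusion, which is exactly why the two objectives must be traded off rather than optimized independently.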
We now formulate the two sub-problems that most affect the utilization of the heads, i.e., sub-problems i and ii of the aforementioned problem hierarchy, together as an integrated model such that:

• Each component type is handled by the most appropriate nozzle.
• The machine heads carry approximately equal workloads; in other words, the numbers of components assigned to the heads are approximately equalized by minimizing the load of the bottleneck head.
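As a concrete illustration of these two goals, the following sketch evaluates both objective functions for a small invented instance (all data are hypothetical and chosen only for illustration; the symbols mirror the notation of Section 3.3):

```python
# Illustrative sketch (not the authors' code): evaluating the two objectives of
# the proposed model for a toy instance with hypothetical data.
# x[t][h]: components of type t on head h; eta[h]: nozzle exchanges on head h;
# alpha[t][q]: appropriateness factor of nozzle q for component type t.

T, H, Q = 2, 2, 2                       # component types, heads, nozzle types
N = [4, 2]                              # N_t: components per type
d = [10.0, 20.0]                        # d_t: avg distance from feeder-rack center
v_bar = 100.0                           # average arm velocity
f = [0.5, 0.8]                          # f_t: pick + place time per component
tau = [1.0, 1.0]                        # tau_h: nozzle-exchange cost per head
alpha = [[9, 3],                        # alpha[t][q] on the 0,1,3,5,7,9 scale
         [1, 7]]

# A hypothetical feasible solution: spread each type evenly over the heads,
# and let each type use its best nozzle (type 0 -> nozzle 0, type 1 -> nozzle 1).
x = [[2, 2], [1, 1]]                    # x_th
z = [[[1, 1], [0, 0]],                  # z[t][q][h]
     [[0, 0], [1, 1]]]
eta = [1, 1]                            # two nozzle types per head -> 1 exchange each

def workload(h):
    """Workload of head h: exchange time plus estimated travel + handling time."""
    travel = sum((2 * d[t] / v_bar + f[t]) * x[t][h] for t in range(T))
    return tau[h] * eta[h] + travel

bottleneck = max(workload(h) for h in range(H))            # objective (1)
appropriateness = sum(alpha[t][q] * z[t][q][h]             # objective (2)
                      for t in range(T) for q in range(Q) for h in range(H))

print(bottleneck, appropriateness)
```

Here both heads carry the same estimated workload (about 3.6 time units), so the bottleneck objective is already balanced, while the appropriateness objective totals 32 because every component type is served by its best-matched nozzle at the price of one nozzle exchange per head.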
The assumptions made in formulating the concerned sub-problems are as follows:

1. There is one feeder rack, located on one side of the PCB table.
2. The number of heads is given in advance.
3. Multiple copies of each nozzle type exist, so all heads can be equipped with nozzles of the same type.
4. Each component type must be handled by exactly one nozzle on each head.
5. Each nozzle is automatically changed at the automatic nozzle changer (ANC) when it cannot grip the required component.
6. The compatibility of each component-nozzle pair is evaluated by an appropriateness factor. We consider $\alpha_{tq}$ as the appropriateness factor (degree) when nozzle q handles a component of type t. These factors could be treated as fuzzy or crisp numbers; here, it suffices to assume that they are crisp numbers taken from {0, 1, 3, 5, 7, 9}, where zero is used when a nozzle cannot manipulate a component type, and 1, 3, 5, 7 and 9 denote very low, low, medium, good and very good appropriateness degrees, respectively. These values are assigned by an expert.
7. We may be confronted with a case in which some component types cannot be handled by any available nozzle. In such a case, we ignore these component types; they are handled manually at a later stage.

3.3. Problem formulation

The following notation is used in the model formulation:

Indices
t = 1, 2,...,T — index of component types
q = 1, 2,...,Q — index of nozzles
h = 1, 2,...,H — index of heads
i = 1, 2,...,N — index of components

Parameters
$\tau_h$ — constant cost (time) of exchanging a nozzle on head h
$\alpha_{tq}$ — appropriateness factor when nozzle q handles components of type t
$N_t$ — total number of components of type t ($\sum_{t=1}^{T} N_t = N$)
$d_t$ — average distance of the components of type t on the PCB from the center of the feeder rack
$\bar{v}$ — average velocity of the robotic arm motion
$f_t$ — total time to pick a component of type t when the head is positioned above the feeder, plus the time to place the component when the head is positioned exactly above the PCB

Variables
$x_{th}$ — total number of components of type t assigned to head h
$z_{tqh}$ — 1 if component type t is handled by nozzle q on head h; 0 otherwise
$S_{qh}$ — 1 if nozzle q is assigned to head h; 0 otherwise
$\eta_h$ — total number of nozzle exchanges on head h (whose minimum value equals the number of nozzles assigned to head h minus 1)

Using the above notation, the mathematical formulation of the problem can be written as follows:

$$\min\; \max_{h} \left\{ \tau_h \eta_h + \sum_{t=1}^{T} \left( \frac{2d_t}{\bar{v}} + f_t \right) x_{th} \right\} \qquad (1)$$

$$\max\; \sum_{h=1}^{H} \sum_{q=1}^{Q} \sum_{t=1}^{T} \alpha_{tq}\, z_{tqh} \qquad (2)$$

Subject to:

$$\eta_h \ge \sum_{q=1}^{Q} S_{qh} - 1, \quad h = 1, \ldots, H \qquad (3)$$

$$\sum_{t=1}^{T} z_{tqh} \le T \cdot S_{qh}, \quad h = 1, \ldots, H;\; q = 1, \ldots, Q \qquad (4)$$

$$\sum_{t=1}^{T} z_{tqh} \ge S_{qh}, \quad h = 1, \ldots, H;\; q = 1, \ldots, Q \qquad (5)$$

$$x_{th} \le N_t \cdot \sum_{q=1}^{Q} z_{tqh}, \quad t = 1, \ldots, T;\; h = 1, \ldots, H \qquad (6)$$
$$x_{th} \ge \sum_{q=1}^{Q} z_{tqh}, \quad t = 1, \ldots, T;\; h = 1, \ldots, H \qquad (7)$$

$$\sum_{q=1}^{Q} z_{tqh} \le 1, \quad t = 1, \ldots, T;\; h = 1, \ldots, H \qquad (8)$$

$$\sum_{h=1}^{H} x_{th} = N_t, \quad t = 1, \ldots, T \qquad (9)$$

$$x_{th} \ge 0 \text{ and integer}, \quad t = 1, \ldots, T;\; h = 1, \ldots, H \qquad (10)$$

$$\eta_h \ge 0 \text{ and integer}, \quad h = 1, \ldots, H \qquad (11)$$

$$z_{tqh} \in \{0, 1\}, \quad t = 1, \ldots, T;\; q = 1, \ldots, Q;\; h = 1, \ldots, H \qquad (12)$$

$$S_{qh} \in \{0, 1\}, \quad q = 1, \ldots, Q;\; h = 1, \ldots, H \qquad (13)$$

Objective function (1) minimizes the workload of the bottleneck head. The workload of each head consists of two terms: the first is the time needed for nozzle exchanges, and the second is an estimate of the time the head travels above the PCB and the feeder rack. Since the feeder assignment and placement sequencing sub-problems are solved after the ones considered here, the exact moving path of the robotic arm is not yet known; therefore, an estimate of the real traveling time is used, based on the average distance of the locations of each component type on the PCB from the center of the feeder rack. The second objective maximizes the appropriateness function of using suitable nozzles for the components. Constraints (3) express the relation between the number of nozzles assigned to head h and the number of nozzle exchanges. The following implications explain why constraints (4) and (5) have been introduced:

$$\text{if } \sum_{t=1}^{T} z_{tqh} = 0 \text{ then } S_{qh} = 0; \quad \text{if } \sum_{t=1}^{T} z_{tqh} > 0 \text{ then } S_{qh} = 1 \;\Longrightarrow\; \sum_{t=1}^{T} z_{tqh} \le M \cdot S_{qh}; \quad \sum_{t=1}^{T} z_{tqh} \ge S_{qh}$$

Notably, when all component types assigned to a head can be handled by one nozzle type, the maximum value of M equals T; hence M has been replaced with T in constraints (4).

Constraints (6) and (7) state that only one nozzle must be selected to handle each component type assigned to each head. The following implications describe how they have been formulated:

$$\text{if } x_{th} = 0 \text{ then } \sum_{q=1}^{Q} z_{tqh} = 0; \quad \text{if } x_{th} > 0 \text{ then } \sum_{q=1}^{Q} z_{tqh} = 1 \;\Longrightarrow\; x_{th} \le M' \cdot \sum_{q=1}^{Q} z_{tqh}; \quad x_{th} \ge \sum_{q=1}^{Q} z_{tqh}$$

If all components of type t are allocated to head h, the maximum value of M′ can be replaced by $N_t$. Constraints (8) ensure that if a component type is assigned to a head, it must be handled by only one nozzle. Constraints (9) guarantee the dispersion of all component types among the heads. Constraints (10) and (11) state the integrality and non-negativity of the variables $x_{th}$ and $\eta_h$. Finally, constraints (12) and (13) state that the $z_{tqh}$ and $S_{qh}$ variables are binary.

3.4. Linearization

The first objective can simply be linearized as follows. Let

$$\max_{h} \left\{ \tau_h \eta_h + \sum_{t=1}^{T} \left( \frac{2d_t}{\bar{v}} + f_t \right) x_{th} \right\} = \beta.$$

Hence objective (1) can be modified to min β, adding the following constraints to the model:

$$\beta \ge \tau_h \eta_h + \sum_{t=1}^{T} \left( \frac{2d_t}{\bar{v}} + f_t \right) x_{th} \quad \text{for } h = 1, \ldots, H.$$

Therefore, the linearized model can be written as:

$$\min\; \beta \qquad (15)$$

$$\max\; \sum_{h=1}^{H} \sum_{q=1}^{Q} \sum_{t=1}^{T} \alpha_{tq}\, z_{tqh} \qquad (16)$$

s.t.

$$\beta \ge \tau_h \eta_h + \sum_{t=1}^{T} \left( \frac{2d_t}{\bar{v}} + f_t \right) x_{th}, \quad h = 1, \ldots, H \qquad (17)$$

Constraints (3)–(13).

3.5. Complexity of the mathematical model

Our mathematical model is a mixed-integer programming model. Part of the first objective function (excluding the nozzle-exchange term $\tau_h \eta_h$), together with constraints (9) and (10), composes a minimax-type assignment integer programming model. The minimax-type assignment problem has already been proved NP-hard by Yu and Kouvelis [24]. Notably, the data structure of our model is similar to, but not exactly the same as, the minimax assignment problem. Nevertheless, the proposed model is more difficult than the minimax assignment problem: it has a set of additional constraints and, while the decision variables in the minimax assignment problem are restricted to be binary, the $x_{th}$ variables in the proposed model are non-negative integers. Consequently, it is necessary to develop a heuristic to solve the problem, especially for medium and large-sized instances. In Section 5, we propose a metaheuristic, named Multi-Objective Particle Swarm Optimization (MOPSO), to solve the proposed multi-objective problem.

4. The augmented ε-constraint method

A general multi-objective optimization problem consists of a number of objectives to be optimized simultaneously over the feasible region. The general formulation of multi-objective optimization problems can be written in the following form:

$$\max\,(f_1(x), \ldots, f_p(x)) \quad \text{subject to: } x \in S \qquad (14)$$

In this formulation, $f_i(x)$ denotes the ith objective function and S indicates the feasible space. The ultimate goal is the simultaneous maximization of the given objective functions. When, as in most cases, some of the objective functions conflict with each other, there is not exactly one solution but many alternative solutions. Such potential solutions, which cannot improve all the objective functions simultaneously, are called efficient (Pareto optimal) solutions.
A feasible solution x is called efficient if there does not exist
another feasible solution say x′ such that fi (x′ )≥fi (x) for all values
of i with at least one strict inequality. In other words, a solution x is called Pareto optimal if there is no other x′ ≠ x that increases some objective functions without degrading at least one other objective function. Under this definition, we usually find several
efficient solutions estimating the trade-off surface. In this sense,
the search for an optimal solution has fundamentally changed from
what we see in the case of single-objective problems. However,
users practically need only one solution from the set of efficient
solutions. According to Miettinen [25], the multi-objective solution approaches can be classified into the four categories based
on the phase in which the decision maker involves in the decision making process: The first one does not use any preference
information (called no-preference). These methods solve a problem and give a solution directly to the decision maker. The second
one is to find all possible efficient solutions and then using the
decision maker’s preferences to determine the most suitable one
(called posteriori methods). The third approach is to incorporate
the preference information before the optimization process often
in terms of objectives’ weights resulting in only one solution at the
end (called priori methods). The fourth approach (called interactive methods) is to hybridize the second and third ones in which
the decision maker’s preferences is periodically used to refine the
obtained efficient solutions leading to guide the search space more
efficiently. In general, the second one, i.e., the posteriori approach is
mostly preferred by the researchers and practitioners since it is less
subjective than the others. By using posteriori methods, the decision maker is provided by a set of Pareto optimal solutions and the
most suitable one is finally selected based on her/his preferences.
Here, the two most popular posteriori methods, i.e., the weighted
sum and ε-constraint methods are described briefly.
In the weighted-sum method, all objectives are aggregated into a single objective using a weight vector. Although the weighted-sum method is simple and easy to use, it has two major drawbacks. First, selecting the weights is difficult because of scaling problems: the objectives usually have different magnitudes, which biases the search for trade-off solutions. Second, the performance of the method depends heavily on the shape of the Pareto-optimal frontier, so it cannot find all the optimal solutions of problems with a non-convex Pareto-optimal frontier. To overcome these difficulties, the ε-constraint method has been introduced, in which only one objective is optimized while the others are moved into the constraints. The ε-constraint method for solving model (14) can be written as follows:
max f1(x)
s.t.
f2(x) ≥ e2
f3(x) ≥ e3    (18)
. . .
fp(x) ≥ ep
x ∈ S
In this method, by systematically varying the RHS values of the constrained objective functions (the ei values) and solving the respective single-objective models, the efficient solutions can be obtained effectively. Although the ε-constraint method does not suffer from the difficulties of the weighted-sum method, some ambiguities about it remain. In order to resolve these ambiguities, Mavrotas [26] recently proposed a novel version of the conventional ε-constraint method, the augmented ε-constraint method (hereafter called AUGMECON), which is discussed in more detail below.
To find the most preferred efficient solution of the proposed bi-objective model, we apply the augmented ε-constraint method. Although the ε-constraint method has several advantages over the other a posteriori methods, three points about its implementation should be taken into account:

a. the estimation of the range of the objective functions over the efficient set;
b. the guarantee of efficiency of the obtained solutions;
c. the increased solution time for problems with more than two objectives.
In order to tackle these issues, Mavrotas [26] presents a novel version of the conventional ε-constraint method, the augmented ε-constraint (AUGMECON) method. Here we take a closer look at this method to see how it can be implemented in practice. The first step in applying the ε-constraint method is to determine the range of the objective functions that are used as constraints. To do so, we should calculate the best (ideal) and worst (nadir) values of the objective functions over the feasible space. The best value can be calculated by optimizing each objective individually over the feasible space, but the worst value is not easily attainable. Usually, the worst value is estimated from the payoff table (a table comprising the results of the individual optimizations of the objective functions); the worst value of each objective function is then approximated by the minimum value of the corresponding column.
In the case of alternative optima, the solutions obtained from the individual optimization of the objective functions may not be efficient but only weakly efficient. To overcome this ambiguity, Mavrotas [26] proposes the use of lexicographic optimization for each objective function when constructing the payoff table, ensuring that only Pareto-optimal solutions are produced. Lexicographic optimization is applied as follows. We optimize the first objective function, obtaining max f1 = z1*. Then, we optimize the second objective function after adding the constraint f1 ≥ z1* in order to keep the optimal value of the first optimization. Assume that we obtain max f2 = z2*. Subsequently, we optimize the third objective function after adding the constraints f1 ≥ z1* and f2 ≥ z2* in order to keep the previous optimal values, and so on, until all the objective functions have been processed.
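The lexicographic construction described above can be sketched in a few lines of Python. The paper itself uses GAMS/CPLEX; here, as an illustrative assumption, a brute-force search over a made-up toy feasible set stands in for the solver:

```python
# Illustrative sketch (not the paper's GAMS code): building a payoff table by
# lexicographic optimization over a small, explicitly enumerated feasible set.
# The feasible set and both objectives below are made-up toy data.

def lexicographic_payoff(solutions, objectives):
    """Return the payoff table: row i holds the objective vector of a
    lexicographic optimum that maximizes objective i first."""
    payoff = []
    for i in range(len(objectives)):
        # Optimize objective i first, then break ties with the remaining objectives.
        order = [i] + [j for j in range(len(objectives)) if j != i]
        best = max(solutions, key=lambda x: tuple(objectives[j](x) for j in order))
        payoff.append([f(best) for f in objectives])
    return payoff

# Toy feasible set: integer points (x1, x2) with x1 + x2 <= 4.
S = [(a, b) for a in range(5) for b in range(5) if a + b <= 4]
f1 = lambda x: x[0]             # maximize x1
f2 = lambda x: 2 * x[1] - x[0]  # maximize 2*x2 - x1

table = lexicographic_payoff(S, (f1, f2))
print(table)  # each row is a non-dominated point of the toy problem
```

With a real solver, the `max` over the enumerated set would be replaced by re-solving the single-objective model with the added constraints f1 ≥ z1*, f2 ≥ z2*, and so on.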
The second point is that the optimal solution of the conventional ε-constraint method is guaranteed to be efficient only if all the (p − 1) objective-function constraints are binding; otherwise, if there are alternative optima (that may improve at least one of the non-binding constraints corresponding to an objective function), the obtained optimal solution is not in fact efficient but only weakly efficient [26]. To overcome this ambiguity, Mavrotas [26] proposes transforming the objective-function constraints into equalities by introducing slack or surplus variables. At the same time, these slack or surplus variables are used as a second term (with lower priority) in the objective function to force the model to produce only efficient solutions. In this way, the new problem (the AUGMECON model) can be written as follows:
max (f1(x) + eps × (s2 + s3 + · · · + sp))
s.t.
f2(x) − s2 = e2
f3(x) − s3 = e3    (19)
· · ·
fp(x) − sp = ep
x ∈ S and si ∈ R+
Table 1
The parameters of the sample problem.

Number of heads: 3; Number of component types: 10; Number of nozzles: 4

Table 4
Values of φ(t, q).

Table 2
The values of d(t), f(t) and N(t).

t      1     2     3     4     5     6     7     8     9     10
d(t)   7     6     4     7     5     4     4     7     3     2
f(t)   0.1   0.2   0.1   0.5   0.2   0.3   0.3   0.9   0.9   0.8
N(t)   3     4     6     2     2     2     3     4     9     7
where eps is a small number (usually between 10^−6 and 10^−3). Mavrotas [26] proves that AUGMECON produces only efficient solutions, i.e., it avoids generating weakly efficient ones. In order to avoid any scaling problems, Mavrotas [26] recommends replacing each si in the second term of the objective function by si/ri, where ri is the range of the ith objective function obtained from the payoff table. Thus, the final version of the augmented ε-constraint method is written as follows:
max (f1(x) + eps × (s2/r2 + s3/r3 + · · · + sp/rp))
s.t.
f2(x) − s2 = e2
f3(x) − s3 = e3    (20)
· · ·
fp(x) − sp = ep
x ∈ S and si ∈ R+
The third point concerns the additional computations of the conventional ε-constraint method when the problem becomes infeasible. Mavrotas [26] adds an innovative feature to the algorithm, namely an early exit from the nested loops when the problem becomes infeasible. He states that this can accelerate the algorithm significantly when there are several (more than two) objective functions.

Practically, the AUGMECON method is implemented as follows. From the payoff table we obtain the ranges of the (p − 1) objective functions that are going to be used as constraints. Then we divide the range of the ith objective function into qi equal intervals using (qi − 1) intermediate equidistant grid points. Thus, we obtain in total (qi + 1) grid points that are used to vary the RHS of the ith objective function (ei) parametrically. Therefore, the total number of single-objective models (runs), each of which certainly leads to the generation of an efficient solution, becomes (q2 + 1) × (q3 + 1) × · · · × (qp + 1).
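As a hedged illustration of this grid-based loop for the bi-objective case (p = 2), the following Python sketch replaces the MIP solver with brute-force enumeration over a toy feasible set; eps, the grid size q2 and both objectives are illustrative assumptions, not the paper's data:

```python
# Illustrative AUGMECON sketch for a bi-objective problem (p = 2), using
# brute-force enumeration in place of a MIP solver; toy data throughout.

EPS = 1e-3  # small multiplier for the surplus term (usually 1e-6 .. 1e-3)

def augmecon_biobjective(solutions, f1, f2, q2):
    # Payoff-table range of f2: best value, and worst value taken from the
    # row in which f1 is individually optimized.
    f1_best = max(f1(x) for x in solutions)
    f2_worst = max(f2(x) for x in solutions if f1(x) == f1_best)
    f2_best = max(f2(x) for x in solutions)
    r2 = f2_best - f2_worst                       # range of objective 2
    front = []
    for k in range(q2 + 1):                       # q2 intervals -> q2+1 grid points
        e2 = f2_worst + k * r2 / q2               # parametric RHS for f2
        feas = [x for x in solutions if f2(x) >= e2]
        if not feas:
            break                                 # early exit on infeasibility
        # maximize f1 plus the scaled surplus s2 = f2(x) - e2 (ensures efficiency)
        best = max(feas, key=lambda x: f1(x) + EPS * (f2(x) - e2) / r2)
        if (f1(best), f2(best)) not in front:
            front.append((f1(best), f2(best)))
    return front

S = [(a, b) for a in range(5) for b in range(5) if a + b <= 4]
front = augmecon_biobjective(S, lambda x: x[0], lambda x: 2 * x[1] - x[0], q2=4)
print(front)
```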
4.1. An illustrative example

In this section, to show the applicability and usefulness of the proposed model and solution method, we provide an illustrative example for which we generate a set of 5 efficient solutions using the augmented ε-constraint method. The inputs of the sample problem are summarized in Tables 1–4.
Table 3
The values of τh.

h     1   2   3
τh    1   1   1
The first step in applying AUGMECON is to construct the payoff table using lexicographic optimization, as follows. First, we optimize the first objective function over the feasible region, obtaining the optimal solution (37.57, 0) in the objective space (i.e., point x1* in the decision space). Then, the second objective is optimized with the additional constraint β ≤ 37.57, which yields the optimal solution (37.57, 148) in the objective space (the first row of the payoff table); this is a non-dominated solution dominating the previous one.

To construct the second row of the payoff table, we first optimize the second objective over the feasible region, obtaining the optimal solution (40.94, 180) in the objective space. Then, the first objective is optimized with the additional constraint Σ_{h=1}^{H} Σ_{q=1}^{Q} Σ_{t=1}^{T} φtq · ztqh ≥ 180, which returns the same optimal solution and thus confirms that it is non-dominated. The resulting payoff table is given in Table 5.
After the construction of the payoff table, we divide the range of the second objective function into four equal intervals and use the resulting five grid points as the values of e2. Hence, the vector of e2 values is e2 = (148, 156, 164, 172, 180). Now, for each component of e2, the following model is solved:
min β − eps × (s2/r2)
s.t.
Σ_{h=1}^{H} Σ_{q=1}^{Q} Σ_{t=1}^{T} φtq · ztqh − s2 = e2    (21)
Constraint (16)
Constraints (3)–(13)
s2 ∈ R+
where r2 denotes the range of the second objective function from the payoff table, which is equal to 32. In this manner, for each given value of e2, the optimal solution of the above model certainly generates an efficient solution. Table 6 shows the resulting non-dominated solutions.
It is noteworthy that the early-exit option is not required for our problem: since it is bi-objective and we vary only one RHS value, there are no nested loops in this case. Furthermore, the augmented ε-constraint version of the proposed model was coded in GAMS, and the CPLEX 7.5 solver was used for solving the corresponding single-objective models on a 2.0 GHz Dual Core CPU with 1 GB of RAM. The above
Table 5
Payoff table.

        Z1       Z2
x1*     37.57    148.00
x2*     40.94    180.00
Table 6
Non-dominated solutions found by AUGMECON.

Z1 (min)    Z2 (max)
37.57       148.00
37.74       159.00
38.27       168.00
39.62       174.00
40.94       180.00
sample problem was solved within 4.12 s of CPU time. The output of the AUGMECON approach is a good estimate of the Pareto front of the problem under consideration, comprising several Pareto-optimal solutions. Since none of these solutions is absolutely better than the others, the decision maker chooses the most preferred compromise solution among them based on his/her preferences.
5. The proposed MOPSO algorithm

Particle Swarm Optimization (PSO) is a stochastic, population-based algorithm inspired by the flocking of birds, the schooling of fish, and the swarming of certain kinds of insects in search of food and avoidance of danger (Eberhart and Kennedy [27]). Each particle in the population represents a potential solution to the problem at hand. Although PSO was originally designed for continuous problems, we adapt it here to our problem, which has several discrete variables. While PSO is sometimes classified as an evolutionary algorithm (EA), in its basic form PSO differs from other EAs in some important respects (Coello et al. [28]). Most notably, PSO does not use the concept of replacement of individuals in the population, which is the case in other EA approaches. In Genetic Algorithms, for example, the members with lower fitness values are eliminated over successive iterations to raise the overall quality of the population. In PSO, instead, a cooperative approach is used in which information about good solutions is shared among all members of the population (Engelbrecht [29]). A particle that has low fitness in a given iteration may become a better solution later by moving toward the best solutions its neighbors have seen. Each particle is commonly represented by three D-dimensional vectors: a position vector Xi = (xi1, xi2, . . ., xiD), a velocity vector Vi = (vi1, vi2, . . ., viD), and a best personal position vector Pbesti = (pbesti1, pbesti2, . . ., pbestiD). Particles also have knowledge of the best visited global position gbesti = (gbesti,1, gbesti,2, . . ., gbesti,D). The basic PSO implementation uses only two equations: one for updating velocity and one for updating position. The velocity and position of particle i in dimension d at iteration t + 1 are given by Eqs. (22) and (23):
vi,d(t + 1) = w · vi,d(t) + c1 · r1 · (Pbesti,d(t) − xi,d(t)) + c2 · r2 · (gbesti,d(t) − xi,d(t))    (22)

xi,d(t + 1) = xi,d(t) + vi,d(t + 1)    (23)

The right-hand side of Eq. (22) includes three terms. The first term is the current velocity multiplied by an inertia factor w. The use of the inertia factor to control the velocity has proved highly effective for PSO: a suitable inertia factor balances global and local exploration, although its determination remains an open problem. Shi and Eberhart [30] concluded that the inertia factor is crucial for the convergence of PSO; a larger inertia factor facilitates global exploration (searching new areas), while a smaller one tends to facilitate local exploitation. The second term represents what the particle itself has learned so far and is sometimes referred to as the "cognitive" term: the difference between this "particle's best" (for dimension d) and the current value of the particle is multiplied by a cognitive coefficient c1 and a randomly selected value r1 in the range [0,1]. The cognitive component represents the best performance of a particle up to iteration t. In the third term, sometimes referred to as the "social" term, the difference between the group best (for dimension d) and the current value of the particle is multiplied by a social coefficient c2 and a random number r2 in the range [0,1]. The social factor represents the best solution the swarm has seen up to iteration t. The cognitive and social factors c1 and c2 (usually held static) are sometimes referred to as acceleration coefficients and can be weighted to give relatively more importance to either the social or the cognitive term. It has been observed empirically that values in the range [0.5, 2.5] generally work best (Engelbrecht [29]); Coello et al. [28] used values in the range [1.5, 2.5] in their study.

In order to apply the PSO strategy to multi-objective optimization problems, the original scheme obviously has to be modified. The solution set of a problem with multiple objectives does not consist of a single optimal solution (as in global optimization); instead, we aim to find a set of trade-off solutions (the efficient or Pareto-optimal set). In single-objective PSO, the best solution found so far by the swarm and the best solution found so far by particle i itself are updated at each iteration. With multiple objectives, however, the notion of "best" is no longer well defined. Thus, the main challenge in MOPSO is to pick a suitable personal guide (Pbest) and global guide (gbest) to move the particles through the search space. In general, a good MOPSO method must obtain solutions with (a) good convergence and (b) good diversity and spread along the Pareto-optimal frontier. Most research in the area of MOPSO has concentrated on the selection of gbest for each individual. The first multi-objective PSO was presented by Moore and Chapman [31]; in their approach, each particle keeps all non-dominated solutions it has experienced so far. Mostaghim and Teich [32] provided an alternative approach (the sigma method) to push the particles toward the frontier: at each iteration, sigma values are calculated for both archive and swarm, and a gbest for every individual is chosen using the minimal sigma distance to it. Jiang et al. [33] proposed an improved PSO algorithm in which several sub-swarms are used to enhance both exploration and exploitation by sharing the information gained by each sub-swarm. Niu et al. [34] proposed a cooperative PSO in which the population consists of a master swarm and several slave swarms: each slave swarm conducts a single-objective particle swarm search to maintain diversity, while the master swarm is improved through its own experience and those of the slave swarms. The general scheme of the proposed MOPSO is depicted as pseudo-code in Fig. 2, and the details of the algorithm are illustrated in Fig. 3.

5.1. Solution representation

Each particle consists of four matrices as follows:
[X]T×H involves non-negative integer genes determining the number of components of type t that are assigned to head h, for all heads and component types. The entries of this matrix lie in [0, max Nt].
[M]1×H consists of integer genes signifying the number of nozzle exchanges that occur on head h, for all heads. The entries of this matrix are non-negative integers.
[Z]T×Q×H includes genes determining the type of nozzle which handles the components of type t on head h; for all component
Input: N   // number of particles
Output: A  // Pareto-optimal set

Initialization:
• Initialize a population of particles
• Initialize the position of each particle randomly
• Initialize the velocity of each particle using Eqs. (24) and (25)
• Initialize the personal Pareto archive for each particle
• Initialize the global Pareto archive
While a given maximal number of iterations is not reached
• Pareto non-dominated sorting
• Update the global Pareto archive
• Determine the best global position for each particle using the sigma method explained in Section 5.3.1
• Determine the best personal position for each particle using the procedure explained in Section 5.3.2
• Update the velocity of each particle using Eq. (22)
• Update the positions of the discrete and binary components of the particles using the procedures explained in Sections 5.4.1 and 5.4.2, respectively
End while
Report the non-dominated solutions of the global Pareto archive as the result

Fig. 2. The pseudo-code of the proposed MOPSO.
Fig. 3. The flowchart of the MOPSO.
Fig. 4. Representation of particle i.

types, nozzles and heads. The entries of this matrix are binary (0 or 1).
[S]Q×H consists of genes determining which nozzles are assigned to head h, for all nozzles and heads. The entries of this matrix are also binary. Fig. 4 shows the structure of a sample particle i.
5.2. Generating initial velocities

As mentioned in the previous section, the components of each solution are integer numbers, but continuous values are assigned to the velocities. The velocities of a particle form a matrix of the same size as the solution matrix. Initial velocities are set as follows:

vid = vmin + (vmax − vmin) · r1    (24)

where vmin = −4, vmax = 4, and r1 is a uniform random variable on [0,1].
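A minimal sketch of this initialization rule (Eq. (24)), assuming a toy matrix shape and Python's standard random generator in place of the authors' MATLAB implementation:

```python
# Sketch of the velocity initialization of Eq. (24) with the bounds used in the
# paper (v_min = -4, v_max = 4); the 3x4 matrix shape is arbitrary toy data.
import random

V_MIN, V_MAX = -4.0, 4.0

def init_velocity(rows, cols, rng=random.Random(0)):
    # v_id = v_min + (v_max - v_min) * r1, with r1 uniform on [0, 1]
    return [[V_MIN + (V_MAX - V_MIN) * rng.random() for _ in range(cols)]
            for _ in range(rows)]

V = init_velocity(3, 4)
assert all(V_MIN <= v <= V_MAX for row in V for v in row)
```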
After updating the velocities in each iteration, a piece-wise linear function is used to force the velocity values to remain inside the interval [vmin, vmax]. Whenever a velocity is computed, the following piece-wise function, whose range is the closed interval [vmin, vmax], is used to restrict it to the minimum and maximum values:

vid = vmax if vid > vmax;  vid if vid ∈ [vmin, vmax];  0 if vid < vmin    (25)

5.3. Leader selection

5.3.1. gbest selection

Among the proposed methods for selecting global leaders for the particles, we apply the Sigma method, which chooses the guide particle based on the similarity of the angular positions, in objective space, of an archive member and a particle in the swarm. This method was proposed by Mostaghim and Teich [32] and was shown to promote convergence and diversity, although the authors noted that a larger swarm size is necessary in order to reach convergence with the Sigma method. The Sigma method chooses the guide particle by assigning a value σ to the particles in the archive and the swarm. For each particle, σ is defined by the gradient of the line connecting the location of the particle to the origin of the objective space; i.e., when two particles lie on the same line, their σ values are equal. For each particle in the swarm, the algorithm selects as guide the archive member whose σ is closest to that of the swarm particle. This allows the particles to move directly toward the Pareto-optimal frontier by selecting the guide particle that orients each particle with respect to the origin in objective space. For a 2-dimensional objective space, σ is defined as follows:

σ = (f1^2 − f2^2) / (f1^2 + f2^2)    (26)

For a problem with m objective functions, σ is a vector whose elements combine two coordinates at a time, in terms of the previous definition, as below:

σ = (f1^2 − f2^2, f2^2 − f3^2, . . .) / (f1^2 + f2^2 + f3^2 + · · ·)    (27)
Using the basic idea of the Sigma method in the objective space, the best global guide gbesti among the archive members for particle i of the population is found as follows. In the first step, the value σj is assigned to each particle j in the archive. In the second step, σi is calculated for particle i of the population. Then, the archive member j whose σj has the minimum distance to σi is selected as the best global leader for particle i during that iteration. In the case of a two-dimensional objective space, the difference between the sigma values of two particles indicates the distance between them; in the case of an m-dimensional
objective space, the Euclidean distance between the sigma vectors is considered as the distance between the particles.
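The sigma-based gbest selection can be sketched as follows for two objectives; the archive and particle objective values are made-up toy data:

```python
# Hedged sketch of sigma-method gbest selection for two objectives, following
# Eq. (26): sigma = (f1^2 - f2^2) / (f1^2 + f2^2). All values are toy data.

def sigma2(f1, f2):
    return (f1 ** 2 - f2 ** 2) / (f1 ** 2 + f2 ** 2)

def select_gbest(particle_objs, archive_objs):
    """Return the archive member whose sigma value is closest to the particle's."""
    s_p = sigma2(*particle_objs)
    return min(archive_objs, key=lambda a: abs(sigma2(*a) - s_p))

archive = [(1.0, 4.0), (2.0, 3.0), (3.0, 2.0), (4.0, 1.0)]
print(select_gbest((2.5, 2.4), archive))  # nearest sigma on the toy archive
```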
5.3.2. Pbest selection

After updating the positions of the particles in the swarm, for each particle, if the new position dominates the previous one, the new position is taken as Pbest for the next iteration; otherwise, the previous position is kept as Pbest.
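This dominance-based Pbest rule can be sketched as below; for simplicity both objectives are treated as maximized, which is an assumption rather than the paper's mixed min/max setting:

```python
# Sketch of the Pbest rule above: keep the new position only if it dominates the
# old one. Both objectives are assumed maximized; vectors are toy data.

def dominates(a, b):
    """True if objective vector a dominates b (>= everywhere, > somewhere)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def update_pbest(pbest_objs, new_objs):
    return new_objs if dominates(new_objs, pbest_objs) else pbest_objs

assert update_pbest((1.0, 2.0), (1.5, 2.0)) == (1.5, 2.0)  # new dominates: replace
assert update_pbest((1.0, 2.0), (0.5, 3.0)) == (1.0, 2.0)  # incomparable: keep old
```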
5.4. Position updating

5.4.1. Updating the positions of the components of matrices [Z]T×Q×H and [S]Q×H

As mentioned before, the components of matrices Z and S are binary. Although the original version of PSO was designed for continuous problems, Kennedy and Eberhart [35] proposed a discrete binary version of PSO. Their algorithm uses the same velocity-updating equation as in (22), but the values of X are now discrete and binary. For position updating, the velocity is first transformed into the [0,1] interval using the sigmoid function:

sig(vid(t)) = 1 / (1 + e^(−vid(t)))    (28)

The new position of each binary component is then obtained using Eq. (29):

xid(t + 1) = 1 if rid < sig(vid(t + 1));  0 otherwise    (29)

where rid is a uniform random number in the range [0,1].

5.4.2. Updating the position of integer components of matrices [X]T×H and [M]1×H

Eq. (23) is applied for updating the position of the components of these matrices, and the results are rounded to the nearest integer.

5.5. Updating the global Pareto archive

The new values of the components of the matrices [Z]T×Q×H, [S]Q×H, [X]T×H and [M]1×H determine the new positions of the particles. These positions correspond to the solutions of the problem in the current iteration. Among these solutions, the efficient ones are selected and added to the global Pareto archive. The archive is then updated based on dominance concepts, and the updated archive is reported as the set of efficient solutions found so far.

5.6. Constraint handling method

As the problem we are dealing with is a constrained one, we should convert it to an unconstrained problem. The most common method used in evolutionary algorithms for this purpose is the exterior penalty approach. Consider the following constrained optimization problem:

gi(x) ≤ 0,  i = 1, . . ., n
hj(x) = 0,  j = 1, . . ., p    (30)

The general formulation of the exterior penalty function is:

F(x) = f(x) + [ Σ_{i=1}^{n} ri · Gi + Σ_{j=1}^{p} cj · Lj ]    (31)

where F(x) is the new objective function to be optimized, Gi and Lj are functions of the constraints gi(x) and hj(x), respectively, and ri and cj are positive constants called "penalty factors". The most common forms of Gi and Lj are Gi = max[0, gi(x)]^β and Lj = |hj(x)|^γ, where β and γ are normally set to 1 or 2.

The first step in applying this approach to our problem is to convert the inequality and equality constraints to the forms gi(x) ≤ 0 and hj(x) = 0, respectively. Then we define the new objective functions as follows:

f1 = β + [ Σ_{i=1}^{n} ri · Gi + Σ_{j=1}^{p} cj · Lj ]    (32)

f2 = Σ_{h=1}^{H} Σ_{q=1}^{Q} Σ_{t=1}^{T} φtq · ztqh − [ Σ_{i=1}^{n} r′i · Gi + Σ_{j=1}^{p} c′j · Lj ]    (33)

where ri, cj, r′i and c′j are penalty factors. In this manner, f1 and f2 are used as the updated objective functions in the leader selection and the evaluation of particles in the proposed MOPSO algorithm.

6. Numerical experiments

In order to validate the efficiency of the proposed MOPSO algorithm, it is implemented on a number of randomly generated test problems. As the performance of a metaheuristic is highly dependent on the values of its input parameters, we have tried to select optimal parameter values for each test problem using the Taguchi Method. Each parameter has been tested at four levels, and the L16 array is selected as the orthogonal array. In order to reduce random error, each setting is run 10 times and the average results are used in the analysis. The algorithm was implemented in MATLAB 7.3.0, and all numerical examples were tested on the same computer on which AUGMECON was run.

6.1. Test problems

The numerical experiments contain three classes of test problems. Each class includes 10 test problems of different sizes generated according to Table 7. The number of component types, the number of nozzles and the number of heads are generated from the uniform distributions summarized in Table 8. Furthermore, the values of the appropriateness factors are selected from {1, 3, 5, 7, 9} with equal probabilities of 0.2.

6.2. Comparison metrics

It is well accepted that multi-objective optimization problems require multiple, but evenly distributed, solutions forming a Pareto-optimal frontier to be useful to human decision makers. There are a number of metrics available to compare the performance of different algorithms (see, for example, Rahimi-Vahed et al. [36]). We use the following metrics to compare the PSO algorithm at different levels of its parameters:

• Spacing metric: This metric measures the uniformity of the spread of the solution set. The metric used in this paper is defined as follows:

S = [ (1/(n − 1)) · Σ_{i=1}^{n} (d̄ − di)^2 ]^{1/2}    (34)

where n and di denote the number of efficient solutions found by the algorithm, and the Euclidean distance between the solution i
Table 7
The structure of test problems.

Class   Test problem   No. of component types   No. of nozzles   No. of heads
I       I-1            11                       19               3
        I-2            28                       11               3
        I-3            40                       8                3
        I-4            19                       16               4
        I-5            28                       11               4
        I-6            33                       18               3
        I-7            48                       14               3
        I-8            45                       16               3
        I-9            43                       19               4
        I-10           34                       19               5
II      II-1           87                       21               6
        II-2           51                       45               5
        II-3           60                       29               7
        II-4           84                       26               6
        II-5           71                       35               6
        II-6           73                       45               5
        II-7           72                       40               6
        II-8           76                       33               7
        II-9           97                       31               6
        II-10          92                       41               6
III     III-1          150                      65               9
        III-2          129                      59               8
        III-3          134                      61               8
        III-4          182                      57               8
        III-5          153                      62               9
        III-6          134                      64               10
        III-7          164                      61               10
        III-8          166                      64               10
        III-9          190                      56               10
        III-10         182                      68               9
and the closest solution that belongs to the efficient set, respectively, and d̄ is the average of the di values.
• Diversification metric: This metric measures the spread of the solution set. It is defined as:

D = Σ_{i=1}^{n} max ‖xi − yi‖    (35)

where ‖xi − yi‖ is the Euclidean distance between two efficient solutions xi and yi.
• The number of Pareto-optimal solutions: This metric counts the Pareto-optimal (efficient) solutions found by the algorithm.
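A hedged Python sketch of the spacing metric (Eq. (34)) and of one plausible reading of the diversification metric (Eq. (35), taking the max over each solution's distances to the others), evaluated on a made-up toy front:

```python
# Sketch of the spacing and diversification metrics on a toy front; d_i is each
# point's Euclidean distance to its nearest neighbour in the efficient set.
from math import dist, sqrt

def spacing(front):
    d = [min(dist(p, q) for q in front if q != p) for p in front]
    dbar = sum(d) / len(d)
    return sqrt(sum((dbar - di) ** 2 for di in d) / (len(d) - 1))

def diversification(front):
    # one reading of Eq. (35): for each solution, its farthest distance to
    # another efficient solution, summed over the front
    return sum(max(dist(p, q) for q in front if q != p) for p in front)

front = [(0.0, 4.0), (1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (4.0, 0.0)]
print(spacing(front), diversification(front))
```

On this evenly spaced toy front the spacing metric is (numerically) zero, which matches its intent: perfectly uniform gaps.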
Table 9
PSO parameters and their levels.

Parameter              Code   Level 1   Level 2   Level 3   Level 4
Cognitive factor       C1     0.5       1         1.5       2
Social factor          C2     0.5       1         1.5       2
Swarm size             N      20        50        100       200
Number of iterations   Itr    200       500       1000      2000
Inertia factor         w      0.1       0.2       0.3       0.4
contributions are first listed and then described in a systematic and
analytical manner.
Some of Taguchi’s orthogonal array is related to the conventional factorial designs. The conventional method involves one
variable at a time, while keeping the other parameters at fixed
levels. This method is generally time consuming and requires a considerable number of experiments to be performed. For example,
if there are k factors with l levels defined for each of the factors,
then it is necessary to carry out lk number of experiments. On
the other hand, the matrix experiments using orthogonal arrays
enable to study the effect of several factors simultaneously with
reduced number of experiments. Depending on the number of process parameters and setting levels, a suitable array is selected (Ross
[39]). Each column of the orthogonal array designates a process
parameter and its setting levels in each experiment and each row
designates an experiment with the level of different process parameters in that experiment. Taguchi suggests S/N ratio, which is a
logarithmic function of desired response, serves as the objective
function for optimization (Ross [39]).
In order to optimize the multiple responses, Taguchi design is
not applied directly, as each performance characteristic may not
have the same measurement unit. Hence, the evaluation of various characteristics should be combined to give a composite index.
Such a composite index represents the utility of a product. The
overall utility of a product is the sum of utilities of performance
quality characteristics (Kumar et al. [40]). If Xi is a measure of
effectiveness of an attribute i and there are n attributes evaluating the outcome space,
nthen the overall utility function is given by:
U (X ) where Ui (Xi ) is the utility of the ith
U(X1 , X2 , . . ., Xn ) =
i=1 i i
attribute. Depending upon the requirements, the attributes may be
given priorities and weights. Hence,
utility
nthe weighted form of
the
n
function is: U(X1 , X2 , . . ., Xn ) =
w U (X ) where
w =1
i=1 i i i
i=1 i
and wi is the weight assigned to attribute i (Gaitonde et al. [36]).
6.3. Taguchi Method with utility concept
An orthogonal array is a major tool in the Taguchi design, used to study several parameters by means of a single quality characteristic. The purpose of conducting an orthogonal experiment is to determine the optimum level of each controllable parameter and to establish the relative significance of the individual parameters in terms of their main effects on the response (Gaitonde et al. [37]). Maghsoodloo and Ozdemir [38] review Taguchi's contributions to the field of quality and manufacturing engineering from both a statistical and an engineering viewpoint.
6.4. Parameters tuning
In this paper, five parameters are considered, namely the cognitive factor, social factor, swarm size, number of iterations, and inertia factor. These PSO parameters are multilevel factors whose effects on the outcome are not linearly related; hence, four-level tests are used for each parameter. The PSO parameters and their levels are presented in Table 9. The Taguchi Method begins with the selection of an orthogonal array with a distinct number of levels defined for each of the parameters. The minimum number of trials in the orthogonal array is given by Nmin = (l − 1)k + 1, where k is the number of factors (here 5) and l is the number of levels (here 4). This gives Nmin = 16; hence, according to the Taguchi design concept, the L16 orthogonal array has been selected. Thus, only 16 experiments are required to study the whole set of PSO tuning parameters for each test problem. The L16 orthogonal array is illustrated in Table 10.
The objective of parameter tuning for the multi-objective problem is to optimize the comparison metrics, i.e., the spacing metric is minimized while the diversification metric and the number of Pareto solutions are maximized. The Taguchi design uses the S/N ratio rather than the mean value to translate the trial results into a value for the evaluation characteristic in the optimum setting analysis, because the S/N ratio reflects both the mean and the variation of the performance characteristic. In this paper, Taguchi parameter design with the utility concept, introduced by Kumar et al. [40], is applied. Here S is minimized, while D and the number of Pareto solutions are maximized. Hence, the "smaller the better" characteristic for S and the "larger the better" characteristic for D and the number of Pareto solutions have been selected. The S/N ratios associated with the responses S, D and the number of Pareto solutions (P) are given as:

η1 = −10 log10(S^2);  η2 = −10 log10(1/D^2);  η3 = −10 log10(1/P^2)   (36)

In the utility concept, the multi-response S/N ratio is given by (Kumar et al. [40]):

η = w1 η1 + w2 η2 + w3 η3   (37)

where w1, w2 and w3 are the weighting factors associated with the S/N ratios of the responses S, D and P, respectively. These weighting factors are selected based on the priorities among the responses to be optimized. In this paper, a weighting factor of 1/3 is used for each response, which gives equal priority to S, D and P. The computed values of the S/N ratio for each response and the multi-response S/N ratio in each trial for test problem I-1 in the orthogonal array are given in Table 11.
The effect of level i of parameter j is (Gaitonde et al. [37]):

(m)i,j = (1/l) Σ (ηi)j = (1/4) Σ (ηi)j   (38)

where the sum runs over the l = 4 trials in which parameter j is set at level i. The optimum level of a parameter is the level which gives the highest S/N ratio. The maximization of the multi-response S/N ratio over the levels of each parameter is:

ηj,opt = max_i {(m)i,j}   for j = c1, c2, N, Itr, w and i = 1, 2, 3, 4   (39)

All values of (m)i,j for test problem I-1 are summarized in Table 12, and the final PSO parameter tuning for this test problem is performed based on the values given in this table. In this manner, the optimum levels of the parameters are 1, 4, 4, 2 and 2, respectively. Hence, the best combination of parameter values is c1 = 0.5, c2 = 2, N = 200, Itr = 500 and w = 0.2. It should be noted that, due to space limitations, the orthogonal arrays, responses, S/N ratios and overall S/N ratios for the other test problems are not presented here but can be provided upon request. Table 13 includes the final results of the PSO parameter tuning for all test problems.

Table 8
Distribution functions of test problem parameters.

Class | T | Q | H
I | U(10,50) | U(5,20) | U(3,5)
II | U(50,100) | U(20,50) | U(5,7)
III | U(100,200) | U(50,70) | U(7,10)

Table 10
The L16 orthogonal array.

Experiment number | C1 | C2 | N | Itr | w
1 | 1 | 1 | 1 | 1 | 1
2 | 1 | 2 | 2 | 2 | 2
3 | 1 | 3 | 3 | 3 | 3
4 | 1 | 4 | 4 | 4 | 4
5 | 2 | 1 | 2 | 3 | 4
6 | 2 | 2 | 1 | 4 | 3
7 | 2 | 3 | 4 | 1 | 2
8 | 2 | 4 | 3 | 2 | 1
9 | 3 | 1 | 3 | 4 | 2
10 | 3 | 2 | 4 | 3 | 1
11 | 3 | 3 | 1 | 2 | 4
12 | 3 | 4 | 2 | 1 | 3
13 | 4 | 1 | 4 | 2 | 3
14 | 4 | 2 | 3 | 1 | 4
15 | 4 | 3 | 2 | 4 | 1
16 | 4 | 4 | 1 | 3 | 2

Table 12
The values of (m)i,j.

S/N ratio | C1 | C2 | N | Itr | w
Level 1 | 11.05 | 8.80 | 8.00 | 7.22 | 7.87
Level 2 | 7.62 | 8.62 | 8.65 | 9.80 | 9.42
Level 3 | 7.7 | 7.42 | 8.05 | 9.22 | 8.70
Level 4 | 7.70 | 9.30 | 9.45 | 7.90 | 8.15

6.5. Numerical results

Prior to developing the MOPSO algorithm presented in this paper, we first applied one of the most effective exact multi-objective approaches, the augmented epsilon-constraint method (AUGMECON) proposed by Mavrotas [26], an improved implementation of the well-known epsilon-constraint method, to solve the proposed bi-objective model. For our problem, the initial numerical experiments showed that AUGMECON was only able to solve the small-sized problem instances in a reasonable time. But due to the high
Table 11
S/N ratio for each response and the multi-response S/N ratio (test problem I-1).

Exp no | Levels (C1 C2 N Itr w) | S | D | P | η1 | η2 | η3 | η
1 | 1 1 1 1 1 | 11.7 | 7.4 | 33.2 | −21.3 | 17.3 | 30.4 | 8.8
2 | 1 2 2 2 2 | 2.9 | 22.1 | 12.9 | −8.6 | 26.8 | 22.2 | 13.4
3 | 1 3 3 3 3 | 24.1 | 29.5 | 29.1 | −27.6 | 29.3 | 29.2 | 10.3
4 | 1 4 4 4 4 | 29.3 | 44.2 | 37.8 | −29.3 | 32.9 | 31.5 | 11.7
5 | 2 1 2 3 4 | 24.6 | 21.7 | 20.1 | −27.8 | 26.7 | 26.0 | 8.3
6 | 2 2 1 4 3 | 32.7 | 11.2 | 29.7 | −30.2 | 20.9 | 29.4 | 6.7
7 | 2 3 4 1 2 | 13.2 | 12.8 | 11.9 | −22.4 | 22.1 | 21.5 | 7.0
8 | 2 4 3 2 1 | 44.8 | 17.5 | 48.8 | −33.0 | 24.8 | 33.7 | 8.5
9 | 3 1 3 4 2 | 9.4 | 5.8 | 24.6 | −19.4 | 15.2 | 27.8 | 7.8
10 | 3 2 4 3 1 | 16.0 | 18.6 | 18.5 | −24.0 | 25.3 | 25.3 | 8.8
11 | 3 3 1 2 4 | 53.9 | 24.6 | 25.1 | −34.6 | 27.8 | 27.9 | 7.0
12 | 3 4 2 1 3 | 39.3 | 16.9 | 31.9 | −31.8 | 24.5 | 30.0 | 7.5
13 | 4 1 4 2 3 | 22.8 | 18.0 | 44.8 | −27.1 | 25.1 | 33.0 | 10.3
14 | 4 2 3 1 4 | 29.7 | 7.5 | 27.4 | −29.5 | 17.5 | 28.7 | 5.6
15 | 4 3 2 4 1 | 41.8 | 23.1 | 11.8 | −32.4 | 27.2 | 21.4 | 5.4
16 | 4 4 1 3 2 | 9.9 | 16.6 | 16.3 | −19.9 | 24.4 | 24.2 | 9.5
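The chain from raw responses to tuned levels, Eqs. (36)–(39), can be reproduced from Tables 10 and 11. The sketch below is illustrative Python; the 16 trial ratios are taken from the last column of Table 11 and the design from Table 10.

```python
from math import log10

# Multi-response S/N ratio (Eqs. (36)-(37)) for one trial, equal weights 1/3.
def multi_sn(S, D, P, w=(1/3, 1/3, 1/3)):
    eta1 = -10 * log10(S ** 2)        # "smaller the better" for spacing S
    eta2 = -10 * log10(1 / D ** 2)    # "larger the better" for diversification D
    eta3 = -10 * log10(1 / P ** 2)    # "larger the better" for Pareto count P
    return w[0] * eta1 + w[1] * eta2 + w[2] * eta3

print(round(multi_sn(11.7, 7.4, 33.2), 1))   # trial 1 of Table 11 -> 8.8

# Level effects (Eq. (38)) and optimum levels (Eq. (39)) from the 16 trial
# ratios eta (last column of Table 11) and the L16 array of Table 10.
eta = [8.8, 13.4, 10.3, 11.7, 8.3, 6.7, 7.0, 8.5,
       7.8, 8.8, 7.0, 7.5, 10.3, 5.6, 5.4, 9.5]
L16 = [[1,1,1,1,1],[1,2,2,2,2],[1,3,3,3,3],[1,4,4,4,4],
       [2,1,2,3,4],[2,2,1,4,3],[2,3,4,1,2],[2,4,3,2,1],
       [3,1,3,4,2],[3,2,4,3,1],[3,3,1,2,4],[3,4,2,1,3],
       [4,1,4,2,3],[4,2,3,1,4],[4,3,2,4,1],[4,4,1,3,2]]

opt = []
for j in range(5):                                   # parameters c1, c2, N, Itr, w
    m = [sum(eta[t] for t in range(16) if L16[t][j] == lev) / 4
         for lev in (1, 2, 3, 4)]                    # (m)_{i,j}: mean eta per level
    opt.append(m.index(max(m)) + 1)                  # level with highest S/N ratio
print(opt)                                           # -> [1, 4, 4, 2, 2]
```

The recovered optimum levels [1, 4, 4, 2, 2] match the combination c1 = 0.5, c2 = 2, N = 200, Itr = 500, w = 0.2 reported for test problem I-1.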
Table 13
MOPSO parameters of test problems.

Test problem | C1 | C2 | N | Itr | w
I-1 | 0.5 | 2 | 200 | 500 | 0.2
I-2 | 0.5 | 1.5 | 100 | 1000 | 0.3
I-3 | 2 | 2 | 50 | 500 | 0.4
I-4 | 0.5 | 0.5 | 20 | 200 | 0.2
I-5 | 2 | 1 | 200 | 200 | 0.2
I-6 | 1.5 | 1.5 | 100 | 1000 | 0.4
I-7 | 0.5 | 1.5 | 100 | 1000 | 0.2
I-8 | 2 | 2 | 50 | 500 | 0.1
I-9 | 1.5 | 1 | 20 | 200 | 0.2
I-10 | 1.5 | 0.5 | 20 | 500 | 0.4
II-1 | 0.5 | 2 | 100 | 200 | 0.4
II-2 | 2 | 2 | 20 | 200 | 0.4
II-3 | 1.5 | 1.5 | 100 | 500 | 0.3
II-4 | 1.5 | 1 | 50 | 2000 | 0.1
II-5 | 2 | 1 | 20 | 500 | 0.4
II-6 | 0.5 | 2 | 100 | 500 | 0.1
II-7 | 1.5 | 0.5 | 200 | 1000 | 0.3
II-8 | 2 | 1 | 50 | 1000 | 0.1
II-9 | 0.5 | 0.5 | 50 | 500 | 0.2
II-10 | 0.5 | 2 | 100 | 200 | 0.1
III-1 | 1 | 1.5 | 50 | 500 | 0.2
III-2 | 2 | 2 | 20 | 1000 | 0.1
III-3 | 1.5 | 0.5 | 200 | 200 | 0.4
III-4 | 1.5 | 0.5 | 200 | 200 | 0.3
III-5 | 2 | 1.5 | 20 | 200 | 0.3
III-6 | 1 | 2 | 100 | 500 | 0.1
III-7 | 0.5 | 1 | 100 | 200 | 0.3
III-8 | 2 | 0.5 | 200 | 500 | 0.2
III-9 | 1.5 | 0.5 | 50 | 2000 | 0.4
III-10 | 0.5 | 2 | 50 | 200 | 0.3
complexity of the problem, the run time of AUGMECON increased exponentially as the problem size grew. Thus, applying a multi-objective metaheuristic such as MOPSO to solve the medium- and large-sized problem instances was inevitable. However, in order to show the effectiveness of the proposed MOPSO, both AUGMECON and MOPSO were used to solve all the test problems.
The AUGMECON was coded in GAMS and the CPLEX 7.5 solver was used for solving the resulting single-objective models (on the same platform). The number of efficient solutions in AUGMECON is set in advance; here the model was set to generate 10 solutions for each test problem. Table 14 summarizes the comparative results of MOPSO and AUGMECON.
As can be observed from Table 14, the computational time of AUGMECON grows rapidly with the size of the test problems, whereas the proposed MOPSO obtains comparable solutions in less CPU time. Notably, each GAMS run was stopped after 2 h and the solutions obtained by that point were reported as the final non-dominated solutions. The importance of the proposed MOPSO is even more notable where GAMS could not find any solution within 2 h (see test problems III-5 up to III-10), while the proposed MOPSO could still present a set of non-dominated solutions, allowing the decision maker to make the final decision.
For the problem set of class I, which is close in nature to a real-life data set, the computational results show that the optimization output improves machine utilization on average by 16% when compared with the results generated in real life by a combination of heuristic approaches.
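The epsilon-constraint mechanics underlying AUGMECON can be illustrated on a toy bi-objective maximization over an enumerable set of points. This sketch is not the authors' GAMS implementation, and it omits AUGMECON's augmentation term (the scaled slack added to the objective to filter out weakly efficient solutions).

```python
# Toy bi-objective maximization over an enumerable feasible set: the
# epsilon-constraint method optimizes f1 while constraining f2 >= eps,
# sweeping eps over the observed range of f2 to trace the efficient frontier.
points = [(1, 9), (3, 8), (4, 4), (6, 6), (7, 2), (9, 1), (5, 5)]  # (f1, f2)

def eps_constraint(points, eps):
    """Maximize f1 subject to f2 >= eps; None if the constraint is infeasible."""
    feasible = [p for p in points if p[1] >= eps]
    return max(feasible, key=lambda p: p[0]) if feasible else None

frontier = set()
for eps in sorted({f2 for _, f2 in points}):   # one single-objective solve per eps
    sol = eps_constraint(points, eps)
    if sol:
        frontier.add(sol)
print(sorted(frontier))   # the non-dominated points, e.g. (4,4) and (5,5) drop out
```

In AUGMECON the inner `max` is a MIP solved by CPLEX, and the number of grid points for eps fixes the number of efficient solutions generated (10 per test problem here).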
Table 14
Numerical results of test problems.

Test problem | No of variables (Binary / Integer) | No of constraints | Solution time in s (PSO / AUGMECON) | S (PSO / AUGMECON) | D (PSO / AUGMECON) | No of efficient solutions (PSO / AUGMECON)
I-1 | 684 / 36 | 947 | 5.8 / 44.9 | 24.3 / 10.6 | 25.7 / 13.9 | 24 / 10
I-2 | 957 / 87 | 1393 | 6.6 / 48.5 | 18.9 / 14.5 | 7.0 / 24.3 | 11 / 10
I-3 | 984 / 123 | 1558 | 7.4 / 84.3 | 46.8 / 27.8 | 16.7 / 22.3 | 7 / 10
I-4 | 1280 / 80 | 1739 | 9.3 / 223.1 | 4.7 / 13.7 | 13.6 / 17.7 | 36 / 10
I-5 | 1276 / 116 | 1848 | 11.1 / 330.7 | 41.6 / 18.6 | 24.1 / 22.2 | 22 / 10
I-6 | 1836 / 102 | 2379 | 15.7 / 986.9 | 51.5 / 36.4 | 20.8 / 18.1 | 5 / 10
I-7 | 2058 / 147 | 2772 | 18.4 / 1427.5 | 52.5 / 17.9 | 12.8 / 19.8 | 17 / 10
I-8 | 2208 / 138 | 2895 | 20.0 / 1602 | 42.6 / 26.9 | 11.4 / 18.5 | 21 / 10
I-9 | 3344 / 176 | 4235 | 36.8 / 2838.9 | 25.4 / 17.1 | 22.3 / 19.8 | 6 / 10
I-10 | 3325 / 175 | 4239 | 37.1 / 3032.1 | 28.4 / 52.4 | 12.5 / 14.8 | 30 / 10
II-1 | 11,088 / 528 | 13,527 | 101.9 / 7052.9 | 14.4 / 32.5 | 17.0 / 19.9 | 8 / 10
II-2 | 11,700 / 260 | 13,231 | 105.1 / >7200 | 35.5 / 24.7 | 21.5 / 17.0 | 12 / 4
II-3 | 12,383 / 427 | 14,543 | 127.4 / >7200 | 19.6 / 29.2 | 24.9 / 22.8 | 11 / 4
II-4 | 13,260 / 510 | 15,684 | 136.9 / >7200 | 51.0 / 20.3 | 20.1 / 15.7 | 10 / 3
II-5 | 15,120 / 432 | 17,327 | 178.8 / >7200 | 39.6 / 25.2 | 5.5 / 10.6 | 19 / 4
II-6 | 16,650 / 370 | 18,643 | 194.9 / >7200 | 24.1 / 15.0 | 11.5 / 19.2 | 6 / 2
II-7 | 17,520 / 438 | 19,812 | 219.6 / >7200 | 40.4 / 32.4 | 25.3 / 15.2 | 13 / 3
II-8 | 17,787 / 539 | 20,467 | 254.9 / >7200 | 17.1 / 41.2 | 24.8 / 12.5 | 18 / 3
II-9 | 18,228 / 588 | 21,037 | 294.8 / >7200 | 25.5 / 29.9 | 11.0 / 12.0 | 20 / 2
II-10 | 22,878 / 588 | 25,682 | 381.8 / >7200 | 49.7 / 35.3 | 24.2 / 17.1 | 9 / 3
III-1 | 88,335 / 1359 | 95,073 | 644.2 / >7200 | 37.4 / N/A | 2.5 / N/A | 11 / 1
III-2 | 61,360 / 1040 | 66,577 | 673.5 / >7200 | 14.4 / 22.6 | 10.1 / 25.8 | 7 / 2
III-3 | 65,880 / 1080 | 71,294 | 739.0 / >7200 | 45.1 / N/A | 22.1 / N/A | 21 / 1
III-4 | 83,448 / 1464 | 90,382 | 811.2 / >7200 | 34.8 / N/A | 8.2 / N/A | 14 / 1
III-5 | 85,932 / 1386 | 92,727 | 926.6 / >7200 | 10.5 / N/A | 12.6 / N/A | 22 / 0*
III-6 | 86,400 / 1350 | 93,194 | 1007.8 / >7200 | 14.1 / N/A | 6.2 / N/A | 12 / 0*
III-7 | 100,650 / 1650 | 108,614 | 1201.8 / >7200 | 33.7 / N/A | 6.1 / N/A | 6 / 0*
III-8 | 106,880 / 1670 | 114,986 | 1365.9 / >7200 | 34.8 / N/A | 16.6 / N/A | 5 / 0*
III-9 | 106,960 / 1910 | 115,890 | 1478.4 / >7200 | 22.1 / N/A | 8.0 / N/A | 11 / 0*
III-10 | 111,996 / 1647 | 119,972 | 1743.7 / >7200 | 32.1 / N/A | 6.1 / N/A | 18 / 0*

* After 2 h no feasible solution is found.
7. Concluding remarks
In this paper, we have proposed a novel bi-objective mathematical model to make the best decisions relating to the heads of multi-head beam-type placement machines. Owing to the importance of the heads, optimizing their performance has a great effect on the whole production process. In the present study, a new criterion, the appropriateness factor, is introduced to evaluate the compatibility of each component type-nozzle pair. Selecting an appropriate set of nozzles for handling the components on the heads lets the robotic arm move faster; hence, the machine cycle time can be improved considerably by adopting appropriate nozzles. The original bi-objective model is first solved using an exact method, AUGMECON, but due to the complexity of the model we also propose a MOPSO algorithm to generate a set of efficient solutions for medium- and large-sized problems. The parameters of MOPSO, which have a great impact on the performance of the algorithm, are tuned with the Taguchi Method; applying the Taguchi Method reduces the number of experiments needed for parameter tuning from 1024 (4^5) to 16 for each test problem.
There are potentially some opportunities for further research in the optimization of multi-head beam-type placement machines. For example, considering synchronous nozzle exchange on the heads can be used to define new problems. Finding the optimal nozzle selection and component allocation for the heads when more than one PCB type is processed simultaneously in a multi-head beam-type placement machine can be another field of research.
Acknowledgement
This study was supported by the University of Tehran under the
research grant no. 8109920/1/03. The authors are grateful for this
financial support.
References
[1] Hardas C, Doolen T, Jensen D. Development of a genetic algorithm for component placement sequence optimization in printed circuit board assembly.
Computers and Industrial Engineering 2008;55:165–82.
[2] Ayob M, Kendall G. A survey of surface mount device placement machine optimization: machine classification. European Journal of Operational Research
2008;186:893–914.
[3] Sun D-S, Lee T-E. A branch-and-price algorithm for placement routing
for a multi-head beam-type component placement tool. OR Spectrum
2008;30:515–34.
[4] Rogers P, Warrington R. Production planning for surface mount technology
lines. International Journal of Production Research 2004;42:2683–718.
[5] McGinnis L, Ammons J, Carlyle M, Cranmer L, Depuy G, Ellis K, Tovey C, Xu H.
Automated process planning for printed circuit card assembly. IIE Transactions
1992;24:18–30.
[6] Crama Y, Klundert J, Spieksma F-C-R. Production planning problems in printed
circuit board assembly. Discrete Applied Mathematics 2002;123:339–61.
[7] Lee W, Lee S, Lee B, Lee Y. An efficient planning algorithm for multi-head surface
mounting machines using a genetic algorithm. Journal of Universal Computer
Science 2000;5:833–54.
[8] Ball M-O, Magazine M-J. Sequence of insertions in printed circuit board assembly. Operations Research 1988;36:192–201.
[9] Leipala T, Nevalainen O. Optimization of the movements of a component placement machine. European Journal of Operational Research 1989;38:167–77.
[10] Khoo L, Loh K. A genetic algorithms enhanced planning systems for surface
mount PCB assembly. International Journal of Advanced Manufacturing Technology 2000;16:289–96.
[11] Ho W, Ji P. A genetic algorithm to optimize the component placement process
in PCB assembly. International Journal of Advanced Manufacturing Technology
2005;26:1397–401.
[12] Moon G. Efficient operation methods for a component placement machine
using the patterns on printed circuit boards. International Journal of Production
Research 2009, http://dx.doi.org/10.1080/00207540802553608.
[13] Van Laarhoven PJM, Zijm WHM. Production preparation and numerical control in PCB assembly. International Journal of Flexible Manufacturing Systems
1993;5:187–207.
[14] Magyar G, Johnson M, Nevalainen O. On solving single machine optimization problems in electronics assembly. Journal of Electronics Manufacturing
1999;9:249–67.
[15] Hong J, Lee W, Lee S, Lee B, Lee Y. An efficient production planning algorithm
for multi-head surface mounting machines using the biological immune algorithm. International Journal of Fuzzy Systems 2000;2:45–53.
[16] Jeevan K, Parthiban A, Seetharamu K-N, Azid IA, Quadir G-A. Optimization of
PCB component placement using genetic algorithms. Journal of Electronics
Manufacturing 2004;11:69–79.
[17] Grunow M, Gunther H-O, Schleusener M, Yilmaz I-O. Operations planning for
collect-and-place machines in PCB assembly. Computers and Industrial Engineering 2004;47:409–29.
[18] Sun D-S, Lee T-E, Kim K-H. Component allocation and feeder arrangement for a
dual-gantry multi-head surface mounting placement tool. International Journal
of Production Economics 2005;95:245–64.
[19] Raduly-Baka C, Knuutila T, Johnsson M, Nevalainen OS. Selecting the
nozzle assortment for a Gantry-type placement machine. OR Spectrum
2007;30:493–513.
[20] Kulak O, Yilmaz I-O, Gunther H-O. PCB assembly scheduling for collect-and-place machines using genetic algorithms. International Journal of Production
Research 2007;45:3949–69.
[21] Li S, Hu C, Tian F. Enhancing optimal feeder assignment of the multi-head
surface mounting machine using genetic algorithms. Applied Soft Computing
2008;8:522–9.
[22] Ashayeri J, Ma N, Sotirov R. An aggregated optimization model for multihead SMD placements. Computers and Industrial Engineering 2011;60:
99–105.
[23] Ayob M, Kendall G. The optimisation of the single surface mount device placement machine in printed circuit board assembly: a survey. International Journal
of Systems Science 2009;40:553–69.
[24] Yu G, Kouvelis P. Complexity results for a class of min-max problems with
robust optimization applications. In: Paradalos PM, editor. Complexity in
numerical optimization. Singapore: World Scientific; 1993. p. 501–11.
[25] Miettinen K. Nonlinear multi-objective optimization. Boston: Kluwer Academic
Publishers; 1999.
[26] Mavrotas G. Effective implementation of the ε-constraint method in Multi-Objective Mathematical Programming problems. Applied Mathematics and Computation 2009;213:455–65.
[27] Eberhart RC, Kennedy J. A new optimizer using particle swarm theory. In:
Proceedings of the sixth international symposium on micro machine and
human science. 1995. p. 39–43.
[28] Coello CAC, Pulido GT, Lechuga MS. Handling multiple objectives with particle swarm optimization. IEEE Transactions on Evolutionary Computation
2004;8:256–79.
[29] Engelbrecht AP. Fundamentals of computational swarm intelligence. John
Wiley & Sons; 2005.
[30] Shi Y, Eberhart R. A modified particle swarm optimizer. In: Proceedings of the IEEE congress on evolutionary computation. 1998. p. 69–73.
[31] Moore J, Chapman R. Application of particle swarm to multi-objective optimization. Department of Computer Science and Software Engineering, Auburn
University; 1999.
[32] Mostaghim S, Teich J. Strategies for finding good local guides in multiobjective particle swarm optimization (MOPSO). In: Proc. IEEE swarm intelligence
symposium. 2003. p. 26–33.
[33] Jiang Y, Hu T, Huang C, Wu X. An improved particle swarm optimization algorithm. Applied Mathematics and Computation 2007;193:231–9.
[34] Niu B, Zhu Y, He X, Wu H. MCPSO: a multi-swarm cooperative particle swarm
optimizer. Applied Mathematics and Computation 2007;185:1050–62.
[35] Kennedy J, Eberhart RC. A discrete binary version of the particle swarm
algorithm. In: Proceedings of the 1997 conference on systems, man, and cybernetics. 1997. p. 4104–9.
[36] Rahimi-Vahed AR, Rabbani M, Tavakkoli-Moghaddam R, Torabi SA, Jolai F. A
multi-objective scatter search for a mixed-model assembly line sequencing
problem. Advanced Engineering Informatics 2007;21:85–99.
[37] Gaitonde VN, Karnik SR, Davim JP. Multiperformance optimization in turning of free-machining steel using Taguchi method and utility concept. Journal of Materials Engineering and Performance 2009;18:231–6.
[38] Maghsoodloo S, Ozdemir G. Strengths and limitations of Taguchi’s contributions to quality, manufacturing, and process engineering. Journal of
Manufacturing Systems 2004;23:73–126.
[39] Ross PJ. Taguchi techniques for quality engineering. New York: McGraw-Hill;
1996.
[40] Kumar P, Barua PB, Gaindhar JL. Quality optimization (multi-characteristic)
through Taguchi’s technique and utility concept. International Journal of Quality and Reliability Engineering 2000;16:475–85.