Basel II established three interrelated, complementary and mutually reinforcing pillars designed to manage the risks arising from credit institutions' activities. Among these, operational risk is difficult to identify and treat, and a bank's actual exposure to it is also difficult to quantify.
First Pillar - Minimum Capital Requirements
The first pillar (capital requirements) deals with the calculation of regulatory capital against the risks faced by a bank, namely credit, operational and market risk.
Credit risk can be calculated at different degrees of complexity through the Standardized Approach, the Foundation IRB (Internal Ratings-Based) approach and the Advanced IRB approach.
For the calculation of operational risk three methods exist:
Basic Indicator Approach
Standardized Approach
Advanced Measurement Approach
As for market risk, the preferred approach is VaR (Value at Risk).
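To make the VaR approach concrete, the following sketch (Python, with simulated data; the confidence level and sample are illustrative assumptions, not Basel parameters) computes a one-day Value at Risk by historical simulation, i.e. as a percentile of the historical return distribution:

```python
import numpy as np

def historical_var(returns, confidence=0.99):
    """One-day VaR via historical simulation: the loss exceeded on only
    (1 - confidence) of past days, i.e. a lower percentile of returns,
    sign-flipped so that VaR is reported as a positive loss."""
    return -np.percentile(returns, 100 * (1 - confidence))

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, 500)   # 500 simulated daily returns
print(f"99% one-day VaR: {historical_var(returns):.2%} of portfolio value")
```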
Second Pillar - Supervisory Review Process
The second pillar (supervisory review process) complements the capital rules of the first pillar, giving regulators stronger review powers than those provided under Basel I. Moreover, it provides rules and measures to address other risks such as systemic risk, concentration risk, strategic risk, reputation risk, liquidity risk and legal risk.
The result of the second pillar of Basel II was the creation of the Internal Capital Adequacy Assessment
Process (ICAAP).
Third Pillar - Market Discipline
The third pillar (market discipline) complements the minimum capital requirements and the supervisory review process by enabling the market to assess the capital adequacy of financial institutions. Market discipline requires the exchange of information, not only among the banks themselves but also with investors, financial analysts, competing banks and rating agencies. Banks are therefore required to disclose details of their financial exposures, risk assessment processes and capital adequacy ratios. Such disclosures should be made at least twice a year, except for qualitative disclosures (a summary of the general risk management objectives and policies), which can be made on an annual basis.
The three pillars of Basel II are not independent of each other; on the contrary, they are closely linked and mutually reinforcing. The correct application of the first pillar's rules on minimum capital requirements presupposes that supervisors can verify compliance with those rules using the powers provided by the second pillar. Moreover, the increased requirements for the publication of data and information relating to the risk management of financial institutions, provided by the third pillar, create the right incentives for institutions to improve the risk management processes they use.
In addition, new rules were introduced relating to the risk weighting of securitized assets, as well as to current liabilities subject to capital requirements. Apart from the changes made to the basic methodology, new regulations were introduced for the prevention and measurement of credit risk. These revisions enable banks to develop internal credit risk assessment systems at various levels of complexity, so as to achieve more accurate risk weights, subject to supervisory approval.
1.4 Basel II Shortcomings
In the aftermath of the catastrophic effects of the recent financial crisis and the consequent global recession, the authorities were motivated to review the international regulatory framework for the banking system, a framework embodied at the time in Basel II. The Basel II agreements, developed by the Basel Committee, address a whole range of regulatory and supervisory issues, including liquidity standards, credit risk, operational risk, market risk and accounting principles. However, their main feature is that banks must comply with a minimum Tier 1 capital ratio of 4% of risk-weighted assets. The objective of this capital requirement is to absorb unexpected losses, but the financial crisis showed, in the most harmful way, that this expectation was not met.
The crisis highlighted a number of serious shortcomings of Basel II:
The capital adequacy ratio of 4% was insufficient to offset the huge losses that banks suffered.
The responsibility for assessing the risk of the counterparty is assigned to the credit rating agencies
(S&P, Fitch, Moody’s), which have proved to be vulnerable to potential conflicts of interest.
Capital requirements are pro-cyclical: when the global economy is growing and asset prices rise, counterparty and country risks tend to fall, so the capital requirement is correspondingly lower. However, in a recession the opposite occurs: banks' capital requirements increase just when raising capital is hardest (see the sketch after this list).
Basel II provides incentives for greater use of the process of securitization. This happens when the
financial institutions re-package loans into asset-backed securities and then move them off their
balance sheets, so as to reduce their assets’ risk weighting. As a result, this process allowed many
banks to reduce their capital requirements and to take risks, while increasing their leverage.
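The pro-cyclicality point can be illustrated with a toy calculation (a minimal sketch with invented exposures and risk weights, not the Basel formulas): the same portfolio requires less capital when perceived risk, and hence risk weights, fall in a boom, and more when they rise in a downturn.

```python
# Illustrative exposures, in millions; risk weights are invented for the sketch.
exposures = {"mortgages": 100.0, "corporate_loans": 50.0}

def required_capital(risk_weights, ratio=0.08):
    """Capital requirement = ratio x sum of risk-weighted exposures."""
    rwa = sum(exposures[k] * risk_weights[k] for k in exposures)
    return ratio * rwa

boom_weights = {"mortgages": 0.35, "corporate_loans": 0.75}  # low perceived risk
bust_weights = {"mortgages": 0.75, "corporate_loans": 1.50}  # after downgrades

print(required_capital(boom_weights))  # 5.8: capital freed up in good times
print(required_capital(bust_weights))  # 12.0: more capital needed mid-recession
```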
1.5 The liquidity risk factor in Basel II
The purpose of Basel II is to provide regulatory improvements over the requirements of Basel I. One of the improvements is the explicit reference to the term "liquidity risk", which was not foreseen in the provisions of the previous supervisory framework.
The term liquidity risk means the potential inability of banks to cope with immediate obligations when they
become due, without incurring excessive costs. Liquidity risk arises from the very nature of banking
activities, as part of their intermediary operation, where banks convert operating current liabilities (e.g.
deposits) into long-term assets (e.g. loans). Liquidity risk may arise either on the liabilities side, as a failure to roll over maturing liabilities or unexpected deposit withdrawals, or on the assets side, as an inability to liquidate assets or a higher-than-expected drawdown of approved credit lines by customers. The aftermath of
the 9/11 attacks and the 2007-2008 global credit crisis are two relatively recent examples of times when
liquidity risk rose to abnormally high levels. Thus, liquidity risk was banks’ Achilles heel.73
Rising liquidity risk often becomes a self-fulfilling prophecy, since panicky investors try to sell their
holdings at any price, causing widening bid-ask spreads and large price declines, which further contribute to
market illiquidity and so on.
The literature identifies two dimensions of liquidity risk:
The Funding Liquidity Risk which refers to the inability of a bank to find sufficient resources to fund
its obligations.
The Market Liquidity Risk which refers to the impossibility of immediate liquidation of an
investment position without significant impact on its market value.
Effective liquidity risk management is crucial for ensuring the proper functioning of a bank, as a lack
of liquidity may threaten its own viability. For example, the lack of confidence of depositors can lead to
sudden withdrawals (Bank Run) and the bank's failure to continue its normal operation. It is noted that
liquidity problems can occur even when the banks have sufficient capital adequacy.
The main determinants of liquidity risk are as follows:
The availability of cash and highly liquid assets, which allows the bank to cover unpredictable cash
needs.
The maturity mismatch between receivables and liabilities, i.e. how much longer-term the bank's claims are relative to its obligations, and
The structure and the concentration level of the bank's funding sources.
The stock of cash and highly liquid assets that banks hold mitigates the liquidity risk they face, since, if necessary, banks may sell part of this stock to cover short-term cash needs. Assets usually considered readily liquid include government and corporate bonds, shares of companies traded on regulated markets and exchange-traded funds (ETFs).
73 An Achilles heel (metaphor) is a deadly weakness in spite of overall strength, which can actually or potentially lead to
downfall. In Greek mythology, when Achilles was a baby, it was foretold that he would die young. To prevent his death, his
mother Thetis took Achilles to the River Styx, which was supposed to offer powers of invulnerability, and dipped his body into
the water. But as Thetis held Achilles by the heel, his heel was not washed over by the water of the magical river, making it the
only vulnerable part of his body.
As noted above, liquidity risk is inherent in the intermediary function of banks and cannot be eliminated. The greater the maturity mismatch between receivables and liabilities, the greater the liquidity risk, since the probability that future inflows will not suffice to cover outflows increases.
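A simple way to operationalize the maturity-mismatch determinant is a maturity-gap (liquidity-gap) table. The sketch below (with invented buckets and figures, purely for illustration) cumulates inflows minus outflows per maturity bucket and flags buckets where cumulative outflows exceed inflows:

```python
from itertools import accumulate

buckets  = ["0-1m", "1-3m", "3-12m", "1-5y"]
inflows  = [40.0, 30.0, 60.0, 120.0]   # maturing assets per bucket, millions
outflows = [70.0, 45.0, 50.0,  60.0]   # maturing liabilities per bucket, millions

gaps = [i - o for i, o in zip(inflows, outflows)]
for bucket, cum_gap in zip(buckets, accumulate(gaps)):
    status = "OK" if cum_gap >= 0 else "funding needed"
    print(f"{bucket}: cumulative gap {cum_gap:+.1f} ({status})")
```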
Factors closely related to banks' liquidity risk are also the structure and the concentration level of funding sources. In the traditional banking model, customer deposits are the primary source of funding for banks' activities, while the interbank market and the financial markets (capital and money) usually play a subsidiary role. This mix of funding sources has the advantage that deposits are a relatively stable and low-cost source of capital. However, the relatively long period of low interest rates in the years before the outbreak of the financial crisis prompted investors to search for higher yields through alternative investments, a development that made it more difficult for banks to maintain their deposit bases.
Furthermore, the big increase in demand for loans by the private sector encouraged the development of alternative sources of funding beyond deposits, such as the interbank market and the issuance of securities (e.g. senior debt and subordinated debt). But the key factor that opened Pandora's box for banks was the widespread practice of creating credit claims and then securitizing and distributing them to the market.
Essentially, there was a major shift from the traditional "originate and hold" to the "originate and distribute" banking model. So, as became clear during the recent crisis, the over-reliance of credit and other financial institutions on the money markets and on loan securitization for their liquidity resulted in a dramatic rise in liquidity risk, with all the known consequences.
1.6 Liquidity Risk Management under Basel II framework
From all the above it is more than obvious that an active and effective management of liquidity risk is
required. Policies that help to reduce liquidity risk are:
Holding liquidity reserves in the form of readily marketable assets.
Diversification of liquidity sources, i.e. avoiding dependence on a particular market or product.
Balanced Structure of Financing as to maturity (long–short term), currencies (domestic, foreign) and
sources (retail deposits, interbank, credit titles, etc.) depending on the institutions’ needs.
Setting internal limits in order to restrict and control liquidity risk, as sketched below.
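As a concrete (and deliberately simplified) illustration of the last two policies, the sketch below checks a funding-concentration limit; the funding mix and the threshold are invented assumptions, not regulatory figures:

```python
# Shares of total funding, in percent (illustrative).
funding = {"retail_deposits": 45.0, "interbank": 35.0,
           "debt_securities": 15.0, "other": 5.0}

MAX_SHARE = 30.0  # invented internal limit per wholesale funding source

for source, share in funding.items():
    if source != "retail_deposits" and share > MAX_SHARE:
        print(f"LIMIT BREACH: {source} at {share}% (limit {MAX_SHARE}%)")
    else:
        print(f"{source}: {share}% within limits")
```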
The recent global financial crisis was, and still remains, one of the most hotly debated issues. Much of the criticism was directed at Basel II for failing to prevent and properly handle the situation. The main reasons were:
a) It promoted the use of complex financial products (derivatives, etc.) and guarantees as means of reducing credit risk, but at the same time encouraged dealing in investment vehicles characterized by a lack of transparency, the most representative example being securitization.
b) The minimum regulatory capital proved insufficient to deal with losses due to high leverage and reduced
ability to absorb losses.
c) It promoted excessive competition between credit institutions, in which the return on investment was evaluated relative to the capital committed (use of RAROC - Risk-Adjusted Return on Capital).
d) It amplified the use of creative accounting and of techniques for covering up credit risk (understating Risk-Weighted Assets).
e) It did not require credit institutions to improve their capital adequacy ratios by raising capital rather than by shedding risk-weighted assets, which in times of economic recession leads to selling under pressure (fire sales), intensifying the pro-cyclical nature of the system.
f) It did not take into account the endogenous nature of risk formation. The internal models used by credit institutions were based on the assumption that credit and operational risk are exogenous processes that can be quantified. This view ignores the influence that those making the predictions exert on the development of risk itself, an assumption that holds only in periods of stability, when the heterogeneous reactions and predictions of economic agents cancel out. In contrast, during turbulent periods, predictions and reactions show a high degree of homogeneity. This implies that the risk formation process is endogenous and therefore very difficult to quantify.
1.7 The actions of the supervising authorities in response to the recent credit crunch
The current crisis of the international financial system proves that the prudential rules and the implemented
policies to date were apparently insufficient to prevent the collapse of large financial groups that led to a
deep economic recession. Therefore, new improved mechanisms were required to ensure financial stability
and efficiency. It was actually an urgent call for a new strategy to support and radically reform the (not only European) financial system. So, in order to address the system weaknesses revealed by the recent recession, the Basel Committee introduced a series of changes to the international regulatory framework. The Basel III accord includes a set of proposed changes to international rules on banks' capital adequacy and liquidity, as well as certain matters relating to banking supervision. Some of these changes operate at the micro-prudential level (leverage ratio, liquidity ratios), aiming to strengthen the resilience (capital requirements) of individual banks in times of stress. It should be noted that a higher amount of capital implies a greater ability to absorb losses, which automatically means that banks can withstand deeper recessions. At the same time, other changes operate at the macro-prudential level (protection against systemic risk) and focus on addressing systemic risk and pro-cyclicality, as these risks usually build up over time. It is obvious that the two approaches, micro- and macro-prudential, are interconnected, as greater resilience of capital adequacy at the individual bank level reduces the risk of a systemic crisis and strengthens the risk coverage of the capital requirements.
2. BASEL III & CAPITAL ADEQUACY
The application of the Basel III framework, from early 2013 and gradually over a six-year period, aims at a significant strengthening of the provisions of the pre-existing Basel II regulatory framework and of the stability of the international banking system. This is probably the most important initiative of the Committee following the recent financial crisis.
2.1 The Institutional Framework of Basel III
Since 2009, BCBS has issued a series of advisory documents to revise the existing guidelines on the banking
sector. The proposed regulations have been the subject of extensive debate among central bankers and
various experts, helping to create a new set of rules that shall address the shortcomings of Basel II. In
November 2010, at the summit of the G-20 in Seoul, the final text of Basel III was approved.
Briefly, the proposed new rules contain the following:
The Tier 1 capital ratio increases from 4% to 6%.
The required ratio of common equity to risk-weighted assets increases from 2% to 4.5%.
Under Basel III, common equity relative to risk-weighted assets (RWA) becomes the reference point, rather than the Tier 1 capital ratio alone.
The new rules introduce a capital conservation buffer of 2.5%, which must consist of common stock. In stress periods (when a bank's capital adequacy ratio falls below 7%), financial institutions may draw on this buffer, but distributions of dividends and bonuses are then cut back.
These measures are expected to solve the problems of Basel II, according to which capital requirements
were insufficient to cover large amounts of losses.
The Basel Committee also proposes the establishment of a countercyclical capital surplus, which will be
between 0% and 2.5% and will apply only to periods of excessive credit growth (based on the discretion of
the national regulatory authorities). The purpose of this rule is to correct the pro-cyclicality of Basel II,
particularly in periods of economic growth.
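A minimal sketch of the resulting arithmetic (the headline ratios come from the text; the bank's balance-sheet figures and the countercyclical setting are invented for illustration):

```python
MIN_CET1 = 0.045          # minimum common equity ratio under Basel III
CONSERVATION = 0.025      # capital conservation buffer
countercyclical = 0.01    # set by national authorities within 0% - 2.5%

required = MIN_CET1 + CONSERVATION + countercyclical
rwa = 800.0               # illustrative risk-weighted assets, millions
cet1 = 60.0               # illustrative common equity, millions

print(f"Required ratio: {required:.1%}; bank ratio: {cet1 / rwa:.1%}")
if cet1 / rwa < required:
    print("Bank is inside the buffer range: dividend payouts are restricted")
```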
The main changes are depicted in the next figure:
Figure 2: BASEL III compared to BASEL II
Additionally, the proposed regulations aim to strengthen the system by introducing a leverage ratio of 3%, meaning that in each case the ratio of capital to total assets should be above this minimum limit.
Finally, the big banks will have to comply with higher capital requirements, which have not yet been determined. All these settings for capital requirements should help strengthen the stability of the financial sector.
Nevertheless, from this reform certain risks arise.
Firstly, the timing of the application of these regulations is relatively loose, so as to avoid any negative
impact on credit conditions. Most regulations will be implemented gradually between the period 2013-2019,
leaving enough time for national regulatory authorities and financial institutions to prepare for higher
capital requirements, without significantly affecting the level of borrowing. Thus, the proposed final
structure of the enhanced regulatory capital as a percentage of risk-weighted assets, provided by the Basel
III accord, is as follows:
TABLE 1: Capital requirements (as % of risk-weighted assets)

                              Basel II    Basel III
COMMON EQUITY CAPITAL
  Minimum                     2%          4.5%
  Conservation buffer         0%          2.5%
  Total required              2%          7%
TIER 1 CAPITAL
  Minimum                     4%          6%
  Total required              -           8.5%
TOTAL CAPITAL
  Minimum                     8%          8%
  Total required              -           10.5%
The ratios of the above categories of capital to risk-weighted assets should always exceed the corresponding minimum standards. However, there is a risk that the implementation of Basel III would require some overleveraged and small banks to restrict credit supply, at least temporarily. In particular, this is likely to create tighter credit conditions for small and medium-sized enterprises (SMEs) as well as for start-ups.
2.2 Basel III Capital Requirements
After a long period of favorable economic conditions, the international financial system had to deal with two shocks in rapid succession: the crisis of mortgage loans to borrowers of low credit quality (sub-prime) and a significant increase in commodity prices. The two events, although disconnected, created pressures in the international financial system, in terms of both financial stability and inflation. Central banks reacted by injecting liquidity into the system and by reducing interest rates to unusually low levels.
The global financial crisis occurred at the beginning of the implementation of a major effort to amend the
prudential framework for capital requirements of Basel II. In this sense, Basel II couldn’t be tested for its
ability to mitigate the effects of crisis. At the same time, however, the lessons from the crisis highlighted
some obvious weaknesses of Basel II and in international forums the debate on the future shape of banking
supervision had already started, concerning the necessary modifications to Basel II. It is clear that Basel
II underestimated some significant risks and generally overestimated the ability of banks to manage their
risks effectively.
Whatever form supervision takes in the foreseeable future, it is certain that the large and complex banking organizations (the so-called "systemically important" ones) will face stricter prudential requirements, restrictions on forms of risk taking, better corporate governance and more (and better-quality) capital. The latter was the subject of a notice of the Basel Committee of 12/9/2010 related to the new capital adequacy framework.
Figure 3: New minimum capital requirements proposals
As seen in the above figure, the new framework introduces a new minimum level (4.5%) of capital with high loss-absorbing capacity. The Tier 1 ratio also increases to 6%, while the total capital adequacy ratio remains at 8%. Further, a conservation buffer of 2.5% above the minimum was agreed, to be covered by common equity. The purpose of the conservation buffer is to ensure that banks maintain a capital "cushion" that can be used to absorb losses during periods of financial crisis. A counter-cyclical buffer of between 0% and 2.5% of share capital, to be implemented at the national level, was also provided for. The objective of these additional "cushion" funds is to achieve the broader macro-prudential goal of creating a "safety net" against risk concentration at the systemic level. Finally, it should be noted that a transitional period of eight years exists and the above recommendations will be fully implemented on 01.01.2019.
2.3 Proposals of the Basel III Framework concerning liquidity risk
The new banking regulatory framework established for the first time at international level the two following
liquidity ratios:
Table 3: Liquidity ratios according to Basel III regulatory framework
1. The LCR is designed to ensure that financial institutions have the necessary assets on hand to ride out short-term liquidity disruptions. Banks are required to hold an amount of highly liquid assets, such as cash or Treasury bonds, equal to or greater than their net cash outflows over a 30-day period (i.e. at least 100% coverage). The liquidity coverage ratio started to be regulated and measured in 2011, but the full 100% minimum won't be enforced until 2015.
2. NSFR compares the amount of a firm’s available stable funding (ASF, the ratio’s numerator) to its
required stable funding (RSF, the ratio’s denominator) to measure how the firm’s asset base is
funded. This indicator aims to encourage the use of longer-term financing from banks. The reason
behind this is the fact that during the financial crisis of 2007–2008, several banks, including the U.S.
investment banks Bear Stearns and Lehman Brothers, suffered a liquidity crisis, due to their over-
reliance on short-term wholesale funding from the interbank lending market.
Apart from these indicators, a new index is introduced, the Leverage Ratio, which is not adjusted for risk (Non-Risk-Based Leverage Ratio).
The index is calculated as follows:

Leverage Ratio = Tier 1 Capital / Total Assets ≥ 3%
The introduction and use of the leverage ratio serves two objectives: to set a floor on the build-up of leverage in the banking sector and to provide an additional safeguard against model risk and measurement error in the risk-based framework. The leverage ratio is designed to be comparable across national jurisdictions, with adjustments of accounting standards to normalize existing differences. In this context, the leverage ratio is the fraction of equity to total non-risk-weighted assets. Its main disadvantage is that as much capital is required as backing for a bond of the US government (considered risk-free) as for a high-risk loan.
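Putting the three new measures side by side, the sketch below computes the LCR, the NSFR and the leverage ratio for a stylized balance sheet (all figures are invented for illustration):

```python
hqla = 120.0                     # high-quality liquid assets, millions
net_outflows_30d = 100.0         # net cash outflows over a 30-day stress
available_stable_funding = 450.0
required_stable_funding = 400.0
tier1 = 30.0
total_assets = 900.0             # total unweighted exposure

lcr = hqla / net_outflows_30d                              # must be >= 100%
nsfr = available_stable_funding / required_stable_funding  # must be >= 100%
leverage = tier1 / total_assets                            # must be >= 3%

print(f"LCR: {lcr:.0%}, NSFR: {nsfr:.0%}, Leverage: {leverage:.1%}")
```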
The main regulatory initiatives of Basel III consist of modifications to the provisions of the framework governing the capital adequacy of international banks (Basel II) that were among the main causes of the crisis. The leverage of the banking system was a product of that regulatory framework, as banks, in order to reduce the cost of its implementation, resorted widely to "regulatory capital arbitrage" techniques, mainly through the use of excessive securitization. Consequently, new provisions were imposed concerning minimum bank equity and banks' coverage of their exposure to credit risk.
Moreover, new rules of micro-prudential regulatory intervention were introduced, due to the fact that the
majority of the banks didn’t have any liquidity reserves:
a) Enforcement of leverage ratio.
b) Imposition of liquidity ratios (the short-term LCR and the long-term NSFR).
c) Tools for monitoring liquidity risk by the supervising authorities.
Additionally, as there were no tools for the prevention of systemic risk in the financial system, new rules of
macro-prudential regulatory intervention were imposed such as:
Capital Conservation Buffer for maintenance purposes
Countercyclical Capital Buffer in exceptional cases.
It is expected that the aforementioned new rules will affect the functioning of banks, but also the overall
economic activity due to the associated costs of implementation.
From all the above we come to the conclusion that the three key elements of the new regulatory framework
are the slightly higher capital ratios, better capital quality and stricter liquidity requirements.
It is clear that banks’ capital requirements will increase dramatically (especially in times of economic
growth), while the application of the provisions on liquidity ratios will lead in some cases to a whole
redefinition of their business model. Therefore, it is obvious that the profit margins of banks in the new
environment will be reduced significantly, as will their return on equity. This cannot be avoided, whatever banks' ability to pass the cost on to the recipients of their services and whatever their capability to cut operating costs.
In general, regarding the regulation of liquidity, the economic activity will also be affected by an increase in
the cost of banking intermediation. In addition, if the required return on equity and the cost of bank lending
are not adjusted to the new framework, then banks will increase lending spreads in order to offset the higher cost of financing.
2.4 New provisions concerning banking capital adequacy
The main changes that the new regulatory framework brought about are summarized as follows:
Provisions on minimum banking capital equity
As mentioned in the previous sections, the most important amendment to the current regulatory framework
of the Basel Committee is related to the capital adequacy of banks and the definition of their regulatory
equity capital.
Provisions for the protection of banks towards their exposure to credit risk
During the recent crisis some banks suffered significant losses from exposures for which no capital
adequacy rules were established. For this reason, the new framework seeks to enhance the coverage of banks
towards their exposure to credit risk from elements of their portfolio (on and off-balance sheet) such as OTC
derivatives, sale and repurchase agreements, loans for the purchase of securities and positions in derivatives
etc.
Furthermore, new provisions were introduced regarding the following:
- When calculating the capital requirements to cover credit risk under the standardized approach, banks are required to assess the credit risk of their exposures, regardless of whether or not a credit assessment by the Credit Rating Agencies (CRAs) exists, and also to check whether the weights applied to such exposures are adequate.
- In order to identify as "eligible" a Credit Rating Agency, national supervisory authorities should monitor
on an ongoing basis if it meets the relevant criteria, taking as a reference the revised Code of IOSCO
(International Organization of Securities Commissions) of 2008 for Credit Rating Agencies.
2.5 Risks underlying the implementation of the new rules
Clearly, the banking environment will be strongly altered by the implementation of the Basel III rules, all the more so since Basel III is just one of several measures of regulatory intervention in the functioning of banks currently underway, in accordance with the above.
This is, of course, the price of safeguarding the stability of the banking system at the global level against the possibility of a major new financial crisis like the recent one. Even if the assertion that the new environment is characterized by a trend towards overregulation is correct, the experience of the recent recession makes the new measures necessary and politically justifiable.
In this context, however, the following major risks arise, the importance of which should not be
underestimated:
a) First of all, the application of the new rules might, at least in some cases, lead to a reduction in the credit
supply provided by banks, with negative effects on the real sector of the economy and growth.
Therefore, it is critical to have accurate and reliable estimates regarding the anticipated impact of the
above factors on the lending activity of banks (particularly the smaller ones and those specialized in
mortgage and consumer loans), both in times of economic growth and recession.
b) Additionally, since the banking system as a whole will have to draw huge amounts of capital from the markets (even over a six-year period), predominantly through the issuance of common shares, the expected lower returns on banks' equity will put them at a competitive disadvantage compared to companies in other sectors of the economy, whose return on capital will remain constant or tend to increase.
At this point, it should be noted that systemically important banks may face an additional (beyond the above) capital requirement of 2% of risk-weighted assets and off-balance-sheet exposures, which should also be covered by their own equity. This implies that the share capital of the large international banks may, in extremis, increase eightfold during the next few years.
Consequently, many banks that are unable to raise the necessary capital from the markets will be forced to comply with the requirements of the new regulatory framework by taking the following actions:
Deleveraging, thereby shrinking their lending activity, and/or
Restructuring, which will lead to greater concentration in the banking sector, without it being obvious what positive synergies this would bring.
Finally, the need for cost reduction provoked by the new regulatory framework may lead to:
a) A new round of "regulatory arbitrage", mainly by shifting activities to parts of the financial system that continue to operate outside regulatory oversight and intervention (the so-called "shadow banking system", which, as mentioned above, was one of the major causes of the crisis), or to countries with loose regulatory and supervisory frameworks, and
b) Financial innovations which may expose banks to risks that have not yet been identified.
This makes it necessary to upgrade the role of the supervisory authorities, so that they can monitor ongoing developments and submit timely proposals for appropriate adjustments to the regulatory framework.
3. Concluding remarks & future implications
Unlike the Basel I and Basel II recommendations, the Basel III prudential rules were mainly issued in response to the recent global financial crisis, which was even worse than previous similar episodes. A number of
enhancements have been proposed to strengthen the Basel accord. New capital, leverage and liquidity
requirements have been proposed by the BCBS to enhance regulation, supervision and risk management of
the banking sector. Basel III, the modified accord, strengthens the three pillars of Basel II, particularly Pillar
1 with enhanced minimum capital and liquidity requirements.
The key changes are summarized as follows:
The modified capital standards and the new capital buffers will require banks to hold more capital
and higher quality of capital. Moreover, the new leverage and liquidity ratios propose a non-risk
based measure to supplement the risk-based minimum capital requirements to ensure that adequate
funding is maintained during stress periods. Additionally, the capital required in respect of counterparty credit risk and market risk has been increased.
The main focus of the Basel accord modifications is to improve the loss-absorption capacity of
banking institutions through stronger capital. Funding and liquidity requirements are intended to
further enhance the short- and long-term stability of the financial system.
To enhance the capital quality of banks, the modified rules require that common equity form a larger core component of banks' capital. The minimum ratio of common equity to risk-weighted assets will go up to 4.5 percent in 2015 from the current requirement of 2 percent. The qualifying elements within Tier 1 capital are common shares, minority interests and retained earnings; from January 2013, other instruments not meeting the criteria for inclusion in common equity will be excluded and phased out over a 10-year horizon.
Two types of capital buffers will be required under Basel III: the Capital Conservation Buffer and
the Countercyclical Capital Buffer. Both will be added on top of common equity requirements.
The Capital Conservation Buffer will be phased in, starting at 0.625 percent of RWAs in 2016 and reaching 2.5 percent of RWAs in 2019. It will need to be met entirely with common equity. The purpose of
this additional capital is to avoid breaching the minimum capital requirements, particularly in periods
of stress. Local supervisors will decide whether and, if so, for how long local institutions can operate
within the buffer range. Similarly, the countercyclical capital buffer will be added on top of the
common equity requirements. However, it can be met with both common equity and other fully loss-absorbing capital. It could be as low as zero, but it could reach up to 2.5 percent of RWAs in periods of excessive credit growth. Together with the capital conservation buffer, the required amount of fully loss-absorbing capital could reach up to 9.5 percent of RWAs.
Additional Tier 1 capital of 1.5 percent of RWAs and Tier 2 capital of 2 percent of RWAs are
required under Basel III, which increase the total capital requirements to
10.5 percent by 2019. Both of these requirements can be met using either common equity or
financial instruments with loss-absorption features and without any incentive to redeem. However,
under Basel III minimum requirements, capital instruments that do not meet the requirements for
inclusion in Tier 1 common equity capital will be excluded as from January 1, 2013, unless certain
conditions are met, in which case they are proposed to be phased out over the following 10 years.
Only instruments issued before 12 September 2010 would qualify for the proposed transition
arrangements. Other existing public sector capital injections would be grandfathered until January 1,
2018.
Under Basel III, a leverage ratio will be reported to supervisors starting in 2013, with disclosure after
2015 and with an objective of making it part of Pillar 1 capital requirements in 2018. It is calculated
as the ratio of Tier 1 capital to the total unweighted assets, including some off-balance sheet assets.
Banks would be required to maintain a leverage ratio of 3 percent or more. The unweighted assets
include provisions, loans, off-balance sheet items with full conversion, and all derivatives. The main
purpose of this ratio is to constrain leverage in the banking sector, while also helping to safeguard
against model risk and measurement errors.
In order to reduce liquidity risk, Basel III rules introduce two new measures, which are the Liquidity
Coverage Ratio (LCR) and the Net Stable Funding Ratio (NSFR).
In principle, after the full implementation of the regulatory framework set by Basel III, the banking system should be well shielded against risk and properly prepared to face threats. Nevertheless, as analyzed above, a series of major risks arise from the application of the new rules, chief among them the economic impact on the real economy and the existence of the "shadow banking system", which remains a constant threat to the global financial system.
As a result, a number of serious questions come to light:
What does the future hold for the global banking system under the newly established regulatory framework?
Can bank stability also bring financial stability?
What is the economic cost provoked by the new regulations?
To what extent is the real economy affected?
What hidden requirements lie beneath these rules?
Will the prudential rules affect bank credit policies?
What will be the interaction between the regulated and unregulated sectors of the economy?
Can financial innovations bypass the regulations?
It is clear that the broad application of the new rules of Basel III will produce a more stable banking system
by the implementation of stricter capital adequacy ratios that aim to ensure liquidity in times of financial
distress. It is therefore in the authorities' hands to monitor financial and banking conditions and to intervene to the extent needed each time, so as to ensure that the regulatory rules are followed and to guarantee the required capital structure while avoiding liquidity traps (Vousinas, 2013).
In conclusion, it is of critical importance for the global financial system to conform to the same rules through commonly targeted policies of central banks on the one side and regulators on the other. The top economies of the world currently have different policies regarding the objectives of their regulatory systems for many reasons, but the financial crisis of our times has underlined the "chain effects" of shocks and proved that combined actions should be undertaken to prevent such situations. Thus, for the economic system to work properly and with minimum risk, the newly established rules must be strictly followed, in such a way that they do not impair the real economy's financing channels or the flexibility of banks' business plans. And, truth be told, if Basel III succeeds in this tough task, it will be Heracles' thirteenth labour.
References
Albertazzi, U., Eramo, G., Gambacorta, L. and Salleo, C. (2011), "Securitization is not that evil after all", BIS Working Papers, No. 341, March.
Aliaga-Díaz, R. and Olivero, M.P. (2012), "Do bank capital requirements amplify business cycles? The gap between theory and empirics", Macroeconomic Dynamics, 16, pp. 358-395.
Angelini, P., Clerc, L., Cúrdia, V., Gambacorta, L., Gerali, A., Locarno, A., Motto, R., Roeger, W., Van den Heuvel, S. and Vlček, J. (2011), "Basel III: Long-term impact on economic performance and fluctuations", Staff Report No. 485, February.
Atkinson, P. and Blundell-Wignall, A. (2010), "Thinking beyond Basel III: Necessary solutions for capital and liquidity", OECD Financial Market Trends, No. 98, Volume 2010/1.
Basel Committee on Banking Supervision (2010), "Basel III: A global regulatory framework for more resilient banks and banking systems", December (rev. June 2011).
Catarineu-Rabell, E., Jackson, P. and Tsomocos, D. (2002), "Procyclicality and the new Basel Accord: banks' choice of loan rating system", paper presented at the conference on the impact of economic slowdowns on financial institutions and their regulators, Federal Reserve Bank of Boston, 17-19 April.
Caruana, J. (2010), "Basel III: towards a safer financial system", speech by Mr Jaime Caruana, General Manager of the Bank for International Settlements, at the 3rd Santander International Banking Conference, Madrid, 15 September.
International Organization of Securities Commissions (IOSCO), Technical Committee (2008), "Code of Conduct Fundamentals for Credit Rating Agencies".
Jordan, J., Peek, J. and Rosengren, E. (2002), "Credit risk modeling and the cyclicality of capital", paper prepared for the conference on changes in risk through time: measurement and policy options, BIS, Basel, March.
Kashyap, A.K. and Stein, J.C. (2003), "Cyclical implications of the Basel II capital standards".
Gu, T. (2011), "Procyclicality of the Basel II credit risk measurements and the improvements in Basel III", Aarhus School of Business, Aarhus University.
Repullo, R. and Suarez, J. (2008), "The procyclical effects of Basel II".
Utzig, S. (2010), "The financial crisis and the regulation of credit rating agencies: A European banking perspective", ADBI Working Paper Series, No. 188.
Sorensen, C.K. and Gutierrez, J.M. (2006), "Euro area banking sector integration: using hierarchical cluster analysis techniques", ECB Working Paper No. 627.
Segoviano, M.A. and Lowe, P. (2002), "Internal ratings, the business cycle and capital requirements: some evidence from an emerging market economy", paper presented at the conference on the impact of economic slowdowns on financial institutions and their regulators, Federal Reserve Bank of Boston, 17-19 April.
Vousinas, G. (2013), "The transmission channels between financial sector and real economy in light of the current financial crisis: a critical survey of the literature", Modern Economy, Vol. 4, No. 4, pp. 248-256.
DIVIDEND PAYOUT AND CORPORATE GOVERNANCE ACROSS THE GREEK LISTED FIRMS
IRAKLIS APERGIS, Athens School of Economics and Business,
SOFIA ELEFTHERIOU, University of Piraeus
ABSTRACT
This paper seeks to test the outcome and substitution agency models of dividends at different stages of the
corporate life-cycle. In a sample of Greek listed firms, the empirical analysis shows that the outcome
model of dividends, which predicts that dividend payout increases in the strength of shareholder rights,
prevails along the corporate life-cycle, but only where creditor rights are strong. Therefore, the agency cost
of equity and debt versions of the outcome model of dividends holds. The findings document no evidence
in support of the substitution model of dividends. Moreover, the results serve to highlight the profound
influence that creditors exert on corporate payout policies. When shareholders enjoy considerable legal rights but creditors do not, creditors demand, and firms consent to, lower dividends.
Keywords: Dividend payout, corporate governance, Greek listed firms
1. INTRODUCTION
The fundamental goal of financial management is to maximize the current value per share of the existing stock. One substantial financial decision affecting this value maximization goal is the dividend policy. In
an early paper, Black (1976) coins the term the ‘dividends puzzle’ to illustrate the poor understanding of
dividend payment policy: “The harder we look at the dividend picture, the more it seems like a puzzle, with
pieces that just don’t fit together.” Over the years, dozens of theories have attempted to explain the
dividends phenomenon with no consensus reached. Many of the theories view agents as rational and
dividends either serve as an efficient way to resolve agency problems or as a signaling device to mitigate
information asymmetry problems. According to La Porta et al. (2000), and Brockman and Unlu (2009), the
strength of the legal rights afforded to the providers of capital to corporations influence the corporate
dividend policy. Moreover, the former relate shareholder rights, measure the corporate dividend payout,
and test two competing agency models of dividends, such as the outcome and substitution models.
Furthermore, the creditor rights influence dividend policies around the world by establishing the balance of
power between debt and equity claimants. Creditors demand and managers consent to a more restrictive
payout policy as a substitute for weak creditor rights in an effort to minimize the firm's agency costs of
debt.
2. LITERATURE ON DIVIDEND POLICIES AND CORPORATE GOVERNANCE
According to Lintner (1956), Lintner (1962), Bhattacharya (1979), and Miller and Rock (1985), corporate dividend policy is designed to reveal the profit-earning prospects of a firm to its investors. Many empirical
studies provide evidence in favor of this model. Fama and Babiak (1968) argue that the firms set their
target dividend level and attempt to stick to it. Furthermore, based on the signaling approach, there may be
interrelations between dividend payout policy and agency costs of the firm (Jensen and Meckling, 1976;
Easterbrook, 1984). Dividend payout policy is an effect of the conflict between the insiders and the
outsiders. Jensen and Meckling (1976), Rozeff (1982), and Easterbrook (1984) favor agency cost explanations for changes in dividend payout, examining whether dividends can act as a method to align managers' interests with those of investors. Accordingly, the firm pays dividends in order to
reduce agency costs, as payment of dividends reduce the discretionary funds available to managers. Jensen
(1986) documents that in the presence of free cash flows, the firm pays dividends or retires its debts to
restrict the agency cost of free cash flow. Kalay (1982) explores a large sample of bond indentures, focusing on the conflict between shareholders and bondholders over the dividend decision.
Lintner's (1956) empirical observation that firms gradually adjust dividends in response to changes in earnings has acquired the status of a stylized fact on corporate dividend policy. His work suggests that managers change dividends in response to unanticipated and non-transitory changes in their firm's earnings, and that they have reasonably well-defined policies in terms of the speed with which they adjust dividends towards a long-run target payout ratio. Subsequent empirical studies, such as Fama and Babiak (1968), have confirmed Lintner's original findings.
Another strand in the literature relates dividend payout to firms' life cycle. In particular, a great number of papers observe that firms that pay dividends tend to be more mature and more predictable. Grullon et al. (2002) argue that firms that increase (decrease) dividends experience a future decline (increase) in their profitability. The authors argue that as firms exhaust their investment opportunities they increase their dividends, and hence dividends signal firm maturity rather than future profitability.
Several papers highlight the link between dividends and idiosyncratic risk. In particular, Venkatesh
(1989) shows that idiosyncratic risk and the informational content of earnings fall, following dividends
initiation. Moreover, Fink et al. (2006) document that dividend-paying firms have lower idiosyncratic
volatility. Furthermore, Bradley et al. (1998) and Chay and Suh (2008) explain the association between
dividends and volatility. Only firms with low cash-flow uncertainty feel comfortable in committing to
paying dividends, an attitude consistent with the conservative managerial views by Lintner (1956) and
Brav et al. (2005). According to Hoberg and Prabhala (2009), the disappearance of dividends (Fama and
French, 2001) is associated with an increase in the idiosyncratic risk.
3. DATA
In this study we examine the relationship between the strength of corporate governance and corporate
dividend policy for manufacturing listed firms in Greece along their corporate life-cycle. To measure the
strength of corporate governance, we follow Mitton (2004) and use the corporate governance scores
developed by Credit Lyonnais Securities Asia (CLSA, 2001). The CLSA governance ratings
range from 0 to 100, with higher values suggesting better corporate governance. We also employ the dividend payout yield, measured as cash dividends paid to common and preferred shareholders relative to earnings. Firm size is measured as total assets and profitability as earnings per share. Moreover, firms' growth is measured through the market capitalization of the listed firms, firms' cash is measured as their cash flows, and total equity is measured as total shareholders' equity, scaled by book
assets. Size and profitability are expected to impact positively on dividend policy. By contrast, high growth
firms typically pay smaller dividends. Finally, the expected relationship between cash and dividend pay-
out is ambiguous. All data are on a daily basis and are sourced from DataStream, spanning the time range
from 2004 to 2014, with the total sample consisting of 15 listed firms from the Athens Stock Exchange.
4. EMPIRICAL ANALYSIS
The empirical model is well described by the following equation:

DIVY_it = α_i + β1 SCORE_it + β2 ASSET_it + β3 MARV_it + β4 EPS_it + β5 CFPS_it + β6 EQ_it + ε_it   (1)

where DIVY_it is the dividend yield, SCORE_it is the corporate governance score for each firm i, ASSET_it is total assets, MARV_it is the market capitalization of the firm, EPS_it is earnings per share, CFPS_it is cash flows per share, and EQ_it is equity. α_i denotes the presence of fixed effects.
We resort to the following first generation unit root tests: the MW test (Maddala and Wu, 1999), the
Choi test (Choi, 2001), the LLC test (Levin et al., 2002) and the IPS test (Im et al., 2003), that are all based
on the assumption of independent cross-section units. The results for the panel unit root tests are provided
in Table 1. They indicate that, with the exception of the SCORE variable, the null hypothesis of a unit root cannot be rejected in levels, whereas it is rejected for all variables in first differences at the 1% significance level.
Table 1. Panel unit root tests
________________________________________________________________________
Variable MW test Choi test LLC test IPS test
________________________________________________________________________
DIVY 4.85 1.19 3.48 -1.39
ΔDIVY -9.81 -10.73 -11.52 -7.64
SCORE -8.39 -9.08 -10.22 -8.52
ASSET 3.18 1.15 3.16 -1.57
ΔASSET -9.04 -11.77 -13.27 -9.24
MARV 3.16 1.27 4.11 -1.58
ΔMARV -9.63 -11.37 -12.19 -8.75
EPS 2.99 1.16 3.53 -1.29
ΔEPS -8.75 -10.62 -13.22 -8.04
CFPS 3.17 1.19 3.25 -1.48
ΔCFPS -9.68 -12.31 -13.46 -9.55
EQ 2.75 1.14 3.48 -1.36
ΔEQ -8.79 -10.62 -12.35 -8.17
________________________________________________________________________
Critical values at the 1%, 5% and 10% significance levels are respectively: MW [7.57, 6.41, 5.41], Choi
[2.33, 1.64, 1.28], LLC [-2.33, -1.64, -1.28], IPS [-2.33, -1.64, -1.28].
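For concreteness, the MW (Fisher-type) test can be sketched as follows: run a unit root test on each cross-section unit, then combine the p-values, since under the null that every series has a unit root, -2 Σ ln(p_i) is chi-squared with 2N degrees of freedom. The sketch below uses simulated series, not the paper's data:

```python
import numpy as np
from scipy import stats
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
panel = [rng.normal(size=120).cumsum() for _ in range(15)]  # 15 random walks

pvals = [adfuller(series)[1] for series in panel]  # ADF p-value per firm
mw_stat = -2 * np.sum(np.log(pvals))
p_value = stats.chi2.sf(mw_stat, df=2 * len(panel))
print(f"MW statistic: {mw_stat:.2f}, p-value: {p_value:.3f}")
```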
Given the panel unit root results in Table 1, the empirical analysis proceeds by estimating equation (1)
through a fixed effect OLS regression. The presence of specific factors in each listed firm can be tested by
the hypothesis that there exist significant individual effects in the estimated regression through a joint
restrictions F test. If the value of the F statistic exceeds the critical value, there is evidence that specific
corporate effects are present in the estimated model. The F test (H0: fixed effects = 0) results suggest that
using the panel data methodology provides relevant information gains, and in this case, the OLS estimation
may generate biased results. As the panel data methodology is the most appropriate, the issue now is to
choose the estimation method for fixed effects (FE) or random effects (RE). In the case, in which the used
data are not random extractions from a larger sample, the fixed effects model is the most appropriate
estimation methodology. Furthermore, in the fixed effects model, the estimator is robust to the omission of
relevant explanatory variables that do not vary over time, and even when the random effects’ approach is
valid, the estimator of fixed effects is consistent, only less efficient. Therefore, the estimation by fixed
effects appears to be the most appropriate for our empirical purposes. Table 2 reports the fixed effects findings.
Table 2. Fixed effects estimates
________________________________________________________________________
Variable Coefficient p-value
________________________________________________________________________
Intercept 0.219 [0.38]
SCORE 0.928*** [0.00]
ΔASSET 0.672* [0.07]
ΔMARV -2.85** [0.04]
ΔEPS 0.455** [0.05]
ΔCFPS 1.247*** [0.01]
ΔEQ 2.108** [0.02]
Diagnostics
R2-adjusted 0.64
Hausman test [0.00]
No. of firms 15
________________________________________________________________________
Figures in parentheses denote p-values. *, ** and *** denote statistical significance at the 10%, 5% and
1% level, respectively.
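The fixed effects regression of equation (1) can be reproduced in outline with the linearmodels package. The sketch below is a minimal illustration; the input file and its contents are hypothetical stand-ins for the paper's DataStream panel (a firm-date MultiIndex, variables as in Table 1, all regressors except SCORE already first-differenced):

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical input file; the real data come from DataStream.
df = pd.read_csv("greek_firms_panel.csv",
                 index_col=["firm", "date"], parse_dates=["date"])

# Equation (1) with firm (entity) fixed effects.
model = PanelOLS.from_formula(
    "DIVY ~ 1 + SCORE + ASSET + MARV + EPS + CFPS + EQ + EntityEffects",
    data=df)
print(model.fit().summary)
```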
The findings presented in Table 2 are in line with Mitton (2004) and provide support in favor of the
outcome model of dividends. The coefficient estimate on the corporate governance variable (SCORE) is
positive and statistically different from zero. Its value turns out to be 0.928 (p < .01). This coefficient estimate implies that a one percent change in corporate governance changes dividend payout by 0.93 percentage points.
The firm-level control variables are of the correct sign. Large (ΔASSET) and profitable (ΔCFPS)
firms pay higher dividends. Growth (ΔMARV) firms tend to pay lower dividends. Furthermore, and
consistent with the life-cycle model of dividends, dividend payout increases with corporate maturity i.e.
when earnings per share (ΔEPS) increases.
Overall, the findings are consistent with Mitton (2004) and provide support for the outcome model
of dividends. Shareholders use their legal rights, in this instance measured at the firm-level, to extract large
dividends from firms. All else equal, dividend payouts are greater in better governed firms.
The reported Hausman specification test has been used to determine which of the two alternative panel estimation methods (the fixed effects model or the random effects model) is appropriate. In this test, the null hypothesis H0 is that random effects are appropriate, while the alternative H1 is that they are not. The results in Table 2 show that H0 is rejected at the 1% significance level; thus the individual effects in the dividend yield model are fixed rather than random. In other words, the H1 hypothesis holds, according to which the fixed effects model is preferable to its random effects counterpart.
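Since the Hausman statistic compares the two estimators directly, it can be computed by hand as H = (b_FE - b_RE)' [Var(b_FE) - Var(b_RE)]^{-1} (b_FE - b_RE), chi-squared with degrees of freedom equal to the number of common slope coefficients. A minimal sketch, continuing the hypothetical df above:

```python
import numpy as np
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

formula = "DIVY ~ 1 + SCORE + ASSET + MARV + EPS + CFPS + EQ"
fe = PanelOLS.from_formula(formula + " + EntityEffects", data=df).fit()
re = RandomEffects.from_formula(formula, data=df).fit()

# Compare the slope coefficients the two models share (drop the intercept).
common = [c for c in fe.params.index
          if c in re.params.index and c != "Intercept"]
diff = fe.params[common] - re.params[common]
cov_diff = fe.cov.loc[common, common] - re.cov.loc[common, common]
h_stat = float(diff.T @ np.linalg.inv(cov_diff) @ diff)
print(f"Hausman H = {h_stat:.2f}, "
      f"p = {stats.chi2.sf(h_stat, len(common)):.3f}")
```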
5. CONCLUSION
This paper tested the outcome and substitution models of dividends, proposed by La Porta et al. (2000), along the corporate life-cycle. In particular, it tested the hypothesis that the ability of firms to pay higher dividends, either as an outcome of strong governance or as a substitute for weak governance, is contingent on strong creditor rights.
Using a sample of 15 firms from the Greek stock market, the analysis provides supportive evidence that the outcome model holds along the corporate life-cycle. In other words, at all stages along the
corporate life-cycle, better-governed firms pay larger dividends than their poorly-governed counterparts. It
also showed that they can only do so where creditor rights are strong. These findings are in line with those
of Brockman and Unlu (2009), Shao et al. (2009) and Byrne and O’Connor (2012) which show that the
agency cost of equity and debt version of the outcome model of dividends holds, i.e. dividend payouts are
largest where shareholder and creditor rights are strong.
REFERENCES
Black, F. (1976) The Dividend Puzzle, Journal of Portfolio Management, 2, 5-8.
Bradley, M., D.R. Capozza, P.J. Seguin. 1998. Dividend policy and cash-flow uncertainty. Real Estate
Economics 26, 555-580.
Brav, A., J.R. Graham, C.R. Harvey, R. Michaely. 2005. Payout policy in the 21st century. Journal of
Financial Economics 77, 483-527.
Brockman P, E. Unlu. 2009. Dividend policy, creditor rights, and the agency costs of debt. Journal of
Financial Economics 92, 276-299.
Brockman P, E. Unlu. 2011. Earned/contributed capital, dividend policy, and disclosure quality: an
international study. Journal of Banking & Finance 35, 1610-1625.
Byrne, J., T. O’Connor. 2012. Creditor rights and the outcome model of dividends. The Quarterly Review
of Economics and Finance 52, 227-242.
Chay, J.B., J. Suh. 2008. Payout policy and cash-flow uncertainty. Journal of Financial Economics 93, 88-
107.
Easterbrook, F. 1984. Two agency cost explanations of dividends. American Economic Review 74, 650-
659.
Grullon, G., R. Michaely, B. Swaminathan. 2002. Are dividend changes a sign of firm maturity? Journal of
Business 75, 387-424.
Fama, E., H. Babiak. 1968. Dividend policy: an empirical analysis. American Statistical Association
Journal 63, 1132-1161.
Fama, E.F., K.R. French. 2001. Disappearing dividends: Changing firm characteristics or lower propensity
to pay? Journal of Financial Economics 60, 3-43.
Fink, J., K.E. Fink, G. Grullon, J.P. Weston. 2006. Firm age and fluctuations in idiosyncratic risk. Working
Paper, James Madison University and Rice University.
Hoberg, G., N.R. Prabhala. 2009. Disappearing dividends, catering, and risk. Review of Financial Studies
22, 79-116.
La Porta, R., F. Lopez-de-Silanes, A. Shleifer, R. Vishny. 2000. Investor protection and corporate
governance. Journal of Financial Economics 58, 3-27.
Lintner, J. 1956. Distribution of income of corporations among dividends, retained earnings and taxes.
American Economic Review 46, 97-113.
Mitton, T. 2004. Corporate governance and dividend policy in emerging markets. Emerging Markets
Review 5, 409-426.
Shao, L., C. Kwok, O. Guedhami. 2009. Dividend policy: balancing interests between shareholders and
creditors. Working Paper, University of South Carolina.
Venkatesh, P.C. 1989. The impact of dividend initiation on the information content of earnings
announcements and returns volatility. Journal of Business 62, 175-197.
A STATISTICAL STUDY ON THE EQUITY MUTUAL FUNDS IN GREECE
MARIA PANTA
Abstract
The present paper deals with the evaluation of the performance of the Greek equity mutual funds market.
The data on which the empirical study is based are monthly and cover the period 03/01/2008 to 31/12/2014.
The paper examines the mutual funds’ selectivity and market timing abilities, in accordance with the
standard Capital Asset Pricing Model. We then evaluate their performance in order to draw useful
conclusions.
1. Introduction
The Capital Asset Pricing Model is an extension of portfolio theory as originally expounded by Markowitz
[7]. It describes the relationship between a capital asset’s expected return and its level of risk (beta
coefficient); it was developed by W. Sharpe [10] and Jan Mossin [8].
Several studies have historically been conducted on mutual fund performance [1], [2], [4], [5], [9]. In
1966, Sharpe [10], author of one of those studies, examined the risk-adjusted performance of 34 mutual
funds over the period 1954-1963 and showed that 19 out of the 34 mutual funds delivered a higher
performance than the market portfolio. Sharpe’s study maintains that the market is efficient and that
competent fund managers are able to diversify their portfolios accordingly, while assessing risks properly,
ensuring in that way a good performance. Following Sharpe’s line of thought, Jensen [4] studied 115
mutual funds between 1945 and 1964. Taking transaction costs into consideration, he found that only
43 out of 115 portfolios outperformed the market index. Friend, Blume and Crockett [3] measured 136
mutual funds for the period from 1960 to 1968 and concluded that mutual funds did not outperform
uniformly-distributed random portfolios.
2. Methodology
The method was applied to the monthly data of 13 Greek domestic mutual funds over the period from
03/01/2008 to 31/12/2014. The monthly performances of both the mutual funds and the General Index of
the Athens Stock Exchange have been taken into account in order to calculate the returns via the formula

Y_{i,t} = \log \frac{P_{i,t}}{P_{i,t-1}}

where P_{i,t} is the value of the equity mutual fund i at the end of the time period (month) t and P_{i,t-1}
the value at the end of the time period t-1.
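As an illustration of this return computation, the following sketch (with hypothetical month-end fund values) calculates the monthly logarithmic returns:

```python
import numpy as np
import pandas as pd

# Hypothetical month-end values P_{i,t} of one equity mutual fund
prices = pd.Series([3.10, 3.05, 2.90, 2.95, 3.02])

# Monthly logarithmic returns: Y_{i,t} = log(P_{i,t} / P_{i,t-1})
returns = np.log(prices / prices.shift(1)).dropna()
print(returns)
```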
Let us consider the market model for a mutual fund:

Y_t = a + b X_t + u_t, for each t = 1, 2, ..., n

where:
Y_t : the excess return of the equity mutual fund,
X_t : the excess return of the market,
a : the expected (average) excess return of the mutual fund when the net (average) return of the General Index is zero,
b : the systematic risk of the equity mutual fund i,
u_t : the disturbance term.
If α > 0, then the mutual fund has achieved a higher return than the General Index and, as a result, is
preferable to a mutual fund with a zero α. In that case, we say that the mutual fund exhibits
selectivity, meaning that its manager has succeeded in combining shares into a portfolio in such a way that
he achieves a higher performance than the market.
A good mutual fund is one which not only exhibits selectivity, but also has what we call market timing
ability. It is well known that when the market is bullish, i.e. when X_t > 0, most stock prices are going up. It
is expected in such a case that the mutual fund manager will tend to be more exposed to risk and will buy
stocks. When the market is bearish, most stock prices are falling, and it is logical to expect a
minor exposure to risk on the part of the fund manager, who is more likely to buy CDs or to keep some of his
money in cash. Consequently, one should expect a positive relationship between risk exposure,
as expressed by β, and the market situation.
This means, to begin with, that β cannot remain unaltered through time; in other words, we should have a
model of the form:

Y_t = a + \beta_t X_t + u_t.

If we assume that the relationship between β and the market situation is linear, then we shall have the
equation:

\beta_t = \beta + k X_t, with k > 0,

and the model above takes the form of a second-degree polynomial:

Y_t = a + \beta X_t + k X_t^2 + u_t.
So, what we arrive at is a second-degree equation. Estimating this equation with the least squares
method allows us to examine both selectivity, i.e. α > 0, and market timing ability, i.e. k > 0. In Table
1, one can see the coefficient estimates for each mutual fund, shown in columns, the t values (t-statistics),
presented between brackets, as well as the values of the coefficient of determination R^2 and the values of
the Durbin-Watson statistic (autocorrelation test).
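The following Python sketch illustrates how such a second-degree model can be estimated by ordinary least squares and how the t-statistics, R^2 and Durbin-Watson values reported in Table 1 can be obtained; the simulated excess returns and the coefficient values in the example are purely illustrative.

```python
import numpy as np

def timing_regression(y, x):
    """OLS estimation of Y_t = a + b*X_t + k*X_t^2 + u_t.

    Returns the coefficients (a, b, k), their t-statistics,
    the coefficient of determination R^2 and the Durbin-Watson statistic.
    """
    X = np.column_stack([np.ones_like(x), x, x**2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, p = X.shape
    s2 = resid @ resid / (n - p)                        # residual variance
    se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))  # standard errors
    t_stats = beta / se
    r2 = 1 - (resid @ resid) / np.sum((y - y.mean())**2)
    dw = np.sum(np.diff(resid)**2) / (resid @ resid)    # Durbin-Watson
    return beta, t_stats, r2, dw

# Illustrative use with simulated excess returns (84 months):
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.05, 84)
y = 0.001 + 0.9 * x + 0.3 * x**2 + rng.normal(0.0, 0.01, 84)
beta, t_stats, r2, dw = timing_regression(y, x)
# selectivity if t_stats[0] > 1.96 (alpha); market timing if t_stats[2] > 1.96 (k)
```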
3. Results
We can notice in the table below (Table 1) that no equity mutual fund exhibits selectivity, because the
absolute values of the t-statistics do not exceed 1.96 at the 0.05 level of significance. In addition, market
timing was achieved only for two mutual funds (A7, A9), since the absolute values of their t-statistics
exceed 1.96 at the 0.05 level of significance. The timing ratio is thus below 20% (2 of the 13 funds). We
also notice that the R^2 coefficient is quite high, which means that the model explains the data to a great
extent. Moreover, the Durbin-Watson statistic is near the value of 2, which implies that we do not have any
autocorrelation in the residuals.
Table 1
Mutual fund   Selectivity Test (α)   Market Timing Test (k)   Durbin-Watson   R²
A1            -0,002555 (1,11)        0,28229 (1,5)            1,91            0,91
A2             0,000914 (1,38)       -0,0597 (0,75)            2,17            0,98
A3            -0,002544 (0,92)        0,4929 (1,8)             1,74            0,88
A4            -0,001407 (0,49)        0,2567 (0,56)            1,78            0,84
A5            -0,001033 (0,59)       -0,081 (0,46)             1,85            0,95
A6            -0,001055 (0,35)       -0,33 (1,03)              2,08            0,82
A7             0,000214 (0,22)        0,2588 (2,78)            1,99            0,97
A8            -0,017314 (3,06)        0,3544 (0,57)            2,45            0,66
A9             0,001746 (1,21)        0,677 (4,8)              2,08            0,95
A10            0,002455 (0,84)       -0,0553 (0,19)            1,88            0,79
A11           -0,001737 (1,23)       -0,27 (2,07)              2,07            0,95
A12            0,000674 (0,53)       -0,18 (1,62)              1,95            0,96
A13           -0,002381 (1,51)        0,27 (1,92)              2,07            0,92
We notice that the General Index of the Athens Stock Exchange ranks second according to the Sharpe and
Treynor indexes. This means that the mutual funds (except A2) have a lower rate of return in relation to the
risk, compared with the market portfolio represented by the General Index.
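For illustration, the Sharpe and Treynor indexes referred to above can be computed as in the following minimal sketch; the return series and beta shown are hypothetical.

```python
import numpy as np

def sharpe_ratio(excess_returns):
    # mean excess return per unit of total risk (standard deviation)
    return excess_returns.mean() / excess_returns.std(ddof=1)

def treynor_ratio(excess_returns, beta):
    # mean excess return per unit of systematic risk (beta)
    return excess_returns.mean() / beta

# Hypothetical monthly excess returns of a fund and an estimated beta
fund = np.array([0.010, -0.020, 0.015, 0.005, -0.010])
print(sharpe_ratio(fund), treynor_ratio(fund, beta=0.9))
```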
4. Conclusions
Based on the results, it is safe to say that the evaluation of mutual funds is a quite complex procedure that
depends on many variables. In this paper, we have observed that the mutual funds did not exhibit selectivity,
while presenting a rather low market timing ability, during the time period from 03/01/2008 to 31/12/2014.
We have also used simple methods that are easily understood and applied by investors. Far more elaborate
methods have been developed in the international literature, which make use of more variables and tend to
achieve more precise results. It is therefore necessary for mutual fund asset management to become
more substantial, in order to provide a more attractive portfolio performance than the standard market
return, with the ultimate aim of attracting new domestic and foreign investors.
Bibliography
[1] Edelen, Roger M. and Warner, Jerold B., (2001), Aggregate Price Effects of Institutional Trading: A
Study of Mutual Fund Flow and Market Returns, Journal of Financial Economics, 59(2), pp. 195-220.
[2] Dahlquist Magnus, Engstrom Stefan and Soderlind Paul, (2000), Performance and Characteristics of
Swedish Mutual Funds, Journal of Financial and Quantitative Analysis, 35(3), pp. 409-23.
[3] Friend, I., Blume, M. and Crockett, J., (1970), Mutual Funds and Other Institutional Investors: A New
Perspective, McGraw-Hill Book Co, New York.
[4] Jensen, Michael C., (1968), The performance of mutual funds in the period 1945-1964, Journal of
Finance 23, 389-416.
[5] Kaimakamis, George. "A note on the performance of Mutual Equity Funds." Communications in
Mathematical Finance 3.1 (2014): 31-37.
[6] Koulis, Alexandros, et al. "An Assessment of the Performance of Greek Mutual Equity Funds
Selectivity and Market Timing." Applied Mathematical Sciences 5.4 (2011): 159-171.
[7] Markowitz, H., Portfolio Selection, The Journal of Finance, 7(1), (1952), 77-91.
[8] J. Mossin, Equilibrium in a Capital Market, Econometrica, 34 (1966), 768-783.
[9] N. Philippas and C. Psoma, Equity Mutual Fund Managers Performance in Greece, Journal of
Managerial Finance, 27(6), (2001), 68-74.
[10]Sharpe, William F., (1966). Mutual Fund Performance. Journal of Business 39, Part 2: pp 119-138.
CREATING A PARTIAL SUPPLY CHAIN REFERENCE MODEL FOR THE ENERGY INDUSTRY
SOTIRIS P. GAYIALIS, National Technical University of Athens
DIMITRIOS-ROBERT I. STAMATIOU, National Technical University of Athens
STAVROS T. PONIS, National Technical University of Athens
NIKOLAOS A. PANAYIOTOU, National Technical University of Athens
ABSTRACT
Contemporary supply chains are undoubtedly very complex and involve many autonomous organisations
with a variety of business processes and a series of decisions and risks which affect the variability of
demand. The energy supply chain is no different, even though it differs considerably from
manufacturing supply chains. In this paper, the methodology of creating a process reference model for the
energy supply chain is described. The generic reference model “SC REMEDY” (Supply Chain Reference
Model for Managing Demand Variability), has been created through the research project “ODYSSEUS”. A
hybrid top-down and bottom-up methodology was followed, through which the partial reference model
was created. The generic SC REMEDY model was adopted for the top-down approach. Three case studies
of companies participating in different tiers of the energy supply chain were examined and their specific
supply chain processes were analysed. Bearing in mind the literature on energy supply chain management
and through the collation of the REMEDY processes and company specific processes, a partial reference
model was created for the energy industry. This partial reference model focusses on demand variability
management, benefits from the use of the decision, knowledge, IT and risk views of the generic reference
model and its new characteristics and particularities are described in detail. The outcome of this study is
both the validation of the SC REMEDY generic reference model through its instantiation to a partial model
and the creation of a reference model which can be applied in particular energy supply chains.
KEYWORDS
Energy, Process reference model, Process modelling, REMEDY model, Supply chain
INTRODUCTION
Supply chain management (SCM) is recognized as the integration of key business processes across the
supply chain (Croxton et al. 2001). The supply chain is not just a chain of businesses with one-to-one,
business-to-business relationships, but a network of businesses and relationships. SCM deals with the
synergy of intra- and inter-company integration and eventually refers to total business process excellence
and represents a new way of managing the business and relationships between the members of the supply
chain (Lambert and Knemeyer 2007).
Supply chain performance can be improved through supply chain integration, and to that purpose,
reference models provide the necessary standardized processes that can support the construction of links
and relationships between companies participating in the network (Ponis et al. 2015). Reference models
are generic conceptual models that formalize recommended practices for a certain domain (Rosemann and
van der Aalst 2007). Reference models provide extended processes for optimal performance. The main
characteristics of these models are their reusability, their adaptation of best practices and their universal
applicability. These characteristics apply to generic reference models as well as to partial reference
models.
Although there are several research efforts for modeling supply chain processes, only a few reference
models for the supply chain have been identified in the literature (Gayialis et al. 2015a). The supply chain
REMEDY model (Reference Model for Managing Demand Variability) was introduced in the work of
Ponis et al. (2013) as a generic reference model in the form of a business process repository of process
models and process-related information in a dynamic and easy to reuse format. It deals with the
management of demand variability in the supply chain. The REMEDY model can be used for business
process design or redesign and for business process improvement, serving as a benchmark for best practice
analysis. The REMEDY model is generic enough to be applied in a variety of supply chains. Nevertheless,
the instantiation of a generic reference model into a set of partial reference models could make the
adaptation of the reference model to a specific case easier and more effective.
The creation of the reference model is based on a hybrid top-down and bottom-up methodological
approach presented in the work of Gayialis et al. (2013). According to this approach, reference model
creation integrates knowledge both from other well-established supply chain reference models, like SCOR
(Supply Chain Operations Reference Model), GSCF (Global Supply Chain Framework) and SAP business
models, and from the supply chain business processes of real-life companies. Reference model instantiation
is the final step of the methodological approach, in order to validate its outcomes and the reference model
itself. In addition, the creation of partial reference models makes the REMEDY model more useful and
easily applicable in a diversity of supply chains, as there are more specific versions of the generic model.
Thus, the generic supply chain reference model for demand management is set up as a configurable model
that enables rapid instantiation of specific supply chain configurations for various industries. The approach
followed for the development of the REMEDY generic reference model, as well as the development of
partial reference models using the knowledge transferred from the specific supply chain processes of a
diversity of case studies, is presented in Figure 2.
This paper presents the development of a partial reference model for the supply chain of the energy sector
which is achieved through the specialization of the REMEDY generic reference model. After this
introductory section, a short literature review of the energy supply chain and its characteristics follows.
Then, a short description of the REMEDY model is outlined and the methodological approach for the
development of the partial reference model for the energy sector is defined. Finally, the conclusions and
further research issues are summarized.
ENERGY SUPPLY CHAIN
The energy supply chain is considered of great importance to the global economy, as the continuous flow
of oil and gas is important to the economic health of both developing and developed economies (Enyinda
et al. 2011). The energy industry can be described as a typical supply chain where strategic, tactical, and
operational decisions may arise. Management of the energy supply chain is a complex task due to the
large size of the physical supply network, which is dispersed over a vast geography, complex production
operations, and inherent uncertainty (Saad et al. 2014). More specifically, Shah et al. (2011) stated that
uncertainty in the petroleum supply chain arises in realistic decision making processes and has a huge
impact on refinery planning activities. Three major uncertainties that should be considered in refinery
production planning are: market demand for products; prices of crude oil and the saleable products;
and product or production yields of crude oil from chemical reactions in the primary crude distillation unit.
The oil and gas industries, producing and distributing petroleum and natural gas products, are the most
significant in the energy supply chain. The oil and gas supply chain is also known to be very complex
compared to those of other industries (Hussain et al. 2006). This supply chain is divided into upstream and
downstream operations (before and after the refining stage). Crude oil has to go through a complex, capital-
intensive refinery process, and the transportation of petroleum products involves various means of transport
such as ships, pipelines, rail and road, often leading to high transportation costs (Ribas et al. 2011).
Natural gas presents both differences and similarities in upstream and downstream operations relative to
petroleum products. In the upstream stages, exploring for and extracting crude oil and natural gas are similar,
but there are no refining operations for natural gas. In downstream operations, the transportation of natural gas
includes mainly pipelines from its source to the end customers. In addition, ships and trucks are used only
for natural gas liquids. A detailed representation of oil and natural gas supply chains is given by the
American Petroleum Institute (API 2013a page 3; API 2013b page 5).
Oil and gas companies regard their supply chain configuration and coordination systems as worthy of
improvement. Making the necessary improvements over time allows companies to gain competitive
advantages in the marketplace. Any disruption arising in the global supply chain can have remarkable
adverse effects on operational efficiency, quality, profitability, and customer
satisfaction (Saad et al. 2014). Such adverse events can happen due to uncertainty in the supply of crude,
demand, transportation, market volatility, and political conditions. Therefore, Shah et al. (2011) identify
that in order to effectively model a supply-chain design problem, the dynamics of the supply chain must be
considered, while Enyinda et al. (2011) recognize that critical decisions and risks must be modelled and
analysed too.
REMEDY PROCESS REFERENCE MODEL
As described in previous publications (Gayialis et al. 2013; Ponis et al. 2013; Ponis et al. 2015), the
REMEDY process reference model was created through the research program “ODYSSEUS” (A Holistic
Approach For Managing Variability In Contemporary Global Supply Chain Networks). It aims to provide
a reusable process framework for companies in different industries to organize their supply chain processes
in an effort to manage demand variability and its effects on their supply chain. The reference model
attempts to portray processes and decisions that presuppose the participation of more than one supply chain
actor and extends to the interactions with customers and suppliers. It provides a high level of abstraction
that allows for the model’s application, after the required adaptation, in various industries and
organisations. It consists of a combination of diagrammatic techniques, mathematical modelling methods
and verbal descriptions, and takes into consideration the interactions of the selected organisation with its
suppliers and customers. The diagrammatic techniques used cover multiple views, namely: process,
organisational, risk, decision and IT views. Mathematical modelling supports decision making in make-
or-buy situations and information sharing. Verbal descriptions support the modelling techniques, selected
based on the latest literature, and analyse the produced models in order to provide easy comprehension to
all users.
The diagrammatic technique portrays nine functions of a supply chain through a value chain. Each element
of the value chain is analysed into a collection of its functions, both strategic and operational, through a
function tree. All functions are analysed through extended Event-Driven Process Chains (eEPCs) that
depict high-level process flows. These process flows are supported by elements such as related risks,
decisions, organisational actors or IT systems, and their relationships are described via numerous Function
Allocation Diagrams (FADs). Supportive diagrams depict collections of these elements. More specifically,
risk trees group the types of risks depicted in the model; decision trees group related decisions;
organisational charts depict the organisational structures implicated in the management of the supply
chain; IT trees show the hierarchy of supportive systems. In the following table, some statistics on the
diagrammatic techniques are presented.
Table 1: Generic reference model diagram statistics
Modelling technique                     Number of diagrams
Value Chain                             1
Function Tree                           9
Extended Event-Driven Process Chain     92
Risk Tree                               14
Decision Tree                           9
Organisational Chart                    3
Application System Diagram              1
The relationships of the selected modelling techniques can be seen in the following figure (Figure 1). The
modelling techniques selected allow users to implement additional modelling methods, such as knowledge
diagrams or decision flow models, in order to depict additional and company specific information.
Figure 1: Integrated modeling methods of the reference model (Gayialis et al. 2015b)
The generic REMEDY reference model describes nine value adding functions of contemporary supply
chains. These are nine functions that group strategic and operational supply chain processes into
corresponding function trees. The functions are: Define Supply Chain Strategies, Customer Relationship
and Service Management, Product Development and Commercialisation, Supplier Relationship
Management, Develop KPI Framework, Demand Management, Order Fulfilment, Manufacturing Flow
Management, and Returns Management (Ponis et al. 2013).
The generic reference model has been adapted to three industries thus far, with the use of different
methods, and has proven its versatility. The aforementioned industries are discrete manufacturing,
construction and energy. The latter is described in this publication. These adaptations have provided
corresponding partial reference models. The partial models, while still abstract, are specific to the
industries they describe and take into consideration many particularities of these industries.
METHODOLOGY
In order to develop the partial reference model for the energy industry, the hybrid top-down and bottom-up
methodology described in Gayialis et al. (2013) was selected. In a top-down bottom-up methodology, a
generic model is selected and contrasted with as many cases as possible from a specific industry.
According to this methodology, reference model creation integrates best practices embedded in other well-
established supply chain reference models, like SCOR, GSCF and SAP business models, and the lessons
learned during supply chain business process modeling efforts in various case studies (Figure 2). In
order to instantiate the generic reference model into a set of partial reference models for different industry
sectors, the detailed business process models of a specific industry sector were integrated into the
REMEDY generic model.
Figure 2: The methodological approach for generic and partial reference models development
The REMEDY generic reference model served for the top-down approach in order to create the partial
supply chain reference model of the energy industry sector. Three case studies were performed in order to
discover supply chain processes in companies of the energy sector. More specifically, for the bottom-up
study, three companies were studied: a natural gas company operating in downstream stages, a petroleum
company operating in sales and distribution, and an oil refinery. These companies were
selected because of their different positions in the energy supply chain, in an attempt to create a model as
accurate as possible for the energy sector. These case studies had been performed for past research
projects and their complete process documentation was available to the research team. The resulting model
was described, through the ARIS methodology (Scheer 1999), using the same diagrammatic techniques
used in the generic reference model. The mathematical models were not examined in this study and the
verbal descriptions were updated in order to describe the new model.
PARTIAL REFERENCE MODEL FOR THE ENERGY SECTOR
The partial reference model created through the aforementioned methodology is focused, as expected, on
the particularities of the energy sector’s supply chain. The differentiation starts from the highest level, the
value chain, where two of the entities have been replaced. Compared to the generic reference model’s
value chain, the “Returns management” group of processes has been replaced by “Claims management”
and the “Manufacturing flow management” group of processes has been replaced by “Supply, inventory
and production flow management”. The rest of the value chain entities remain unaltered (Figure 3).
Figure 3: Partial reference model value chain for energy industry
Although most of the value chain entities remain unchanged, significant changes occur at lower levels of
modelling, starting with function trees and moving through all the modelling techniques in use (eEPCs,
FADs, organisational charts, etc.). Some of the basic characteristics of the partial reference model are
analysed in the remainder of this section.
As mentioned before, the energy market is a highly fluctuating market. This leads to the need for strategies
that adapt to this parameter quickly and easily. Selecting the appropriate strategies for all supply chain
functions is extremely important. The ‘Determine Supply Chain Management Strategies’ function tree is
comprised of eight strategic/long-term functions, each corresponding to one of the other functions in the
value chain, and no operational functions. These functions describe the strategy definition and decision
making processes for each supply chain function and are very important for the success of the entire supply
chain strategy and, in essence, the success of an organisation in the energy market.
Customers in the energy sector can be separated into B2B and B2C. It is often the case that B2B customers
have different contracts with the company and that quantities are bought in bulk. In both cases, customers
and company are bound by highly intricate contracts. In case the contract terms are breached by either
party, the other is entitled to claims. The ‘Customer Relationship and Service Management’ function tree is
comprised of five strategic/long-term functions that dictate how customer relationships and customer
service will be managed and eleven operational/short-term functions that describe day-to-day processes and
their performance measurement. This group of processes is particularly important since it provides the
organisation with a direct window to demand in the next supply chain tier.
Although the energy industry is one where products are, in general, specific and unchanging, one must not
forget service as a product trait. New products may not cause disruptions to the supply chain of an
energy company, but new services may affect the supply chain just as much as innovative products disrupt
manufacturing supply chains. The ‘Product Development and Commercialisation’ function tree contains
five strategic/long-term functions and eight operational/short-term functions. The first group of functions
describes the strategies behind how product development ideas are selected, developed and forwarded to
the market. The latter group describes the executive side of these strategies and includes performance
measurement for all processes in the function.
In the energy industry, suppliers play an important role in the product’s final price and availability. This
means that relationships with suppliers should be handled skilfully. The ‘Supplier Relationship
Management’ function tree includes four strategic/long-term functions and seven operational/short-term
functions. Strategic/long-term functions provide guidelines for supplier selection, contract management
and strategic partnerships. The operational/short-term functions describe the day to day processes executed
with regard to suppliers and measure performance of all the processes executed in this group of functions.
The ‘Develop Framework of Metrics’ function is more of a supportive function than a strictly strategic
one. Despite this, the framework of metrics should be selected and constructed carefully, since the
information it provides to a company is crucial in order to select strategies or investigate cases of low-
performing processes. Its function tree is comprised of eight strategic/long-term functions, each
corresponding to one of the other eight entities of the value chain. A good framework of metrics should
allow for information flow between functions and between different levels of organisational hierarchy.
‘Demand Management’, just as in the generic reference model, is the core function of the partial reference
model. It is the first operational function in the value chain and its function tree includes four
strategic/long-term functions and five operational/short-term functions. The strategic/long-term functions
relate to information flow planning, forecasting method selection and error management guidelines. The
operational/short-term functions relate to the forecasting process itself, the collection of data, the
synchronisation of the forecast to the real demand, and measuring the performance of all related processes.
Good forecasts are important in the energy industry since they dictate contract strategies and claims
arrangements.
Following Demand Management, the ‘Order Fulfilment’ function is the second operational function. It acts
as a window towards demand since it relates to orders from the point of entry to the collection of feedback
after delivery. Three strategic/long-term and six operational/short-term functions comprise its function
tree. Strategic/long-term functions relate to evaluating the distribution network capacity, designing order
fulfilment processes and analysing order fulfilment requirements. Operational/short-term processes relate
to the sequence of actions executed from the moment an order is received to the moment it is delivered,
and the performance measurement of all related processes.
The case studies performed provided information regarding a new function to be introduced, namely
“Supply, inventory and production flow management”. As mentioned above, this function replaces the
“Manufacturing flow management” function of the generic reference model. In essence, it is a
specialisation of the generic function that has been enriched with new processes and de-cluttered from
unrelated processes. It became apparent that energy companies, in some cases, categorise supply and
inventory management as embedded in or related to manufacturing. The fact is that traditional
manufacturing, as comprehended through the generic reference model (flow of materials and parts towards
assembly), simply does not exist in the energy sector. The new function includes seven processes in total:
three strategic/long-term and four operational/short-term processes. On one hand, strategic processes aim
to define production and inventory management strategies and produce guidelines for their
implementation. On the other hand, operational processes deal with the actualisation of the selected
strategies and include fields of differentiation between the natural gas and petroleum products. All
processes are measured and evaluated through the KPI framework.
Figure 4: Function tree of the group of processes: “Supply, Inventory and Production Flow Management”
Through the case studies it became apparent that energy companies do not, or practically cannot, return
products they have been supplied with. In the case of the natural gas company, gas runs through
international pipelines and is consumed at the point of the end user. In the other two cases, quality control
checks indicate whether the products will be sold to primary or secondary markets, but returns are non-
existent. Thus, the returns function was replaced by the claims management function, as the new function
has a similar impact on the energy supply chain as returns have on a manufacturing supply chain. Claims
may occur for numerous reasons, for example: inability of a client to meet his side of the contract; financial
restrictions; erroneous client forecasting. The claims management function includes a total of six
processes: two strategic/long-term and four operational/short-term processes. The strategic processes aim
to produce guidelines for the avoidance of claims, in an attempt to minimise the effects of a claim, and to
determine policies for debit/credit reimbursement. Operational processes, on the other hand, deal with the
whole range of tasks to be performed in order to either go through with claim management or to reject
a claim request, and with evaluating the processes followed through a carefully designed KPI (Key
Performance Indicator) framework.
Figure 5: Part of the eEPC of the process: “Determine gas and petroleum strategy boundaries”
The partial reference model differs from the generic reference model, beyond the functions mentioned
above, in a large number of processes. From minor tweaks to significant modifications, alterations were
performed on a total of forty processes (out of ninety-two processes in the generic reference model). These
changes extend from minor verbal alterations that make the model specific to the energy industry,
to the deletion of whole functions or the insertion of new ones. In some cases, exclusive gateways were
added in order to signal the different process flows followed depending on the energy product being
marketed. In addition to the changes in the value chain and functions, the risk, IT and decision views of the
generic reference model have been adjusted to follow the results obtained through the case studies. In total,
this specific partial model contains one value chain, nine function trees, eighty-six eEPCs and a sum of
twenty-five diagrams depicting the organisational, decision, risk and IT views. All diagrams are
accompanied by detailed verbal descriptions.
CONCLUSIONS
This paper presents the application of business process modelling in the development of a supply chain
reference model for the energy sector. This reference model is based on the instantiation (specialization) of
a generic reference model of the supply chain, the REMEDY model, which has been developed in the
context of a research project. The REMEDY model is a 3-tier supply chain model (vendor-producer-
customer) and is focused on demand management. It includes a set of business process models that
covers different aspects of the processes using the integrated ARIS methods. The instantiation of the
REMEDY generic model is based on the insights and knowledge from various case studies of the energy
sector, mainly oil and natural gas companies. This industry-specific (partial) reference model makes its
usefulness clearer, as it becomes more easily applicable to specific energy businesses. Model instantiation
should go beyond the development of the partial reference model for the energy industry sector, and it
should be specialized into particular business models for specific energy supply chains and companies. In
doing so, the proposed energy reference model will prove its applicability and usefulness in supporting
real-life problems in increasing flexibility and reducing variability in contemporary supply chains.
REFERENCES
API (2013a) Energy: Understanding Our Oil Supply Chain, American Petroleum Institute, viewed
29 March 2016, <http://www.api.org/~/media/Files/Policy/Safety/API-Oil-Supply-Chain.pdf>
API (2013b) Energy: Securing Our Natural Gas Supply Chain, American Petroleum Institute, viewed 29
March 2016, <http://www.api.org/~/media/Files/Policy/Safety/API-Natural-Gas-Supply-Chain.pdf>
Croxton K.L., Garcia-Dastugue S.J., Lambert D.M. and Rogers D.S. (2001), The supply chain
management processes. The International Journal of Logistics Management, 12: 13-36. doi:
10.1108/09574090110806271.
Enyinda C., Briggs C., Obuah E. and Mbah C. (2011), Petroleum Supply Chain Risk Analysis in a
Multinational Oil Firm in Nigeria, Journal of Marketing Development and Competitiveness, 5 (7): 37-44.
Gayialis, S.P., Ponis S.T., Tatsiopoulos I.P., Panayiotou N.A. and Stamatiou D-R.I. (2013), A Knowledge-
based Reference Model to Support Demand Management in Contemporary Supply Chains. In B. Janiūnaitė
& M. Petraite, eds. Proceedings of the 14th European Conference on Knowledge Management. Kaunas,
Lithuania: Academic Conferences and Publishing International Limited, pp. 236–246.
Gayialis S.P., Ponis S.T., Tatsiopoulos I.P., Panayiotou N.A., Stamatiou D.-R.I., Ntalla A.C. (2015a),
Development of a Business Process-enabled Supply Chain Reference Model to Support Demand
Uncertainty Management, in Proceedings of The 5th Multidisciplinary Academic Conference in Prague
2015 (The 5th MAC 2015), 16-17/10/2015, Prague.
Gayialis S.P., Ponis S.T., Panayiotou N.A., Tatsiopoulos I.P. (2015b) Managing Demand in Supply Chain:
The Business Process Modeling Approach, in Proceedings of the 4th International Symposium and 26th
National Conference on Operational Research, June 4-6, 2015, Chania, ISBN: 978-618-80361-4-7, pp. 73-
79.
Hussain, R., Assavapokee, T. and Khumawala, B. (2006), Supply Chain Management in the Petroleum
Industry: Challenges and Opportunities. International Journal of Global Logistics &Supply Chain
Management. 1 (2): 90-97.
Lambert D. and Knemeyer M. (2007), Measuring performance: the supply chain management perspective,
in Neely A. (ed.) Business Performance Measurement, Unifying Theory and Integrating Practice, Second
edition, Cambridge University Press: 82-112.
Ponis, S.T. Gayialis S.P., Tatsiopoulos I.P., Panayiotou N.A., Stamatiou D-R.I. and Ntalla A.C. (2013),
Modeling Supply Chain Processes: A Review and Critical Evaluation of Available Reference Models. In
Y. Siskos, N. Matsatsinis, & J. Psaras, eds. 2nd International Symposium and 24th National Conference on
Operational Research. Athens, Greece: Hellenic Operational Research Society: 270-276.
Ponis S.T., Gayialis S.P., Tatsiopoulos I.P., Panayiotou N.A., Stamatiou D-R.I. and Ntalla A.C. (2015), An
Application of AHP in the Development Process of a Supply Chain Reference Model focusing on Demand
Variability, Operational Research - An International Journal, 15 (3): 337-357, doi: 10.1007/s12351-014-
0163-8.
Ribas, G., Leiras, A. & Hamacher, S. (2011), Tactical Planning of the Supply Chain: Optimization Under
Uncertainty, XLIII Simpósio Brasileiro de Pesquisa Operacional, 15, (18).
Rosemann M. and van der Aalst W.M.P. (2007), A Configurable Reference Modelling Language,
Information Systems, 32 (1): 1-23.
Saad S., Udin Z.M. and Hasnan N. (2014), Dynamic Supply Chain Capabilities: A Case Study in Oil and
Gas Industry, International Journal of Supply Chain Management, 3 (2): 70-76.
Scheer, A.W. (1999), ARIS-business process frameworks, Berlin: Springer Verlag
Shah N.K., Li Z. and Ierapetritou M.G. (2011), Petroleum refining operations: Key issues, advances, and
opportunities, Industrial and Engineering Chemistry Research, 50: 1161-1170.
ON A DUOPOLY GAME WITH HOMOGENEOUS PLAYERS AND A QUADRATIC DEMAND
FUNCTION
GEORGES SARAFOPOULOS, Democritus University of Thrace
ABSTRACT
In this study we investigate the dynamics of a nonlinear discrete-time duopoly game, where the players
have homogeneous expectations. We suppose that the demand is a quadratic function and the cost function
is linear. The game is modeled with a system of two difference equations. Existence and stability of
equilibria of this system are studied. We show that the model gives more complex, chaotic and
unpredictable trajectories as a consequence of changes in the speed of adjustment of the players. As this
parameter is varied, the stability of the Nash equilibrium is lost through period doubling bifurcations. The
chaotic features are justified numerically via computing Lyapunov numbers and sensitive dependence on
initial conditions.
KEYWORDS: Cournot duopoly game; Discrete dynamical system; Homogeneous expectations; Stability;
Chaotic Behavior.
JEL Classification: C62, C72, D43.
1. INTRODUCTION
An oligopoly is a market structure between monopoly and perfect competition, in which only a small
number of firms in the market produce homogeneous products. The dynamics of an oligopoly game are
more complex because firms must consider not only the behavior of consumers, but also the reactions
of their competitors, i.e. they form expectations concerning how their rivals will act. Cournot, in 1838,
introduced the first formal theory of oligopoly. He treated the case of naive expectations, in which, at every
step, each player (firm) assumes that the competitors keep the last values they chose, without estimating
their future reactions.
Expectations play an important role in modelling economic phenomena. A producer can choose his
expectations rule from many available techniques to adjust his production outputs. In this paper we study the
dynamics of a duopoly model where each firm behaves with homogeneous expectations strategies. We
consider a duopoly model where each player forms a strategy in order to compute his expected output.
Each player adjusts his output towards the profit maximizing amount as target, by using his expectations
rule. Some authors have considered duopolies with homogeneous expectations and found a variety of complex
dynamics in their games, such as the appearance of strange attractors (Agiza, 1999, Agiza et al., 2002, Agliari
et al., 2005, 2006, Bischi, Kopel, 2001, Kopel, 1996, Puu, 1998). Models with heterogeneous agents
have also been studied (Agiza, Elsadany, 2003, 2004, Agiza et al., 2002, Den Haan, 2001, Fanti, Gori, 2012,
Tramontana, 2010, Zhang, 2007).
In the real market, producers do not know the entire demand function, though it is possible that they
have a perfect knowledge of technology, represented by the cost function. Hence, it is more likely that
firms employ some local estimate of the demand. This issue has been previously analyzed by Baumol and
Quandt, 1964, Puu, 1995, Naimzada and Ricchiuti, 2008, Askar, 2013, and Askar, 2014. Efforts have been
made to apply bounded rationality to different economic areas: oligopoly games (Agiza, Elsadany, 2003,
Bischi et al., 2007); financial markets (Hommes, 2006); macroeconomic models such as the multiplier-
accelerator framework (Westerhoff, 2006). In particular, difference equations have been employed
extensively to represent these economic phenomena (Elaydi, 2005; Sedaghat, 2003). Bounded rational
players (firms) update their production strategies at discrete time periods by using a local
estimate of the marginal profit. With such a local adjustment mechanism, the players are not required to
have a complete knowledge of the demand and the cost functions (Agiza, Elsadany, 2004, Naimzada,
Sbragia, 2006, Zhang et al., 2007, Askar, 2014). All they need to know is how the market responds to small
production changes, through an estimate of the marginal profit. The paper is organized as follows: In Section 2,
the dynamics of the duopoly game with homogeneous expectations, quadratic demand and linear cost
functions is analyzed. The existence and local stability of the equilibrium points are also analyzed. In
Section 3 numerical simulations are used to show complex dynamics via computing Lyapunov numbers,
and sensitive dependence on initial conditions.
2. THE GAME
In oligopoly games players can choose simple expectation rules, such as naïve expectations, or complicated
ones, such as adaptive expectations and bounded rationality. The players can use the same strategy
(homogeneous expectations) or different strategies (heterogeneous expectations). In this study we consider
two boundedly rational players, such that each player follows the same strategy to maximize his output. We
consider a simple Cournot-type duopoly market where firms (players) produce homogeneous goods which
are perfect substitutes and offer them at discrete-time periods t = 0, 1, 2, ... on a common market. At each
period t, every firm must form an expectation of the rival’s output in the next time period in order to
determine the corresponding profit-maximizing quantity for period t+1. The inverse demand function of the
duopoly market is assumed quadratic and decreasing:
P = a - b(x + y)^2    (1)

and the cost functions are:

C_1(x) = cx,   C_2(y) = cy    (2)

where Q = x + y is the industry output and a, b, c > 0. With these assumptions the profits of the firms are
given by

\pi_1(x, y) = x[a - b(x + y)^2] - cx    (3)
\pi_2(x, y) = y[a - b(x + y)^2] - cy

Then the marginal profit of each firm at the point (x, y) of the strategy space is given by

\partial\pi_1/\partial x = a - c - b(x + y)^2 - 2b(x + y)x    (4)
\partial\pi_2/\partial y = a - c - b(x + y)^2 - 2b(x + y)y
We suppose that each firm decides to increase its output if it has a positive marginal
profit, or to decrease it if the marginal profit is negative (bounded rational player). If k > 0, the
dynamical equations of the players are:

x(t+1) = x(t) + k \partial\pi_1/\partial x,   y(t+1) = y(t) + k \partial\pi_2/\partial y    (5)

The dynamical system of the players is then described by

x(t+1) = x(t) + k[a - c - b(x + y)^2 - 2b(x + y)x]    (6)
y(t+1) = y(t) + k[a - c - b(x + y)^2 - 2b(x + y)y]
We will focus on the dynamics of the system (6) with respect to the parameter k.
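A minimal Python sketch of the iteration of system (6) is the following; it uses the parameter values adopted later in Section 3 (a = 12, b = 1, c = 2, k = 0.16) and produces the orbit of the initial point (0.1, 0.1) used in the numerical simulations.

```python
def duopoly_map(x, y, a=12.0, b=1.0, c=2.0, k=0.16):
    """One iteration of the dynamical system (6)."""
    s = x + y
    x_next = x + k * (a - c - b * s**2 - 2 * b * s * x)
    y_next = y + k * (a - c - b * s**2 - 2 * b * s * y)
    return x_next, y_next

# Orbit of the initial point (0.1, 0.1), as used in Section 3
x, y = 0.1, 0.1
orbit = [(x, y)]
for _ in range(850):
    x, y = duopoly_map(x, y)
    orbit.append((x, y))
```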
2.1 The equilibria of the game
The equilibria of the dynamical system (6) are obtained as the nonnegative solutions of the algebraic system

a - c - b(x + y)^2 - 2b(x + y)x = 0    (7)
a - c - b(x + y)^2 - 2b(x + y)y = 0

which is obtained by setting x(t+1) = x(t), y(t+1) = y(t) in Eq. (6), and we have one equilibrium
E* = (x*, y*), where

x* = y* = \left( \frac{a - c}{8b} \right)^{1/2}    (8)

The equilibrium E* is called the Nash equilibrium, provided that a > c.
The study of the local stability of the equilibrium solution is based on the localization, in the complex plane,
of the eigenvalues of the Jacobian matrix of the two-dimensional map (Eq. (9)):

f(x, y) = x + k[a - c - b(x + y)^2 - 2b(x + y)x]    (9)
g(x, y) = y + k[a - c - b(x + y)^2 - 2b(x + y)y]

In order to study the local stability of the equilibrium points of the model Eq. (6), we consider the Jacobian
matrix at the point (x, y) of the strategy space:

J(x, y) = \begin{pmatrix} f_x(x, y) & f_y(x, y) \\ g_x(x, y) & g_y(x, y) \end{pmatrix}
        = \begin{pmatrix} 1 - 2bk(3x + 2y) & -2bk(2x + y) \\ -2bk(x + 2y) & 1 - 2bk(2x + 3y) \end{pmatrix}    (10)
The Nash equilibrium E* is locally stable if the following conditions hold:

1 - T + D > 0
1 + T + D > 0    (11)
1 - D > 0

where T = 2 - 20kbx* is the trace and D = 64(kbx*)^2 - 20kbx* + 1 is the determinant of the Jacobian matrix

J(E*) = \begin{pmatrix} 1 - 10kbx* & -6kbx* \\ -6kbx* & 1 - 10kbx* \end{pmatrix}    (12)

The first condition,

1 - T + D = 64(kbx*)^2 > 0,    (13)

is always satisfied.
The second and third conditions are the conditions for the local stability of the Nash equilibrium, which
become:

1 + T + D > 0  ⟺  64(kbx*)^2 - 40kbx* + 4 > 0    (14)
1 - D > 0  ⟺  kbx*(20 - 64kbx*) > 0
From Eq. (14) it follows that the Nash equilibrium is locally stable if

0 < kbx* < 0.125  ⟺  0 < k < \frac{0.125}{bx*}    (15)
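The following sketch verifies this stability threshold numerically for the parameter values used in Section 3, by evaluating the spectral radius of the Jacobian (12) at E*; values below one indicate local stability.

```python
import numpy as np

a, b, c = 12.0, 1.0, 2.0
x_star = np.sqrt((a - c) / (8 * b))   # Nash equilibrium output, Eq. (8)
k_max = 0.125 / (b * x_star)          # stability bound from Eq. (15)

def spectral_radius(k):
    z = k * b * x_star
    J = np.array([[1 - 10 * z, -6 * z],
                  [-6 * z, 1 - 10 * z]])   # Jacobian at E*, Eq. (12)
    return max(abs(np.linalg.eigvals(J)))

print(x_star, k_max)               # approx. 1.118 and 0.1118
print(spectral_radius(0.10) < 1)   # True: E* locally stable
print(spectral_radius(0.16) < 1)   # False: stability lost
```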
3. NUMERICAL SIMULATIONS
To provide some numerical evidence for the chaotic behavior of the system Eq. (6), as a consequence of
changes in the parameter k, we present various numerical results here to show the chaoticity, including
bifurcation diagrams, Lyapunov numbers and sensitive dependence on initial conditions (Kulenovic, M.,
Merino, O., 2002). In order to study the local stability properties of the equilibrium points, it is convenient
to take a = 12, b = 1, c = 2. In this case, from Eq. (8), x* = (1.25)^{1/2} ≈ 1.118. Numerical experiments
are computed to show the bifurcation diagram with respect to k and the Lyapunov numbers. Fig. 1 shows
the bifurcation diagrams with respect to the parameter k of the orbit of the point (0.1, 0.1). In this figure one
observes complex dynamic behavior, such as cycles of higher order and chaos. Fig. 3 shows the Lyapunov
numbers of the same orbit for k = 0.16. From these results, when all parameters are fixed and only k is
varied, the structure of the game becomes complicated through period doubling bifurcations, and more
complex bounded attractors are created, which are aperiodic cycles of higher order or chaotic attractors.
To demonstrate the sensitivity to initial conditions of the system Eq. (6), we compute two orbits with
initial points (0.1, 0.1) and (0.1, 0.1001), respectively. Fig. 2 shows the sensitive dependence on initial
conditions for the y-coordinate of the two orbits, for the system Eq. (6), plotted against time with the
parameter values a = 12, b = 1, c = 2, k = 0.16. At the beginning the time series are indistinguishable, but
after a number of iterations the difference between them builds up rapidly. Fig. 2 thus shows that the
time series of the system Eq. (6) exhibit sensitive dependence on initial conditions, i.e. complex dynamic
behaviors occur in this model.
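A minimal sketch of this experiment, iterating system (6) from the two nearby initial points, is the following; the printed gap between the two y-series illustrates the divergence seen in Fig. 2.

```python
def duopoly_map(x, y, a=12.0, b=1.0, c=2.0, k=0.16):
    s = x + y
    return (x + k * (a - c - b * s**2 - 2 * b * s * x),
            y + k * (a - c - b * s**2 - 2 * b * s * y))

# Two nearby initial conditions, as in Fig. 2
x1, y1 = 0.1, 0.1
x2, y2 = 0.1, 0.1001
for t in range(1, 101):
    x1, y1 = duopoly_map(x1, y1)
    x2, y2 = duopoly_map(x2, y2)
    if t % 20 == 0:
        # the gap between the two y-series builds up rapidly
        print(t, abs(y1 - y2))
```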
Fig. 1. Bifurcation diagrams with respect to the parameter k against the variable x or y, for a = 12, b = 1,
c = 2, with 850 iterations of the map Eq. (6).
Fig. 2. Sensitive dependence on initial conditions, for the y-coordinate plotted against time: the two orbits
orb(0.1, 0.1) (left) and orb(0.1, 0.1001) (right), for the system (6), with the parameter values a = 12, b = 1,
c = 2, k = 0.16.
Fig. 3. Lyapunov numbers versus the number of iterations of the orbit of the point (0.1, 0.1), for a = 12,
b = 1, c = 2, k = 0.16.
4. CONCLUSION
In this paper, we analyzed, through a discrete dynamical system based on the marginal profits of the
players, the dynamics of a nonlinear discrete-time duopoly game where the players have homogeneous
expectations. We supposed that the cost function is linear and the demand function is quadratic. The
stability of equilibria, bifurcation and chaotic behavior were investigated. We showed that a parameter (the
speed of adjustment) may change the stability of the equilibrium and cause the structure to behave
chaotically. For low values of this parameter there is a stable Nash equilibrium. As its value increases, the
equilibrium becomes unstable through period-doubling bifurcations.
REFERENCES
Agiza HN, (1998), Explicit stability zones for Cournot games with 3 and 4 competitors. Chaos Solitons
Fract. 9: 1955-66.
Agiza HN, (1999), On the stability, bifurcations, chaos and chaos control of Kopel map. Chaos Solitons
Fract. 11: 1909–16.
Agiza HN, Elsadany AA, (2004) Chaotic dynamics in nonlinear duopoly game with heterogeneous players.
Appl. Math. Comput. 149: 843–60.
Agiza HN, Elsadany AA., (2003) Nonlinear dynamics in the Cournot duopoly game with heterogeneous
players. Physica A 320: 512–24.
Agiza HN, Hegazi AS, Elsadany AA. (2002). Complex dynamics and synchronization of duopoly game
with bounded rationality. Math. Comput. Simulat. 58: 133–46.
Askar, S.S., (2013). On complex dynamics of monopoly market, Economic Modelling, 31: 586-589.
Askar, S. S., (2014) Complex dynamic properties of Cournot duopoly games with convex and log-concave
demand function, Operations Research Letters 42, 85–90
Baumol, W.J., Quandt, R.E., (1964). Rules of thumb and optimally imperfect decisions, American
Economic Review 54 (2): 23–46.
Bischi GI, Kopel M.(2001). Equilibrium selection in a nonlinear duopoly game with adaptive expectations.
J. Econom Behav. Org. 46: 73–100.
Bischi GI, Lamantia F, Sbragia L.(2004). Competition and cooperation in natural resources exploitation:
an evolutionary game approach. In: Cararro C, Fragnelli V, editors. Game practice and the
environment. Cheltenham: Edward Elgar; 187–211.
Bischi GI, Naimzada A.(2000). Global analysis of a dynamic duopoly game with bounded rationality. In:
Filar JA, Gaitsgory V, Mizukami K, editors. Advances in dynamic games and applications, vol. 5.
Basel: Birkhauser; 361–85.
Bischi, G.I., Naimzada, A.K., Sbragia, L., (2007). Oligopoly games with local monopolistic
approximation, Journal of Economic Behavior and Organization 62 (3): 371–388.
Cournot A. Researches into the mathematical principles of the theory of wealth. Homewood (IL): Irwin;
1963.
Day, R., (1994). Complex Economic Dynamics. MIT Press, Cambridge.
Dixit, A. K., (1986), Comparative statics for oligopoly, Internat. Econom. Rev. 27, 107–122.
Dixit, A.K., (1979). A model of duopoly suggesting a theory of entry barriers. Bell Journal of Economics
10, 20–32.
Den Haan WJ. (2001).The importance of the number of different agents in a heterogeneous asset-pricing
model. J. Econom. Dynam. Control, 25:721–46.
Elaydi, S., (2005). An Introduction to Difference Equations, third ed., Springer-Verlag, New York.
Hommes, C.H., (2006). Heterogeneous agent models in economics and finance, in: L. Tesfatsion, K.L.
Judd (Eds.), Handbook of Computational Economics, Agent-Based Computational Economics, vol. 2,
Elsevier Science B.V: 1109–1186.
Gandolfo G.(1997) Economic dynamics. Berlin: Springer
Gao Y. (2009).Complex dynamics in a two dimensional noninvertible map. Chaos Solitons Fract. 39:
1798–810.
Kopel M. (1996).Simple and complex adjustment dynamics in Cournot duopoly models. Chaos Solitons
Fract. 12: 2031–48.
Kulenovic, M., Merino, O., (2002). Discrete Dynamical Systems and Difference Equations with
Mathematica, Chapman & Hall/CRC.
Medio A, Gallo G. (1995).Chaotic dynamics: theory and applications to economics. Cambridge (MA):
Cambridge University Press.
Medio A, Lines M. (2005). Introductory notes on the dynamics of linear and linearized systems. In: Lines
M, editor. Nonlinear dynamical systems in economics. SpringerWienNewYork: CISM; 1–26.
Medio A, Lines M. (2001).Nonlinear dynamics. A primer. Cambridge (MA): Cambridge University Press.
Naimzada, A.K., Ricchiuti G., (2008). Complex dynamics in a monopoly with a rule of thumb, Applied
Mathematics and Computation 203: 921–925
Naimzada, A., Sbragia, L., (2006). Oligopoly games with nonlinear demand and cost functions: two
boundedly rational adjustment processes, Chaos Solitons Fractals 29, 707–722.
Puu, T., (1995). The chaotic monopolist, Chaos, Solitons & Fractals 5 (1): 35–44.
Puu T. (1998). The chaotic duopolists revisited. J Econom. Behav. Org. 37: 385–94.
Puu T. (1991). Chaos in duopoly pricing. Chaos Solitons Fract.1:573–81.
Puu T. (2005). Complex oligopoly dynamics. In: Lines M, editor. Nonlinear dynamical systems in
economics. Springer Wien NewYork: CISM; p. 165–86.
Sedaghat, H.,(2003). Nonlinear Difference Equations: Theory with Applications to Social Science Models,
Kluwer Academic Publishers (now Springer).
Singh, N., Vives, X., (1984). Price and quantity competition in a differentiated duopoly. The RAND
Journal of Economics 15, 546–554.
Tramontana, F., (2010). Heterogeneous duopoly with isoelastic demand function. Economic Modelling 27,
350–357.
Westerhoff, F.,(2006). Nonlinear expectation formation, endogenous business cycles and stylized facts,
Studies in Nonlinear Dynamics and Econometrics 10 (4) (Article 4).
Wu, W., Chen, Z., Ip, W.H., (2010). Complex nonlinear dynamics and controlling chaos in a Cournot
duopoly economic model. Nonlinear Analysis: Real World Applications 11, 4363–4377.
Zhang, J., Da, Q.,Wang, Y., (2007). Analysis of nonlinear duopoly game with heterogeneous players.
Economic Modelling 24, 138–148.
STRATEGIC COMPETITION ANALYSIS AND GROUP MAPPING: THE CASE OF THE GREEK
INSURANCE INDUSTRY
YIANNIS YIANNAKOPOULOS, Hellenic Open University
ANASTASIOS MAGOUTAS, Hellenic Open University
PANOS CHOUNTALAS, Hellenic Open University
ABSTRACT
This study aims to investigate the use of classic strategic management models for the competition analysis
of the Greek insurance industry. In this direction, the application of the macro-environment analysis
model, the industry life cycle model and the Porter’s five forces model is concisely described. Special
focus is put on the application of the strategic group mapping model, in order to examine the comparative
positions of rival firms. The aforementioned models are applied to the Greek insurance industry for the
decade 2004-2013 and conjointly reveal the factors that drive and transform the competitive environment.
In this study, five specific research questions are investigated: (i) What kind of competitive forces are
Greek industry members facing, and how strong is each force? (ii) What factors are driving changes in the
Greek insurance industry, and what impact will these changes have on competitive intensity? (iii) What
market positions do industry rivals hold; who is strongly positioned and who is not? (iv) To what extent
can the selected models help an analyst to identify significant features, opportunities and threats that shape
the competition in the Greek insurance market? (v) Is it feasible for a strategist to analyze the Greek
insurance industry using the selected models with only published information? Results demonstrate that an
analyst using classic strategic models can monitor the competitive dynamics and identify opportunities and
risks for the Greek insurance companies.
Keywords: strategic management models; competition analysis; strategic group mapping; insurance
industry.
1. INTRODUCTION
Competition in an economic context is a widely studied phenomenon with a significant body of
accumulated research and theory. A critical review of the strategic management literature shows that
competition analysis is one of the most important aspects of strategy development. It is well known that the
formulation of a company's strategy starts with the analysis of the factors and forces that shape
competition in the industry in which the firm operates (Hill and Jones, 2012). Understanding the
environment in which a company operates is a vital part of strategic planning. Rivals should be analyzed
in depth and systematically, in order to identify the opportunities and threats firms are facing and to
formulate strategies that will enable the company to outperform them.
At the same time, the role of insurance is important not only for the economy but also for society.
Insurance manages, diversifies and absorbs the financial risks of individuals and organizations. It allows
individuals to recover from sudden misfortune by limiting the financial burden of
exogenous events over which they have no control, such as illness, accidents, death, and natural
disasters (The Geneva Association, 2014).
To evaluate the competition in an insurance industry, an insurer can use strategic management models.
Through this evaluation, an insurance company obtains information about the competition, uses this
information to predict competitors' behavior, and assesses its brand positioning in the market.
Additionally, analyzing the competition in an insurance market can reveal competitors' value propositions,
strategies, strengths and weaknesses, as well as the opportunities which arise from the timely
implementation of appropriate strategies.
The purpose of this study is to investigate the use of classic strategic management models for the
competition analysis of the Greek insurance industry (i.e. macro-environment analysis model, industry life
cycle model, Porter’s five forces model and strategic group mapping model). Five specific research
questions are investigated:
1. What kind of competitive forces are Greek industry members facing, and how strong is each force?
2. What factors are driving changes in the Greek insurance industry, and what impact will these changes
have on competition?
3. What market positions do industry rivals hold – who is strongly positioned and who is not?
4. To what extent can the selected models help an analyst to identify significant features, opportunities and
threats that form competition in the Greek insurance market?
5. Is it feasible for a strategist to analyze the Greek insurance industry using the selected models with only
published information?
The remainder of this paper is organized as follows. Section 2 presents a review of the related literature.
In Section 3 the methodology of this study is described. Section 4
contains the fundamental strategic analysis of competition in the Greek insurance industry based on the
macro-environment analysis model, the industry life cycle model and Porter's five forces model. Section 5
contains the competition analysis based on the strategic group mapping model. Section 6 summarizes the
research results and offers several interpretations. Finally, Section 7 presents the main conclusions and
provides suggestions for further research.
2. LITERATURE REVIEW
This section provides the theoretical basis for the study by analyzing and synthesizing a selection of
related bodies of literature. It presents the relevant theory and previous research on
the analysis of an organization's external environment in the context of strategic management, with
particular focus on competition analysis. Michael Porter's work on competition analysis is the reference
point for most of this section.
Porter (1980) stated that "strategy can be viewed as building defense against competitive forces or as
finding positions in the industry where forces are weakest", and added that "strategy is about making
choices, trade-offs; it's about deliberately choosing to be different". An ultimate goal of a strategy is to
gain strategic competitiveness. Strategic competitiveness is achieved when a firm successfully formulates
and implements a value-creating strategy (Volberda et al. 2011). According to Sirmon et al. (2007), a firm
has a competitive advantage when it implements a strategy that competitors are unable to duplicate or
find too costly to imitate.
The following strategic management models for competition analysis are reviewed in turn:
Macro-environment analysis model
Industry life cycle model
Porter’s five forces model
Strategic group mapping model
2.1 Macro-environment analysis model
The company and its rivals operate in a macro-environment of forces, which create opportunities and pose
threats (Kotler and Armstrong, 2012). According to Thompson et al. (2013), the macro-environment has
seven components:
Demographics (outcomes of changes in consumer demographics, such as population, gender, age
distribution, and race)
Socio-cultural forces (outcomes of changes in societal roles and status, marital status, education,
language, and local origin)
Political, legal and regulatory factors
Natural environment
Technological factors
Global forces (outcomes of changes in the global market place)
General economic conditions
Other well-known variations of this model are PESTLE, PEST, STEEP, and STEEPLE. All are macro-
environmental analysis models which focus on the important factors of the external environment that affect
the present and the future of a firm: Political, Economic, Social, Technological, Legal, Environmental/
Ecological and Ethical factors.
The industry and competitive environment are also part of the company's external environment. According to
Hill and Jones (2012), an industry is a group of companies offering products or services which are close
substitutes for each other and satisfy similar customer needs. The examination of an industry's competitive
environment is the starting point for identifying a company's strategy (Rumelt, 2011) and is thus essential
for the strategic management process.
The main limitations of macro-environment models are the continuous change of the underlying factors,
the uncertainty of their future impact on a company, and analyst bias.
2.2 Industry life cycle model
A model for assessing the competitive structure of the company’s industry is the industry life cycle model.
One of the most frequently used models of an industry life cycle was introduced by Porter (1980). This
model comprises four stages: Introduction (emergence), Growth, Maturity, and Decline. According to
Baum and McGahan (2004), it is important for companies to understand the use of the industry lifecycle
because it is a survival tool for businesses to compete in the industry effectively and successfully.
The main concerns about the model relate to the continuously changing environment, the unclear
boundaries between the four stages, and the diversity across industries, which prevents the model from
being applied to every industry.
2.3 Porter’s five forces model
In most industries, there are strong competitive forces which reduce economic profits over time. The most
powerful and widely used model for systematically diagnosing these competitive pressures in a market is
Porter’s five forces model of competition (Porter 1980; 1985). The five competitive forces are:
The risk of new entry by potential competitors: Potential competitors are companies that have the
capability to enter an industry but are not yet present in it (Brickley et al., 2008). These new entrants
represent a threat to the profitability of established companies and most of the time they use
substantial resources to gain market share (Porter, 1996). Established companies already operating in
an industry discourage potential competitors from entering the industry because the more companies
that enter, the more difficult it becomes for established companies to protect their market share and
their profitability (Hill and Jones, 2012). The risk of entry by potential competitors is a function of the
height of barriers to entry. Barriers to entry are all the factors that make it costly for companies to
enter an industry. High entry barriers may keep potential competitors out of an industry even when
industry profits are high (Hill and Jones, 2012).
The extent of rivalry among established firms: Typically the strongest of Porter's five competitive forces
is the rivalry among existing firms. It is a continuous and dynamic threat that evolves over time. According
to Porter (2008), the intensity of rivalry among established companies depends on the industry and is
greatest when the industry is growing slowly or declining.
The bargaining power of buyers: An industry’s buyers may be individual customers (end users of a
product or service) or companies which distribute an industry’s product to end users, such as retailers
and wholesalers (Hill and Jones, 2012). Buyers with strong bargaining power can limit industry
profitability by demanding price reductions, better payment terms, or additional features and services
that increase industry members' costs (Thompson et al., 2013). Therefore, powerful buyers should be
viewed as a threat.
The bargaining power of suppliers: Suppliers are the organizations which provide resources to the
industry, such as materials, services, and labor (i.e. individuals, organizations such as labor unions, or
companies that supply contract labor) (Hill and Jones, 2012). The bargaining power of suppliers refers
to the ability of suppliers to raise resources’ prices, or to increase costs of the industry; thus, powerful
suppliers are a threat (Hill and Jones, 2012).
The threat of substitute products: Substitute products are goods or services from outside the
industry that perform similar or the same functions as a product that the industry produces (Hitt
et al., 2011). The existence of close substitutes is a strong competitive threat because it limits the price
that companies in the industry can charge for their product, and consequently industry
profitability (Hill and Jones, 2012).
Nevertheless, this well-known model has been criticized on issues such as the assumptions underlying
the five forces, the absence of additional forces such as regulation and government, the neglect of the role
of citizens and of consumer behavior, and its inability to evaluate the company in a continuously changing
environment.
2.4 Strategic group mapping model
The best technique to use in order to reveal the market positions of industry members is strategic group
mapping (Porter, 1980). Hunt (1972) observed differences between groups of firms within industries and
named those groups "strategic groups" to describe "a group of firms within the industry that are highly
symmetric with respect to cost structure, the degree of vertical integration, and the degree of product
differentiation, formal organization, control systems, management rewards/punishments, and the personal
views and preferences for various possible outcomes" (Hunt, 1972, p.8). Porter (1980) developed
Hunt's concept further. There are threats and opportunities both within and across strategic groups. For
example, an immediate threat can be customers' perception that products in the same strategic group are
direct substitutes for each other; a company's closest competitors are therefore those in the same strategic
group. It is important to note that the analysis of strategic groups within an industry yields different
insights than an analysis of the industry as a whole (Hill and Jones, 2012).
A strategic group consists of firms with similar competitive approaches and positions in an industry. The
major characteristics are defined by Thomas and Venkatraman (1988). Thompson et al. (2013) also
elaborated on this, noting that organizations in the same strategic group may:
have similar or the same products, prices, quality, and distribution channels
react similarly to a threat or opportunity
use the same product attributes to attract similar customer segments
use similar strategies
depend on identical technological techniques
offer similar customer service
According to Porter (1980), strategic groups are stable structural features of industries that are bounded by
mobility barriers. There are four steps to construct a strategic group map: (i) identify the competitive
variables that distinguish companies; (ii) plot firms on a two-variable map with pairs of characteristics; (iii)
assign companies to the strategic groups; (iv) draw circles around each strategic group. With strategic
group maps, analysts and strategists can reveal the closest rivals for each company within an industry and
identify attractive or unattractive positions within the industry.
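To make these four steps concrete, the following minimal sketch (in Python, with matplotlib) plots a handful of firms on a two-variable map and draws a boundary around each group. All firm names, figures, group assignments, and margins in it are illustrative assumptions, not data from this study; in practice an analyst would substitute real variables such as gross written premiums and line of business.

# A minimal, illustrative sketch of the four-step mapping procedure.
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse

# Steps (i)-(ii): choose two competitive variables and plot each firm.
# Here we assume size (GWP, EUR m) and a product-mix score in [-1, 1].
firms = {
    "Firm A": (630, 0.05),
    "Firm B": (425, 0.10),
    "Firm C": (150, 0.90),
    "Firm D": (140, 0.85),
    "Firm E": (60, -0.95),
}

# Step (iii): assign firms to strategic groups (manually here, based on
# visual proximity; a clustering algorithm could be used instead).
groups = {
    "Group 1": ["Firm A", "Firm B"],
    "Group 2": ["Firm C", "Firm D"],
    "Group 3": ["Firm E"],
}

fig, ax = plt.subplots()
for name, (gwp, mix) in firms.items():
    ax.scatter(gwp, mix, color="tab:blue")
    ax.annotate(name, (gwp, mix))

# Step (iv): draw a boundary around each strategic group (an ellipse
# stands in for the hand-drawn circle, since the axes differ in scale).
for label, members in groups.items():
    xs = [firms[m][0] for m in members]
    ys = [firms[m][1] for m in members]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    rx = max(abs(x - cx) for x in xs) + 40    # half-width plus margin
    ry = max(abs(y - cy) for y in ys) + 0.15  # half-height plus margin
    ax.add_patch(Ellipse((cx, cy), 2 * rx, 2 * ry, fill=False))
    ax.annotate(label, (cx, cy + ry))

ax.set_xlabel("Gross written premiums (EUR m)")
ax.set_ylabel("Product-mix variable")
ax.set_title("Strategic group map (illustrative)")
plt.show()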
The main criticisms against this model are the limited number of variables, the selection of the variables,
and the variability of strategic groups within an industry and within a country.
3. METHODOLOGY
The Greek insurance market analysis is based on data and reports from the Hellenic Association of
Insurance Companies. It is important to note that the insurance companies reporting to the Hellenic
Association of Insurance Companies represented 94.6% of total GWP in 2014. We should note that
INTERSALONIKA does not contribute to the Hellenic Association of Insurance Companies report.
Moreover, the analysis of the insurance firms that operate in Greece was based on the association's 2013
report, which contained detailed production figures for each company. Our analysis does not include two
insurance companies, EVIMA and DIETHNIS ENOSI, which ceased operations in 2013; these companies
represented 3.1% of total gross written insurance premiums in 2012. We should also note that we observed
small discrepancies in the figures from the Hellenic Association of Insurance Companies, as some of its
reports were issued with slightly different results. Furthermore, the most recent available data for insurance
production per company from the association referred to 2012. When referring to brands, we used the
top 25 Greek insurance brands, which represented more than 98% of insurance production in Greece for
2012. We also note that we did not distinguish between personal lines (individuals) and commercial lines
(groups) in the interpretation of the results.
In order to measure the extent of an insurance company's involvement in the Life and Non-Life sectors, we
created three variables:
LIFEtoTOTAL = (Life GWP) / (Total GWP)
NONLIFEtoTOTAL = (NonLife GWP) / (Total GWP)
NONLIFEINVOLV = (NONLIFEtoTOTAL) - (LIFEtoTOTAL)
The LIFEtoTOTAL indicator measures the percentage of Life insurance production in the total production
of an insurance company. Respectively, the NONLIFEtoTOTAL indicator measures the percentage of Non-
Life insurance production in the total production.
The NONLIFEINVOLV indicator measures the extent of Non-Life involvement for an insurance
company. If it is 0, the company's production is split equally between the Life and Non-Life insurance
sectors. If it is 1, the company has only Non-Life production, and if it is -1, the company has only Life
insurance production. If the result is positive, the company has more production in the Non-Life sector
than in Life, and vice versa if the result is negative.
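As an illustration, the following minimal Python sketch computes the three indicators for hypothetical figures; it assumes, as the definitions above imply, that Total GWP equals Life GWP plus Non-Life GWP, and the sample values are not the study's data.

# A minimal sketch of the three involvement indicators defined above.
def involvement(life_gwp: float, nonlife_gwp: float) -> dict:
    """Compute LIFEtoTOTAL, NONLIFEtoTOTAL and NONLIFEINVOLV."""
    total = life_gwp + nonlife_gwp  # assumes Total GWP = Life + Non-Life
    life_to_total = life_gwp / total
    nonlife_to_total = nonlife_gwp / total
    return {
        "LIFEtoTOTAL": life_to_total,
        "NONLIFEtoTOTAL": nonlife_to_total,
        "NONLIFEINVOLV": nonlife_to_total - life_to_total,
    }

# A balanced company scores 0; a pure Non-Life insurer scores 1;
# a pure Life insurer scores -1.
print(involvement(100.0, 100.0)["NONLIFEINVOLV"])  # 0.0
print(involvement(0.0, 150.0)["NONLIFEINVOLV"])    # 1.0
print(involvement(150.0, 0.0)["NONLIFEINVOLV"])    # -1.0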
4. FUNDAMENTAL STRATEGIC ANALYSIS
The aim of this section is to apply strategic management models to the competition analysis of the Greek
insurance industry. We start with the macro-environment model in order to examine the forces that are
shaping and forming competition in the Greek insurance industry. Next, we examine the Greek insurance
industry life cycle in order to identify the characteristics that determine the competition. Finally, we apply
Porter's five forces model of competition.
4.1 Analysis of the Greek insurance industry with the macro-environment model
We start the analysis of the Greek insurance industry with the macro-environment model. The macro-
environment consists of components that originate outside a firm and have the potential to influence the
industry as a whole. The forces on which we focus are: demographics, general economic conditions, socio-
cultural forces, political, legal and regulatory factors, the natural environment, and technological and
global factors.
The most strategically relevant macro-environment factors that shape the competition in the Greek
insurance industry in the examined period are presented in Figure 1. Some of the factors could be placed
in another category without affecting the analysis. Also, some of these factors could feed into the Porter's
five forces analysis of Greek insurance industry competition that follows.
Figure 1: Macro-environment factors in the Greek insurance industry
The analysis of the seven components of the macro-environment is a well-structured method that leads
analysts to identify the rivals which need more attention. Using the macro-environment model, strategists
and managers can determine which of the factors are important or have a great impact on their companies'
business model, current business plan, and long- and short-term strategy moves. For example, a closer look
at the dynamics and the penetration of direct distribution channels in Europe could have given early
warnings to INTERAMERICAN's competitors before INTERAMERICAN introduced the first direct
insurance channel in Greece (ANYTIME).
It is important to note that events in the macro-environment may occur in sequence or in parallel, slowly
or rapidly, simultaneously across all components or in only one, with or without warning signs. The role
of management is to analyze the macro-environment on a regular basis in order to understand and predict
competitors' strategies, and to create new directions and strategies so as to gain a competitive advantage
in the industry.
4.2 Analysis of the Greek insurance industry with industry life cycle model
According to Porter (1998), as an industry goes through its life cycle, the nature of competition shifts.
The Greek insurance industry appears to be in the maturity stage of its life cycle. Although the number of
insurance companies decreased from 110 in 2000 to 67 in 2013 (Hellenic Association of Insurance
Companies, 2014), total insurance spending as a percentage of GDP stabilized between 2.0% and 2.2%
over the decade 2004-2013 (OECD, 2013).
The characteristics of the Greek insurance industry are consistent, at least for the vast majority of the
insurance production (Life, Motor, Health, and Property), with Porter's study (1998). Table 1 depicts the
characteristics of the Greek insurance industry in the maturity stage.
Demand: Mass-market saturation, especially in Motor business; repeat buying; customers are price
sensitive for mandatory insurance products such as motor, home and liability.
Technology: Well-diffused technical know-how; quest for technological improvements.
Products: Trend toward commoditization; attempts to differentiate by branding, quality and bundling; less
product differentiation; less rapid product differentiation.
Manufacturing and Distribution: Standardization; market segmentation; advertising competition,
especially in Motor business.
Competition: Multi-line of business and multi-distribution competition exists; competitive costs or net
combined ratio is a key issue; price competition increases.
Key success factors: Cost efficiency through capital intensity; margins; lower prices.
Table 1: The industry structure over the Greek insurance lifecycle stage (maturity stage characteristics)
An interesting question that still needs to be answered is: "Has the Greek insurance industry reached the
peak of its life cycle?" An answer might be given by the insurance penetration index, which in Greece was
2.1% in 2013, while the average European ratio was 7.7% (Insurance Europe, 2015).
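For reference, the penetration index follows the standard definition below; the figures in the worked example are illustrative round numbers, not data from this study:
PENETRATION = (Total GWP) / (GDP)
For instance, a market with a total GWP of €4bn in an economy with a GDP of €190bn would have a penetration of 4/190 ≈ 2.1%.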
4.3 Analysis of the Greek insurance industry competition with Porter’s five-forces model
Porter’s five forces model is a powerful and tested model that allows companies to identify and analyze the
important forces which determine the characteristics of an industry. This model helps companies assess the
nature of an industry’s competitiveness and develop corporate strategies accordingly. Any change in one of
the forces might mean that the insurer has to re-evaluate its environment and realign its business practices
and strategies.
The macro-environment analysis of the Greek insurance industry in the previous section described
important factors that influence the intensity of each of Porter's forces. The most strategically relevant
factors that shape the competition in the Greek insurance industry according to Porter's five forces model
in the examined period are presented in Figure 2. Using Porter's five forces, strategists and managers can
determine which of the forces have the greatest impact on their companies' business model, current
business plan, and long- and short-term strategy moves.
Figure 2: Porter’s five forces (Greek insurance industry)
5. STRATEGIC GROUP MAPPING
As Porter (1980) claimed, strategic group mapping is the best technique to reveal industry members'
market positions. It can be used in the Greek insurance industry to identify companies or groups of
companies with similar characteristics, and to illustrate how easy it might be for an insurance company to
move from one strategic group to another, to avoid threats and to exploit opportunities.
Fiegenbaum and Thomas (1990) used, among other variables, net written premiums and line of business
(Life, Non-Life) in order to analyze strategic groups and performance in the US insurance industry.
Similarly, we used gross written premiums as the variable which reflects the size of a Greek insurance
company, and line of business as a diversification variable to distinguish insurance brands.
For the study’s purposes, we tried four different strategic group mappings:
• Domestic and foreign insurance strategic groups’ map
• Non-Life, Life, and both Non-Life and Life insurance sectors strategic groups’ map
• Years in Greek insurance industry strategic groups’ map
• Life and Non-Life % production strategic groups’ map
All strategic maps are based on the gross written premiums of the top 25 Greek insurance brands. The
analysis is based on 2012 data.
5.1 Domestic and foreign insurance strategic groups’ map
Figure 3 shows the strategic groups for which the country of origin is used as the main characteristic.
Insurance brands were divided into domestic and foreign and compared on the basis of total gross
insurance premiums. The most important element of this classification is the potential dynamic that a
foreign company might have in terms of investments, international experience, and transfers of
technological and managerial know-how. One could argue that foreign insurers might dominate a domestic
market if domestic insurers are inadequate and unsophisticated.
This classification revealed groups of companies which are very close in terms of insurance production. A
point that needs attention is that the domestic market is already well served by locally owned insurers.
ETHNIKI is by far the top brand, not only among domestic brands but among all insurance brands in
Greece. In the second position is a Dutch company, INTERAMERICAN, which is also the top brand
among the foreign companies. ING and METLIFE hold the second and third positions among foreign
insurance companies. On the other hand, EUROLIFE is in the second position among domestic brands,
with less than half of ETHNIKI's insurance premiums. As Figure 3 depicts, the Greek insurance industry
contains many companies with similar insurance production volumes. Considering the above, eight groups
were created, four for domestic and four for foreign companies, each containing companies with similar
dynamics in terms of insurance production. Many companies fall into group 4, for both domestic and
foreign companies, which has the lowest insurance production.
Figure 3: Strategic group map: Domestic versus Foreign insurance firms (Greece)
5.2 Lines of insurance business strategic groups’ map
For the strategic groups of Figure 4, the line of business is used as the main characteristic to group the
insurers. Insurance brands were divided into Life, Non-Life and Mixed (Life and Non-Life) and compared
on total gross insurance premiums. The most important element of this classification is the companies'
expertise in the Life and Non-Life insurance sectors. This classification revealed groups of companies
which are very close in terms of insurance production.
Few firms operate in both the Life and Non-Life insurance sectors, but most of the leaders in the Greek
insurance market are among them. We can identify four groups. The first group consists of the market
leader, ETHNIKI. The second group consists of INTERAMERICAN and EUROLIFE. There is a
considerable gap between the first brand and the two following it: the difference in insurance production
between ETHNIKI and INTERAMERICAN is €204m, and the difference with EUROLIFE is €322m. The
third group consists of brands below the category average, with insurance production between €138m and
€203m GWP. The leaders in this category are ALLIANZ and AXA, with €202.4m and €194.7m gross
written premiums respectively. Finally, the fourth group includes the rest of the companies, with insurance
production of less than €88m GWP.
In the Non-Life sector we can identify three groups. The first group consists of ETHNIKI and
INTERAMERICAN, with €295.4m and €225.6m GWP respectively. The second group consists of eight
brands between €108m and €153m GWP; the leader in this group is INTERSALONIKA with €152.8m
GWP. The third group includes the rest of the companies, which had less than €86m GWP.
In the Life sector, the top companies are close in insurance production. We can divide the sector into two
groups. The first group consists of ETHNIKI, ING, METLIFE, EUROLIFE, INTERAMERICAN and
CREDIT AGRICOLE, with production between €161m and €336m. The second group consists of the rest
of the brands, with less than €99m GWP.
Figure 4: Strategic group map: Life, Non-Life and both
It is interesting to note that ETHNIKI is the leading brand in each category. Nevertheless, in Life the
competitors are very close to the leading company. Another point worth making is that the leading
companies in each category are far ahead of the rest.
5.3 Years in Greek insurance industry strategic groups’ map
For these strategic groups, the number of years that the brand has operated in the Greek insurance industry
(see Figure 5) is used as the criterion. The number of years in the Greek insurance industry can be
interpreted as experience with the characteristics of the Greek insurance market, better handling of
political and tax issues, stronger relationships with sales channels, and a deeper understanding of Greek
consumers.
For this strategic group, the establishment year of the company in Greece is used, not the year that the
company was merged into or sold to the current brand. Furthermore, insurance brands were divided into
0 to 40 years, 40 to 80 years, and 80 or more years, in order to obtain clearer groupings.
Three companies have operated in the insurance industry for more than 80 years: ETHNIKI, GROUPAMA
and GENERALI. ETHNIKI has operated since 1891 and had €630.9m gross insurance premiums in 2012.
GROUPAMA and GENERALI had €167.9m and €143.7m respectively, far less than ETHNIKI.
Five companies have operated for between 40 and 80 years. The top three show significant deviations
from one another, but we include them in the same group: INTERAMERICAN, a 46-year-old company
with €427m insurance premiums; METLIFE with €263m GWP; and ERGO with €138m GWP. Only
INTERNATIONAL LIFE and AIG had very similar insurance production.
Thirteen companies have operated for less than 40 years. The top two, EUROLIFE and ING, had similar
insurance production and form one group. The second group consists of ALLIANZ, AXA, ATE,
CREDIT AGRICOLE, EUROPEAN RELIANCE, and INTERSALONIKA, with insurance production
between €122m and €203m. The third group consists of MINETTA, INTERLIFE, NP, and INTERASCO.
Figure 5: Strategic group map: Years in Greek insurance industry
5.4 Companies' involvement in the Life and Non-Life sectors strategic groups' map
For the strategic groups of Figure 6, the NONLIFEINVOLV index (see the methodology section) is used,
together with GWP, to measure the extent of insurance companies' involvement in the Life and Non-Life
sectors. The groups identified have the percentage of Non-Life (or Life) production as their basic
characteristic. In Group 1 we have the leaders of the Greek insurance industry, ETHNIKI and
INTERAMERICAN, which have an almost balanced Non-Life and Life production. The second group
includes EUROLIFE, with more Life production than Non-Life. The third group consists of companies
with no, or almost no, Non-Life production; in this group we include ING, METLIFE, CREDIT
AGRICOLE, and ALPHA LIFE. The fourth group consists of companies that mainly concentrate on the
Non-Life sector but also have a significant percentage of their production in Life insurance. Finally, the
fifth group contains companies with Non-Life insurance production and no, or almost no, Life production.
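As an illustration only, the following Python sketch shows how brands could be assigned to the five groups using the NONLIFEINVOLV index. The cut-off values and sample brands are our own assumptions; the study assigns groups by inspecting the map rather than by fixed thresholds.

# A minimal, illustrative classifier over the NONLIFEINVOLV index.
def involvement_group(nonlife_involv: float) -> int:
    """Assign a brand to one of five illustrative involvement groups."""
    if abs(nonlife_involv) <= 0.2:   # roughly balanced mix (the study's
        return 1                     # Group 1 also contains the leaders)
    if nonlife_involv <= -0.9:       # none or almost none Non-Life production
        return 3
    if nonlife_involv < 0:           # more Life production than Non-Life
        return 2
    if nonlife_involv >= 0.9:        # none or almost none Life production
        return 5
    return 4                         # mainly Non-Life, with significant Life

# Hypothetical brands and index values:
for name, idx in [("Brand A", 0.05), ("Brand B", -0.55),
                  ("Brand C", -1.0), ("Brand D", 0.6), ("Brand E", 0.98)]:
    print(name, "-> Group", involvement_group(idx))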
Figure 6: Involvement in Life and Non-Life sector strategic group maps (2012)
6. SUMMARY OF RESEARCH, RESULTS AND INTERPRETATION
Based on our research, the summarized results are presented below.
The macro-environment analysis model offered a structured way to analyze the competitive environment
of the Greek insurance industry. With this model a strategist can understand and identify the risks and
opportunities associated with external factors that influence an insurance company's competitive position.
In addition, this model can be used to forecast competitors' movements, and it helps managers make better
decisions. Nevertheless, past behavior does not always lead to the same future results, and the management
of an insurance company should analyze the macro-environment regularly and compare the results with
rivals' movements. Moreover, macro-environmental analysis is complex and difficult and, most
importantly, subject to individual interpretation and analyst ability.
The industry life cycle model provides a fundamental analysis based on the stage of an industry at a given
point in time. The analysis of the Greek insurance industry life cycle did not offer insights into the
competitors which operate in the Greek market. Nevertheless, this analysis can support investment
decisions for companies that want to expand their business through merger or acquisition. Another point
to consider is that the Greek insurance industry's life cycle stage was determined through assumptions.
Unanticipated changes in market conditions or consumer behavior make it impossible to predict the
industry's future movements with certainty: this analysis cannot establish whether the Greek insurance
industry will grow, remain stable, or even decline, or at what pace.
Porter's five-forces model can be used to analyze the Greek insurance industry in order to identify threats
and opportunities. Similar to the macro-environment analysis, this model helps us to take a broader view of