Numerical Methods for Scientists and Engineers

Article · April 1975
Abstract
From the Publisher: For this inexpensive paperback edition of a groundbreaking classic, the author has extensively rearranged, rewritten and enlarged the material. The book is unique in its emphasis on the frequency approach and its use in the solution of problems. Contents include: Fundamentals and Algorithms; Polynomial Approximation—Classical Theory; Fourier Approximation—Modern Theory; Exponential Approximation.

  • ... These alternatives and others are considered in [5,7], and treated in differential equations at an undergraduate level in Chapters 10 and 11 [8]. ...
    ... This derivation follows standard procedure in [7]. Furthermore, for Eq. ...
    ... Another question was anonymous, and the last two questions were taken from the previous works, I and II. These questions are also useful to study the Matrix-Variational Method (MVM), in physics [2-4, 6, 9-12], and better formulate it in a mathematical sense [5,7,8,13]. ...
    Preprint
    Full-text available
    We consider the Sturm-Liouville Eigenvalue (SLE) problem, and also the inverse SLE (iSLE) problem. This work presents a mathematical model for the Matrix-Variational Method (MVM), to solve both SLE problems, in physics, due to Gerck et al., from 1979. We show an intuitive model, with fitting suggestions that can be used for teaching physics, including energy levels and wave functions in quantum mechanics. The motivation is that there must be a non-zero solution valid for a given null boundary condition at infinity, which is common in physics.
  • ... This is because the complex conjugation is not an analytic operation: z̄ cannot be expanded in a power series of z for z ∈ C. However, one can resort to complex conjugation (in the sense of Ref. [43]) to relate them for complex ...
    ... In other words, g (η) can be looked upon as the real part of either h + (η) or h − (η) at positive energy, since they are complex conjugated to each other [43]. In addition, the imaginary part of h + (η) at E > 0 is nothing but the Coulomb factor π/(e 2ηπ − 1), as it appears in Eq. (43). ...
    Article
    Full-text available
    Different versions of the effective-range function method for charged particle collisions are studied and compared. In addition, a novel derivation of the standard effective-range function is presented from the analysis of Coulomb wave functions in the complex plane of the energy. The recently proposed effective-range function denoted as Δℓ [Phys. Rev. C 96, 034601 (2017)] and an earlier variant [Hamilton et al., Nucl. Phys. B 60, 443 (1973)] are related to the standard function. The potential interest of Δℓ for the study of low-energy cross sections and weakly bound states is discussed in the framework of the proton-proton ¹S₀ collision. The resonant state of the proton-proton collision is successfully computed from the extrapolation of Δℓ instead of the standard function. It is shown that interpolating Δℓ can lead to useful extrapolation to negative energies, provided scattering data are known below one nuclear Rydberg energy (12.5 keV for the proton-proton system). This property is due to the connection between Δℓ and the effective-range function by Hamilton et al. that is discussed in detail. Nevertheless, such extrapolations to negative energies should be used with caution because Δℓ is not analytic at zero energy. The expected analytic properties of the main functions are verified in the complex energy plane by graphical color-based representations.
  • ... Today, it is possible to retrieve nearly exact values of such special functions, their derivatives and zeros by entering very simple commands into conventional mathematical and engineering software such as Mathematica, Maple, etc., whose principles are based on such improved approximation techniques [24][25][26]. Here we suggest a very fast and accurate numerical method based on the conventional Newton-Raphson (N-R) method given in [27][28][29] to find the zeros of the Bessel functions of the first two kinds and their derivatives in a desired domain. Our algorithm involves scanning these functions in the given domain with the given number of domain divisions (which also implies the iteration number for scanning the radius domain) and finding their zeros for each division by the numerical N-R method. ...
    ... In numerical analysis, Newton's method (also known as the Newton-Raphson method), named after Isaac Newton and Joseph Raphson, is a method for finding successively better approximations to the roots (or zeroes) of a real-valued function: x_{n+1} = x_n − f(x_n)/f'(x_n). (16) The Newton-Raphson method in one variable is implemented as follows [27][28][29]: ...
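The Newton-Raphson iteration referenced in the snippet above (Eq. (16) of the citing work) can be sketched in a few lines. The function and its derivative here are illustrative stand-ins for demonstration, not the Bessel functions used in that paper:

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    return x

# Illustrative example: the positive root of f(x) = x^2 - 2,
# i.e. sqrt(2), starting from x0 = 1.
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

The scan-then-refine strategy described in the snippet would call such a routine once per domain division, with the division midpoint as `x0`.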
  • ... (3.45) with piecewise-defined polynomial coefficients is considered. Since for polynomials of degree r the (r+1)-th backward difference ∇^{r+1} φ_l vanishes, where ∇ := 1 − δ is the backward-difference operator (Hamming, 1986; Graham et al., 1994), the convolution sum in (3.45) can be converted into the equivalent recursion ...
    ... Repeating this p times successively yields ... (Hamming, 1986; Graham et al., 1994), so that for p = r + 1 the second sum in (B.76) vanishes: ...
    Thesis
    Full-text available
    Algebraic derivative estimators are linear time-invariant filters for approximating numerical derivatives of measured signals in real-time. Since they are robust to measurement noise, algebraic derivative estimators may simplify a wide variety of practical control engineering tasks, as this thesis demonstrates through numerous examples and an extensive experimental case study. The selection of favorable filter parameters is a key challenge in the application of these estimators. To this end, parameter selection criteria are derived based on approximation theory fundamentals and filter performance in the frequency and time domains. As efficient real-time implementation of these methods is of great practical interest, various techniques to reduce estimation delay and computational effort are discussed.
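The property invoked in the snippet above — that the (r+1)-th backward difference of a degree-r polynomial vanishes (Hamming, 1986) — is easy to verify numerically; the sampled quadratic below is an arbitrary example, not one from the thesis:

```python
def backward_diff(seq):
    # One application of the backward-difference operator:
    # (∇φ)_l = φ_l − φ_{l−1}
    return [b - a for a, b in zip(seq, seq[1:])]

# A degree-2 polynomial sampled on an equidistant grid.
phi = [3 * l * l - 2 * l + 1 for l in range(8)]

d = phi
for _ in range(3):   # apply ∇ three times: r = 2, so r + 1 = 3
    d = backward_diff(d)
# d is now identically zero — the third backward differences vanish.
```

This is the fact that lets the convolution sum in the cited Eq. (3.45) collapse into a short recursion.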
  • ... Another question was anonymous, and the last two questions were taken from the previous works, I and II. These questions are also useful to study the Matrix-Variational Method (MVM), in physics [1][2][3][4][5][6][7][8], and better formulate it in a mathematical sense [9][10][11][12]. ...
    ... In this solution class for the STE, we can include the shooting method and matrix diagonalization techniques [10,12]. It is meaningful to do so because the work involved in either method increases linearly with the number of points used, important for the large matrices usually employed in discretizing the STE. ...
    Preprint
    Full-text available
    This work highlights the MVM, in terms of differences with conventional methods. In the MVM, a very sparse matrix is used to represent a complex problem. This is done through the expansion in a few suitable basis functions, depending on only one parameter. The matrix is of the order of 5×5, even 3×3, trivially diagonalized, and can be evaluated even mentally, or with handheld calculators. This is important in scaling laws and parametrization studies, where a large number of attempts have to be made, and provides an intuitive view.
  • ... Bale argues that '[m]odellers need to engage with their beneficiaries from the outset so that models are properly scoped and fit for purpose' ([72], p. 157). Most notably, this is important as models are made for obtaining insight, not for generating numbers [73]. ...
    ... The problem of how results are communicated is a recurring point in the literature. Communication of energy system modelling results fails when recipients only see concrete numbers (e.g. total energy system cost) as an outcome, though models should primarily be seen as a tool for understanding mechanisms and getting insights [70,73]. Strachan et al. [77] proposed approaches to reinvent the modeller/policy interface for overcoming this problem. ...
    Article
    Full-text available
    Background: The research field of energy system analysis is faced with the challenge of increasingly complex systems and their sustainable transition. The challenges are not only on a technical level but also connected to societal aspects. Energy system modelling plays a decisive role in this field, and model properties define how useful it is in regard to the existing challenges. For energy system models, evaluation methods exist, but we argue that many decisions upon properties are rather made on the model generator or framework level. Thus, this paper presents a qualitative approach to evaluate frameworks in a transparent and structured way regarding their suitability to tackle energy system modelling challenges. Methods: Current main challenges and framework properties that potentially contribute to tackle these challenges are derived from a literature review. The resulting contribution matrix and the described application procedure is then applied exemplarily in a case study in which the properties of the Open Energy Modelling Framework are checked for suitability to each challenge. Results: We identified complexity (1), scientific standards (2), utilisation (3), interdisciplinary modelling (4), and uncertainty (5) as the main challenges. We suggest three major property categories of frameworks with regard to their capability to tackle the challenges: open-source philosophy (1), collaborative modelling (2), and structural properties (3). General findings of the detailed mapping of challenges and properties are that an open-source approach is a pre-condition for complying with scientific standards and that approaches to tackle the challenges complexity and uncertainty counteract each other. More research in the field of complexity reduction within energy system models is needed. 
Furthermore, while framework properties can help address problems of result communication and interdisciplinary modelling, an important part can only be addressed by communication and organisational structures, thus on a behavioural and social level. Conclusions: We conclude that the relevance of energy system analysis tools needs to be reviewed critically. Their suitability for tackling the identified challenges deserves to be emphasised. The approach presented here is one contribution to improve current evaluation methods by adding this aspect.
  • ... Their description can be found in many treatises on numerical analysis, for example, in Refs. [19][20][21][22][23]. What is important here is to understand that for the effective use of differentiation in luminescence thermometry, as in any quantitative application, it is imperative to use sufficient smoothing along with differentiation. ...
    ... Of course, many other integration methods are also available, for details see Refs. [19][20][21][22][23]. ...
    Chapter
    This chapter aims to introduce readers to the basic concepts of numerical analysis methods which are often needed for processing luminescence thermometry data. It deals with baseline offset and noise, and the ways in which they can be removed or compensated for when using experimental data. A brief outline on numerical differentiation and integration is given. Numerical quantification of fundamental spectral features is then presented, including peak detection and quantification of peak height, width, and area. A method for numerical resolution enhancement is exemplified for the case of small peaks superimposed on one dominant peak. The chapter also deals with the methods used for the calculation of excited state lifetime from data obtained from either time-resolved or phase-modulated techniques. Finally, the chapter ends by considering the figures of merit of luminescence thermometers.
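The point made in the snippet above — that numerical differentiation must be paired with sufficient smoothing — can be illustrated with a minimal sketch: moving-average smoothing followed by a central-difference derivative. The window length and noise level below are arbitrary choices for the demonstration, not values from the chapter:

```python
import numpy as np

def smoothed_derivative(y, dx, window=11):
    """Moving-average smoothing followed by a central-difference derivative.

    Differentiation amplifies high-frequency noise, so the signal is
    smoothed before the derivative is taken.
    """
    kernel = np.ones(window) / window
    y_smooth = np.convolve(y, kernel, mode="same")
    return np.gradient(y_smooth, dx)

# Noisy samples of sin(x); away from the interval edges the smoothed
# derivative tracks cos(x) closely.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 400)
y = np.sin(x) + 0.01 * rng.standard_normal(x.size)
dy = smoothed_derivative(y, x[1] - x[0])
```

Applying `np.gradient` directly to the raw noisy samples gives a far larger error, which is exactly the chapter's argument for smoothing first.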
  • ... These alternatives and others are considered in [3,5], and treated in MATH 055 in Chapters 10 and 11 [6]. Therefore, the reader is referred to those works, and the exposition will focus on the MVM. ...
    ... This derivation follows standard procedure in [5]. Furthermore, for Eq. ...
    Preprint
    Full-text available
    This finalizes the Mathematical Model using the matrix-variational solution (the 1979-82 method used by Gerck et al. in physics), in support of the Intuitive Model, and with suggestions for teaching physics, including quantum mechanics, motivating that there must be a solution valid for the given null boundary condition at infinity (common in physics).
  • ... This derivation follows standard procedure in [10]. Furthermore, for Eq. ...
    ... By applying the Rayleigh-Ritz stationary condition [9,10], the expression in the MVM thus gives the optimum value of the α-parameter that makes the set used the best possible piecewise approximation, within the given exponential set U, to the eigenfunctions of Eq. (1), the problem we are set to solve. ...
    Preprint
    Full-text available
    This is part one of four, discussing the Matrix Variational Method (MVM), for solving the Sturm-Liouville differential equation. In this first part, we include a preamble, with the Introduction, Description, Intuitive Model, Mathematical Model, Alternatives for Solution, as well as References.
  • ... Such a plethora of algorithms comes with numerical analysis techniques and tools addressing different factors such as convergence guarantee and rate, stability, error bounding, etc. For an introduction to the subject, see [3,11]. The third step down the hierarchy of rigorous computation is providing the necessary data structures for the proper handling, from both the computability and complexity perspectives, of numerical computations. ...
    Article
    Full-text available
    Cyber-physical systems (CPSs) embody the conception as well as the implementation of the integration of state-of-the-art technologies in sensing, communication, computing, and control. Such systems incorporate new trends such as cloud computing, mobile computing, mobile sensing, new modes of communication, wearables, etc. In this article we give an exposition of the architecture of a typical CPS and the prospects of such systems in the development of the modern world. We illustrate the three major challenges faced by a CPS: the need for rigorous numerical computation, the limitation of current wireless communication bandwidth, and the computation/storage limitation imposed by mobility and energy consumption. We address each of these, exposing the current techniques devised to solve them.
  • ... From the Lagrange interpolation formula [34,60] it follows that this formula is given by ...
    Thesis
    Full-text available
    In this thesis, we develop a general framework for local Fourier analysis of multigrid methods that is versatile and well suited for computer implementation. Using this framework we are able to analyze multigrid methods which have not been considered to this point. We analyze a multigrid method for a diffusion problem with jumping coefficients, and we analyze various block smoothers. We show how to create a flexible software for the automation of local Fourier analysis. This flexibility is achieved by choosing approximations to Fourier matrix symbols as primitive components that are then combined into complicated expressions. In this way, many problems can be described and then analyzed by the software.
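The Lagrange interpolation formula cited in the snippet above has a direct, if numerically naive, implementation; the sample points below are arbitrary, chosen only so that the interpolant is easy to check:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x.

    Each term is ys[i] times the basis polynomial L_i(x), which is 1 at
    xs[i] and 0 at every other node.
    """
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Three samples of x^2: the degree-2 interpolant reproduces x^2 exactly.
xs = [0.0, 1.0, 2.0]
ys = [x * x for x in xs]
val = lagrange_eval(xs, ys, 1.5)   # should equal 1.5^2 = 2.25
```

For production use, barycentric forms are preferred for stability, but the direct form above matches the textbook formula the thesis invokes.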
  • ... Obviously, the intersection point of such graphs is a candidate for optimality in a heuristic sense. On the other hand, a formal numerical search for the intersection point of two smoothly varying functions is carried out by one of the known iterative numerical methods for solving systems of nonlinear equations [7]. ...
  • ... Other groups have also worked on pressure-volume relationships, like Salazar and Knowles [10], who used mono-exponential equations. Niewoehner et al. [11] considered the logarithmic relationship, while Hamming's model was based on polynomial equations [12]. Ferreira et al. [13] tried to assess disease severity and prognosis in spontaneously breathing patients with idiopathic pulmonary fibrosis by modeling respiratory pressure-volume curves as exponential equations, and used sigmoidal equations for patients with no idiopathic pulmonary fibrosis symptoms. ...
    Article
    Full-text available
    Applied pressure on the human lung wall has great importance in setting up protective ventilatory strategies; therefore, estimating pressure relationships in terms of specific parameters would provide invaluable information, specifically during mechanical ventilation (MV). A three-dimensional model from a healthy human lung MRI is analyzed by computational fluid dynamics (CFD), and results for pressure are curve-fitted to estimate relationships that associate pressure with breathing time, cross section and generation numbers of intended locations. Among all possible functions, it is observed that exponential and polynomial pressure functions present the most accurate results for normal breathing (NB) and MV, respectively. For validation, pressure-location curves from CFD and results from this study are compared and good correlations are found. Also, estimated pressure values are used to calculate pressure drop and airway resistance to the air induced into the lung bifurcations. It is concluded that the maximum pressure drop appears in generation number 2, that medium-sized airways show higher resistance to air flow, and that resistance decreases as cross-sectional area increases through the model. Results from this study are in good agreement with previous studies and provide potential for further studies on the influence of air pressure on human lung tissue and on reducing lung injuries during MV.
  • ... (see Hamming (1973) and Smyth (1998) for a detailed description of these polynomials). Bierens (1997) and Tomasevic and Stanivuk (2009) argue that it is possible to approximate highly nonlinear trends with rather low degree polynomials. ...
    Article
    Full-text available
    In this paper we have examined the unemployment rate series in Turkey by using long memory models and in particular employing fractionally integrated techniques. Our results suggest that unemployment in Turkey is highly persistent, with orders of integration equal to or higher than 1 in the majority of the cases. This implies lack of mean reversion and persistence of the shocks. We found evidence in favor of mean reversion in the case of female unemployment and this happens for all the groups of non-agricultural, rural, urban and youth unemployment series. The possibility of non-linearities are observed only in the case of female unemployment and the degree of persistence is higher in the cases of female and youth unemployment series. Important policy implications emerge from our empirical results. Thus, for example, positive shocks reducing unemployment will have permanent effects being good for the economy, but negative shocks increasing unemployment will also have permanent effects and strong measures should then be adopted to reduce it. Labor and macroeconomic policies will most likely have long lasting effects on the unemployment rates.
  • ... The ESPRIT algorithm used in this work follows the flowchart diagram shown in Fig. 7 [12], [13], [14]. The mathematical model assumed for the signal can be expressed as [10], [15]: ...
  • ... The least square regression method is generally adopted to fit a straight line or a curve to a set of data points (Hoffman and Frankel, 2001; Hamming, 2012; Stroud and Booth, 2013). A second order polynomial of the form y = a + bx + cx² has been chosen to fit the measured path loss data. ...
    Article
    Full-text available
    Propagation measurements and modeling provide useful information for signal strength prediction and the design of transmitters and receivers for wireless communication systems. In order to deploy efficient wireless communication systems, path loss models are indispensable for effective mobile network planning and optimisation. This paper presents propagation models suitable for path loss prediction of a fourth generation long-term evolution (4G LTE) network in the suburban and urban areas of Lagos, Nigeria. The reference signal received power (RSRP) of a 4G LTE network was measured at an operating frequency of 3.4GHz, and measured data was compared against existing pathloss models. Among the candidate models, the COST 231-Hata and the Ericsson models showed the best performances in the urban and suburban areas with root mean squared errors (RMSEs) of 5.13dB and 7.08dB, respectively. These models were selected and developed using the least square regression algorithm. The developed models showed good prediction results with RMSEs of 6.20dB and 5.90dB in the urban and suburban areas, respectively, and compare favourably with propagation measurement results reported for similar areas. It was found that these models would better characterise radio coverage and mobile network planning, enhancing the quality of mobile services in related areas.
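The second-order least-squares fit described in the snippet above (y = a + bx + cx²) takes a single call in most numerical libraries. The data below are synthetic, generated from known coefficients so the fit can be checked, not the measured path-loss values from the paper:

```python
import numpy as np

# Synthetic data from y = 2 + 0.5*x + 0.1*x^2 (noise-free for the sketch).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = 2.0 + 0.5 * x + 0.1 * x ** 2

# np.polyfit returns coefficients highest degree first: [c, b, a].
c2, c1, c0 = np.polyfit(x, y, 2)
```

With real measurements the residuals would be nonzero, and the RMSE reported in the paper is simply the root mean square of those residuals.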
  • ... Although a very eminent engineer and mathematician has written that "the purpose of computing is insight, not numbers", the reality of engineering is that we need numbers. (Hamming, 1973) The approach discussed in this article proposes to produce numbers that indicate direction, meaning: is the direction going with nature or against nature? Thus, in a process involving mass transfer, is it more important to know, first, exactly how much mass appears at the output, or is it not more important to know in the planning and design phase, before proceeding any further out in the field, what the true mass balance is, i.e., how much it actually cost the environment per unit of input to obtain a certain output? ...
    Article
    Full-text available
    FOR CITATION: El-Atrash, A.A., Salem, H.S., and Isaac, J.E., 2009. Planning towards sustainable land transportation system in a future Palestinian State. Journal of Natural Science and Sustainable Technology, 2 (3):305–324. ABSTRACT: The paper at hand briefly touches the geopolitical status that formed the structure of the current Palestinian land transportation system and discusses the Palestinian transportation planning dilemmas, both presently and in the future, that must be addressed by Palestinian physical planners, as a prerequisite for sound control and management of land use and natural resources' utilization. The paper discusses the unilateral Israeli plans and actions that have been developed in relation to the land transportation system in the Occupied Palestinian Territory (OPT; the West Bank, including East Jerusalem, and the Gaza Strip) during the last 40 years. It also discusses the legitimacy of these plans and actions by the virtue of the International Humanitarian Law and the United Nations' Resolutions. Also, the paper presents International plans and studies dedicated to the creation of a more sustainable land-use policy for a future Palestinian State. Keywords: Geographical contiguity, transport planning dilemmas, bypass roads, terminals, Segregation Wall, dual (Israeli–Palestinian) transportation system, Israeli settlements, safe passages, Gaza Strip, West Bank, East Jerusalem.
  • ... One such case is when the field variable ψ(x, y) is periodic in one or more coordinate directions. Assuming the test function to be periodic in the x-direction, an appropriate choice of the test functions in the x-direction would be [20]: ...
    Conference Paper
    Full-text available
    Much research has been conducted on wear mechanisms and the tribological behavior of materials and components such as gears, bearings, etc. Compliant journal bearings, popularly known as foil bearings, have gained significant attention in recent years because of their unique mode of operation and diversity of applications. These types of bearings have various advantages compared to conventional rigid journal bearings in terms of higher load carrying capacity, lower power loss, better stability and greater endurance. These bearings are self-acting, and can operate with ambient air or any process gas as the lubricating fluid. The need for complex lubrication systems is eliminated, which results in significant weight reduction and lower maintenance. Air as a lubricant is available abundantly and can operate at elevated temperatures, whereas conventional oil-based lubricants fail since their viscosity drops exponentially with the rise in temperature.
  • ... In the proof of Theorem 3.5 we use the following relations [6] ...
    Article
    The problem of construction of optimal quadrature formulas in the sense of Sard in the space L₂^(m)(0, 1) is considered in the paper. The quadrature sum consists of values of the integrand at internal nodes and values of the first, third and fifth derivatives of the integrand at the end points of the integration interval. The coefficients of optimal quadrature formulas are found and the norm of the optimal error functional is calculated for an arbitrary natural number N and for any m ≥ 6 using the Sobolev method. It is based on the discrete analogue of the differential operator d^(2m)/dx^(2m). In particular, for m = 6, 7 the optimality of the classical Euler-Maclaurin quadrature formula is obtained. Starting from m = 8, new optimal quadrature formulas are obtained.
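The flavour of such end-point-derivative quadrature formulas can be seen in the classical Euler-Maclaurin correction mentioned in the abstract: the trapezoid sum plus a term in the first derivatives at the interval ends. The sketch below uses only that first correction term, and the integrand is an arbitrary test case, not one from the paper:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def euler_maclaurin(f, df, a, b, n):
    """Trapezoid rule plus the first Euler-Maclaurin end-point correction,
    -h^2/12 * (f'(b) - f'(a)), which uses derivatives at the end points."""
    h = (b - a) / n
    return trapezoid(f, a, b, n) - h * h / 12.0 * (df(b) - df(a))

exact = 1.0 - math.cos(1.0)                       # ∫₀¹ sin(x) dx
plain = trapezoid(math.sin, 0.0, 1.0, 16)
corr = euler_maclaurin(math.sin, math.cos, 0.0, 1.0, 16)
```

The corrected value is accurate to roughly h⁴ rather than h², which is why the end-point derivative terms appear in the optimal formulas the paper studies.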
  • ... The fitting process is based on a non-linear minimization of the total sum of absolute residues. Unlike the traditional least squares method, minimizing the absolute residues reduces the impact of possible outliers, which least squares amplifies by squaring the errors (Hamming 1973). The process considers the spatial position of field measurements and relevant geometries in order to find the best-fit model parameters. ...
    Conference Paper
    A simple engineering model based on a near-field energy distribution approach is proposed to account for the potential damage in wall control blasting. The critical damage limit is assumed to be the strain energy associated with the internal elastic deformation energy of the material. Based on this criterion, a relative potential damage index is established to assess blast damage in rock masses. In addition, under some further assumptions regarding the phenomena, the proposed model relates important parameters of the problem, such as impedances, charge geometry, explosive energy and elastic properties of the material, to estimate the peak particle velocities (PPV) around the blasthole, which are easily measurable in near-field vibration campaigns, in order to fit the attenuation factor.
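The robustness argument in the snippet above — absolute residues down-weight outliers relative to least squares — reduces, for the simplest possible model (a constant), to the familiar median-versus-mean contrast. The data below are a made-up illustration:

```python
# For a constant model y ≈ c, least squares gives the mean (argmin of
# sum (y - c)^2), while minimizing the sum of absolute residues gives
# the median (argmin of sum |y - c|). One gross outlier drags the mean
# far more than the median.
data = [1.0, 1.1, 0.9, 1.0, 10.0]          # one gross outlier

c_l2 = sum(data) / len(data)               # least-squares fit: the mean
c_l1 = sorted(data)[len(data) // 2]        # absolute-residue fit: the median
```

Here `c_l2` is pulled to 2.8 by the outlier, while `c_l1` stays at 1.0 — the same effect, in miniature, that motivates the absolute-residue fitting in the cited model.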
  • Preprint
    The conventional mathematical methods are based on characteristic scales, while urban form has no characteristic scale in many aspects. Urban area is a scale-dependent measure, which indicates the scale-free distribution of urban patterns. In this case, urban description based on characteristic scales should be replaced by urban characterization based on scaling. Fractal geometry is one of the most powerful tools for the scaling analysis of cities; thus the concept of fractal cities emerged. However, how to understand city fractals is still a pending question. By means of logical deduction and ideas from fractal theory, this paper is devoted to discussing fractals and fractal dimensions of urban form. The main points of this work are as follows. First, urban form can be treated as pre-fractals rather than real fractals, and fractal properties of cities are only valid within certain scaling ranges. Second, the topological dimension of city fractals based on urban area is 0; thus the minimum fractal dimension value of fractal cities is equal to or greater than 0. Third, the fractal dimension of urban form is used to substitute for urban area, and it is better to define city fractals in a 2-dimensional embedding space; thus the maximum fractal dimension value of urban form is 2. A conclusion can be reached that urban form can be treated as fractals within certain ranges of scales and that fractal geometry can be applied to the spatial analysis of the scale-free aspects of urban morphology. Based on fractal dimension, topological dimension, and embedding space dimension, a set of fractal indexes can be constructed to characterize urban form and growth.
  • Article
    An algorithm is presented for the CAD-free conversion of linear unstructured meshes into curved high-order meshes, which are necessary for high-order flow simulations. The algorithm operates via three steps: (1) autonomous detection of feature curves along the mesh surface, (2) reconstruction of the surface curvature from the combination of surface node positions and feature curve positions, and (3) alignment of the mesh interior to the newly curved surface. The algorithm is implemented in our freely available cross-platform graphical software program meshCurve, which transforms existing linear meshes into high-order curved meshes.
  • The article presents the Hybrid Real-Time Dispatcher Training Simulator (HRTDTS). The main advantages of this simulator are the adequate simulation of a single spectrum of processes in the object for training – any real electric power system (EPS) – as well as the real-time control of the equipment circuit-mode states in modes of EPS operation. This is achieved by developing the HRTDTS within the concept of hybrid simulation, which allows it to surpass the modern widely-used simulators based purely on numerical simulation, which do not comprehensively reproduce processes in the EPS and limit the training of certain dispatcher skills. The dispatcher simulator structure and the developed specialized HRTDTS software are demonstrated. The developed dynamic dashboards for monitoring and operating, implementing the circuit-mode state visualization of the equipment or the EPS districts that correspond to specific objects for training, are presented. The special panels that display extended information about the processes in the EPS are also shown in the article. For HRTDTS validation, a comparison of simulation results with data obtained via RTDS was carried out. A practical emergency scenario for the training of dispatching personnel, including the assignment of an emergency situation and the actions of dispatchers to eliminate it, was created for testing and demonstration of the HRTDTS capabilities.
  • Article
    Full-text available
    By applying the Sherman–Morrison–Woodbury (SMW) formula and a discrete cosine transformation matrix, De Jong and Sakarya [De Jong, R. M., and N. Sakarya. 2016. “The Econometrics of the Hodrick–Prescott Filter.” Review of Economics and Statistics 98 (2): 310–317] recently derived an explicit formula for the smoother weights of the Hodrick–Prescott filter. More recently, by applying the SMW formula and the spectral decomposition of a symmetric tridiagonal Toeplitz matrix, Cornea-Madeira [Cornea-Madeira, A. 2017. “The Explicit Formula for the Hodrick–Prescott Filter in Finite Sample.” Review of Economics and Statistics 99: 314–318] provided a simpler formula. This paper provides an alternative, still simpler formula and explains why our approach leads to a simpler result.
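The papers above derive closed-form smoother weights; as a point of reference, the Hodrick–Prescott trend itself can always be computed directly from the penalized least-squares normal equations (I + λD′D)τ = y, where D is the second-difference operator. A minimal sketch (function and parameter names are ours, not from the papers):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott trend via the normal equations
    (I + lam * D'D) tau = y, with D the (n-2) x n second-difference
    operator.  This is the textbook formulation, not the explicit
    smoother-weight formulas derived in the cited papers."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]   # second difference at row i
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# Usage: split a random walk into trend and cycle
y = np.cumsum(np.random.default_rng(0).normal(size=200))
trend = hp_filter(y)
cycle = y - trend
```

A handy sanity check: for an exactly linear series the second differences vanish, so the filter returns the series unchanged.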
  • Article
    Full-text available
    Technologies for the analysis of time series with gaps are considered. Algorithms for signal extraction (purification) and for evaluating signal characteristics, such as rhythmic components, are discussed for series with gaps. Examples are given from the analysis of data obtained during long-term observations at the Garm geophysical test site and in other regions. The technical solutions used in the WinABD software to arrange the operation of the relevant algorithms most efficiently in the presence of observational defects are considered.
  • Article
    In this study, to investigate the performance characteristics of a vapor injection refrigeration system with an economizer at an intermediate pressure, the system was analyzed under various experimental conditions, and optimum design data for the vapor injection refrigeration system with an economizer were obtained. The findings can be summarized as follows. The mass flow rate through the compressor increases with intermediate pressure, and the compression power input showed an increasing trend under all test conditions. The evaporation capacity first increased and then decreased with intermediate pressure, reaching its maximum at a particular intermediate pressure: the increased mass flow rate of the by-passed refrigerant enhanced the evaporation capacity in the low intermediate-pressure range, but as the intermediate pressure kept rising, the increased saturation temperature limited the subcooling of the liquid refrigerant after the economizer and degraded the evaporation capacity. The coefficient of performance (COP) likewise increased and then decreased with intermediate pressure under all experimental conditions, so there was an optimum intermediate pressure for maximum COP under each condition. In this study the optimum intermediate pressure was found at −99.08 kPa, the theoretical standard intermediate pressure under all test conditions.
  • Article
    This study examines inflation over one century of data for 29 countries using fractional integration, incorporating nonlinearities to account for structural breaks and asymmetry in the inflation process. The results suggest that, while there is evidence of long-memory behavior in the inflation rates of 17 countries, none of the countries other than Russia shows evidence of a unit root. This implies that the monetary authorities in these countries can play a role in controlling inflation, though the extent of intervention required will vary, being strongest in the case of Russia.
  • Article
    The “real time” formulation of time-dependent density functional theory (TDDFT) involves integration of the time-dependent Kohn-Sham (TDKS) equation in order to describe the time evolution of the electron density following a perturbation. This approach, which is complementary to the more traditional linear-response formulation of TDDFT, is more efficient for computation of broad-band spectra (including core-excited states) and for systems where the density of states is large. Integration of the TDKS equation is complicated by the time-dependent nature of the effective Hamiltonian, and we introduce several predictor/corrector algorithms to propagate the density matrix, one of which can be viewed as a self-consistent extension of the widely used modified-midpoint algorithm. The predictor/corrector algorithms facilitate larger time steps and are shown to be more efficient despite requiring more than one Fock build per time step, and furthermore can be used to detect a divergent simulation on-the-fly, which can then be halted or else the time step modified.
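The self-consistent midpoint idea described above can be illustrated on a toy two-level system: predict the midpoint Hamiltonian, propagate, rebuild the Hamiltonian from the midpoint density, and iterate to self-consistency. Everything below (names, the model Hamiltonian, the density-dependent term standing in for a Fock build) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def expm_hermitian(H, dt):
    """exp(-i H dt) for Hermitian H via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return (v * np.exp(-1j * w * dt)) @ v.conj().T

def step_midpoint_sc(P, H_of, t, dt, tol=1e-10, max_iter=50):
    """One self-consistent modified-midpoint step of i dP/dt = [H, P].
    H_of(t, P) builds a toy effective Hamiltonian (a real TDKS code
    would rebuild the Fock matrix here).  Predictor: H at the midpoint
    time with the current density; corrector: rebuild H from the
    half-step density until the midpoint Hamiltonian stops changing."""
    H_mid = H_of(t + dt / 2, P)                 # predictor guess
    for _ in range(max_iter):
        U = expm_hermitian(H_mid, dt)
        P_new = U @ P @ U.conj().T              # unitary propagation
        P_half = 0.5 * (P + P_new)              # midpoint density estimate
        H_next = H_of(t + dt / 2, P_half)
        if np.linalg.norm(H_next - H_mid) < tol:
            break
        H_mid = H_next
    return P_new

# Toy usage: driven two-level system with a density-dependent term
H0 = np.diag([0.0, 1.0]).astype(complex)
V = np.array([[0, 1], [1, 0]], dtype=complex)
H_of = lambda t, P: H0 + np.sin(t) * V + 0.1 * np.real(np.trace(P @ V)) * V
P = np.array([[1, 0], [0, 0]], dtype=complex)   # start in the ground state
P = step_midpoint_sc(P, H_of, 0.0, 0.05)
```

Because each step is a unitary similarity transform, the trace and Hermiticity of the density matrix are preserved to machine precision, which makes a convenient on-the-fly divergence check.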
  • Article
    Given the constantly increasing complexity of electric power systems (EPSs), the challenge of ensuring the adequacy of relay protection (RP) device operation becomes ever more urgent. As a solution, the authors propose using detailed mathematical models of the «instrument transformer (IT)—RP» combination together with modern EPS simulators. Adequate simulation of the IT, in particular of the magnetic-core magnetization process, is essential, because the IT largely determines the shape of the signal the RP responds to and thereby affects its operation. In practice, however, simplified models are currently used, owing to the lack of an accurate mathematical description of the IT core magnetization characteristics; such models do not reflect all the processes in the magnetic core. The aim of this work is to develop a hysteresis mathematical model that reproduces the magnetization processes of the transformer core with high accuracy. The main research method is mathematical modeling of the magnetization processes of ferromagnetic materials in the MathCAD software. The article presents the main principles for developing a mathematical model with magnetic hysteresis memory based on the Preisach theory, one which reproduces with high accuracy both the major and minor hysteresis loops.
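The essence of a Preisach-type model with hysteresis memory is a weighted superposition of two-state relay "hysterons", each remembering whether the input last crossed its upper or lower threshold. A minimal classical sketch — the discretization, thresholds, and weights below are our own toy choices, not the paper's calibrated model:

```python
import numpy as np

def preisach(inputs, alphas, betas, weights):
    """Scalar Preisach model.  Each hysteron (alpha, beta), alpha >= beta,
    switches to +1 when the input rises to alpha, to -1 when it falls to
    beta, and otherwise keeps its previous state (the memory).  The
    output is the weighted sum of all hysteron states."""
    state = -np.ones(len(alphas))               # start in the "down" state
    outputs = []
    for u in inputs:
        state = np.where(u >= alphas, 1.0,
                         np.where(u <= betas, -1.0, state))
        outputs.append(float(np.dot(weights, state)))
    return outputs

# Uniform grid of hysterons on the Preisach half-plane alpha >= beta
levels = np.linspace(-1.0, 1.0, 5)
pairs = [(a, b) for a in levels for b in levels if a >= b]
alphas = np.array([p[0] for p in pairs])
betas = np.array([p[1] for p in pairs])
weights = np.full(len(pairs), 1.0 / len(pairs))

# Drive the model up and back down: the two branches at u = 0 differ,
# which is exactly the hysteresis loop.
up = np.linspace(-1.0, 1.0, 21)
loop = preisach(np.concatenate([up, up[::-1]]), alphas, betas, weights)
```

Because each hysteron keeps its state between threshold crossings, the model reproduces minor loops as well: any reversal of the input inside the major loop traces a distinct inner branch.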
  • Article
    We consider two important features of the historical US price data (1774–2015), namely the data’s persistence and cyclical structure. We first consider the persistence of the series and focus on standard long-memory models that incorporate a peak at the zero frequency. We examine different models with respect to the deterministic terms, including nonlinear deterministic trends of the Chebyshev form. Then, we investigate a more general model that includes both persistence and cyclicality of the series and, thus, includes two fractional integration parameters, one at the zero (long-run) frequency and the other at the nonzero (cyclical) frequency. We model the cyclical structure as a Gegenbauer process. This specification outperforms the standard long-memory specifications. We find that the order of integration at the zero frequency is about 0.5, and the one at the cyclical frequency is about 0.2 with cycles repeating approximately every 6 years, producing mean-reverting long-memory effects at both the zero and cyclical frequencies. Fitting the values to this model, however, we discover the presence of a break that, according to the methods employed, takes place at around 1940–1941. The results indicate the prevalence of the long-run or zero component with a much higher degree of persistence during the second post-1940–1941 subsample, suggesting important implications for monetary policy.
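The fractional integration machinery used in studies like the one above rests on the binomial expansion of the fractional difference operator (1 − L)^d. A standard sketch of computing the weights and applying the filter — this is textbook fractional differencing, not the authors' estimation procedure:

```python
import numpy as np

def fracdiff_weights(d, n):
    """Coefficients of the expansion of (1 - L)^d:
    w_0 = 1,  w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def fracdiff(x, d):
    """Apply (1 - L)^d to a series, truncating at the sample start."""
    x = np.asarray(x, dtype=float)
    w = fracdiff_weights(d, len(x))
    return np.array([np.dot(w[:t + 1], x[t::-1]) for t in range(len(x))])
```

For d = 1 the weights collapse to (1, −1, 0, 0, …) and the filter reduces to the ordinary first difference; for 0 < d < 1 the weights decay hyperbolically, which is the source of the long-memory behavior the paper estimates.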
  • Article
    The quality of query execution plans in database systems determines how fast a query can be executed. It has been shown that conventional query optimization still selects sub-optimal or even bad execution plans, due to errors in cardinality estimation. Although cardinality estimation errors are an evident problem, they are generally not considered in the selection of query execution plans. In this paper, we present three novel metrics for the robustness of relational query execution plans with respect to cardinality estimation errors. We also present a novel plan selection strategy that takes both estimated cost and estimated robustness into account when choosing a plan for execution. Finally, we share the results of our experimental comparison between robust and conventional plan selection on real-world and synthetic benchmarks, showing a speedup of up to a factor of 3.49.
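The idea of blending estimated cost with robustness to cardinality errors can be sketched abstractly: score each candidate plan by its cost at the estimated cardinalities plus a penalty for its worst cost when every estimate is off by a bounded factor. The scoring rule and all names below are our illustration, not the paper's metrics:

```python
import itertools

def select_plan(plans, cost_fn, cardinalities, error_factor=2.0, alpha=0.5):
    """Toy robust plan selection.  cost_fn(plan, cards) returns the
    estimated cost of a plan for given cardinalities.  Each cardinality
    may be wrong by up to `error_factor` in either direction; alpha
    blends nominal cost (alpha = 0) with worst-case cost (alpha = 1)."""
    def worst_cost(plan):
        worst = 0.0
        for scales in itertools.product(
                [1.0 / error_factor, 1.0, error_factor],
                repeat=len(cardinalities)):
            perturbed = [c * s for c, s in zip(cardinalities, scales)]
            worst = max(worst, cost_fn(plan, perturbed))
        return worst
    return min(plans,
               key=lambda p: (1 - alpha) * cost_fn(p, cardinalities)
                             + alpha * worst_cost(p))

# Usage: a join whose nested-loop plan is cheapest at the estimate but
# far more sensitive to estimation errors than the hash-join plan.
cost = lambda p, c: c[0] * c[1] if p == "nested" else 5 * (c[0] + c[1])
best_nominal = select_plan(["nested", "hash"], cost, [10, 9], alpha=0.0)
best_robust = select_plan(["nested", "hash"], cost, [10, 9], alpha=0.5)
```

With alpha = 0 the strategy degenerates to conventional cost-based selection; raising alpha flips the choice toward the plan whose cost grows more slowly when the estimates are wrong — the behavior the paper's robust selection strategy formalizes.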
  • Chapter
    The paper presents a solution for coupling two shafts with non-coplanar axes using an intermediary element. The intermediary part is connected with the ground via a spherical pair while, to the shafts, two point-surface contacts are used. For the proposed model, the positional analysis, a CAD model and numerical validation of the solution are presented. The advantage of the tetrapod coupling consists in constructive simplicity and controlled reliability by the possibility of replacing the higher pairs of the intermediary element with linkages with lower pairs. © 2018, Springer International Publishing AG, part of Springer Nature.
  • Article
    This paper proposes a representation of the family of Minkowski distances using fuzzy sets. The proposed method helps to represent human-like perceptions about distances, which can support decision making in the presence of non-probabilistic uncertainties such as imprecision and ambiguity. In this way, we define a fuzzy set over the concept of closeness of two elements/sets as measured by a Minkowski metric. Two application examples are presented, solved, and compared to some classical approaches. Finally, some concluding remarks are provided and some interpretation issues are explained.
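One way to read the proposal: compute a crisp Minkowski distance, then pass it through a decreasing membership function so that "x is close to y" becomes a degree in [0, 1]. The membership shape and parameters below are our illustrative choices, not the paper's definitions:

```python
import numpy as np

def minkowski(x, y, p):
    """Minkowski distance of order p (p = 1: Manhattan, p = 2: Euclidean)."""
    diff = np.abs(np.asarray(x, float) - np.asarray(y, float))
    return float(np.sum(diff ** p) ** (1.0 / p))

def closeness(x, y, p=2.0, spread=1.0, m=1.0):
    """Fuzzy 'close to' membership: mu(d) = exp(-(d / spread)**(2*m)),
    a Gaussian-like, monotonically decreasing map of the crisp
    distance d.  `spread` sets the scale at which closeness fades and
    `m` the steepness; both are illustrative parameters."""
    d = minkowski(x, y, p)
    return float(np.exp(-(d / spread) ** (2 * m)))
```

An element is fully close to itself (membership 1), and the membership decays smoothly with distance, so imprecise judgments like "roughly the same location" get a graded rather than all-or-nothing answer.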
  • Article
    Learning methods in engineering education should evolve to take advantage of technological progress. Despite the sampling theorem being one of the most important theorems in engineering, at the undergraduate level its foundations are usually oversimplified, whereas at the graduate level its proof is frequently presented within a rigorous mathematical framework. This paper presents an interactive approach that illustrates an amenable proof of the sampling theorem through an example. Apart from its practical value as a learning tool, the proof captures some mathematical subtleties that can help in understanding more advanced concepts. The goal of this work is to show that learning the sampling theorem through an interactive example of its mathematical proof can be an enjoyable and insightful experience.
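The reconstruction half of the sampling theorem is easy to demonstrate numerically with Whittaker–Shannon (sinc) interpolation. A minimal sketch, assuming the sampling rate exceeds twice the signal bandwidth (not the paper's interactive material):

```python
import numpy as np

def sinc_reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation: rebuild x(t) from samples x[n]
    taken at rate fs as x(t) = sum_n x[n] * sinc(fs*t - n), where
    np.sinc(x) = sin(pi x)/(pi x).  Exact for band-limited signals and
    an infinite sum; here the sum is truncated to the available samples."""
    samples = np.asarray(samples, dtype=float)
    n = np.arange(len(samples))
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return np.array([np.dot(samples, np.sinc(fs * ti - n)) for ti in t])

# Usage: a 3 Hz tone sampled at 50 Hz (well above the 6 Hz Nyquist rate)
fs = 50.0
n = np.arange(1000)
samples = np.sin(2 * np.pi * 3.0 * n / fs)
x_mid = sinc_reconstruct(samples, fs, 10.013)[0]   # off-grid instant
```

Evaluating at a sample instant returns the sample exactly (the sinc kernel vanishes at all other integers); evaluating between samples recovers the continuous signal up to a small truncation error from the finite sum, which is the subtlety that separates the idealized theorem from practice.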
  • Article
    Cr(III) is an essential micronutrient for the proper functioning of human beings, while Cr(VI) is a carcinogenic chemical that has been listed among the hazardous air pollutants defined by the US Environmental Protection Agency (US EPA) in 2004. Accurate measurements of the atmospheric hexavalent chromium concentration are required to evaluate its toxicity. In the present study, a simulation tool written in MATLAB was developed to evaluate the soluble and insoluble chromium species formed during Cr(VI) field sampling (500 ml, 0.12 M HCO3⁻ buffer, pH = 9, 24 h, cellulose filter), to assist in better quantifying the hexavalent chromium concentration. Cr(VI) was found to be dominant in soluble form as CrO4²⁻ and in precipitated form as (NH4)2CrO4, CaCrO3, BaCrO4, and PbCrO4 on the cellulose filter at pH = 9. Furthermore, reduction of Cr(VI) to Cr(III) was greater than oxidation of Cr(III) to Cr(VI). Basic pH solutions retard the conversion of Cr(VI) in the presence of Fe(II) and As(III) and facilitate the precipitation of Cr(III). The presence of NaHCO3 as a buffer on the cellulose filters, and also in the filter extraction solution, may add to the precipitation of Cr(VI) as NaCrO4. This study provides new insights for improving cellulose sampling filters and filter extraction solutions, either to prevent Cr(VI) precipitation during the wet analysis of Cr(VI) or to improve analysis methods for quantifying total Cr(VI) (soluble and insoluble).
  • Chapter
    The melting stages of Chap. 3 describe the behavior of the meltwater during different time periods of the defrost process: absorption and permeation, accumulation, and draining. Meltwater accumulation reduces the strength of ice adhesion and promotes the possibility of slumping. The rate of meltwater draining depends on the boundary conditions at the interfaces of the liquid layer. A large draining rate reduces defrost time and improves defrost efficiency. Solution methods of the governing transport equations of these stages of defrost are described in this chapter.
  • Preprint
    Full-text available
    Correction of the closed orbit in the cooler-synchrotron COSY at FZ-Jülich.
  • Article
    Full-text available
    Whereas for sliding friction the Amontons–Coulomb law clearly states the proportionality between the friction force and the normal force, the dependence of the rolling friction torque on the normal force is assumed linear in some references and nonlinear in others. The coefficient of rolling friction can be obtained using various pendula, based on the assumption of a linear dependence between the rolling friction torque and the normal force. The theoretical models lead to a linear decrease of angular amplitude (confirmed experimentally for the dry sliding friction case), while the experimentally observed damping of amplitude is nonlinear for all of the employed pendula. The basis for finding the rolling friction coefficient is the equality between the theoretical and experimental slopes of the decreasing angular amplitude of the pendulum. Due to this nonlinearity, the value of the coefficient of rolling friction depends on the launching amplitude of the pendulum. In the present paper the equation of motion of the evolvent pendulum is obtained on the hypothesis that the dependence of the resistant torque on the normal force is a power law. Experimental tests performed with the new heavy pendulum confirmed a nonlinear, exponential attenuation of the angular amplitude. The aim was thus to find the coefficient of rolling friction for the least favourable situation, which occurs when the launching amplitude of the pendulum has its highest possible value, i.e. when the ratio between the tangential force and the normal force equals the coefficient of sliding friction. A technique is proposed for obtaining this critical value of the coefficient of rolling friction.
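The qualitative setup — a pendulum damped by a resistant torque that is a power law of the normal force — can be simulated in a few lines and exhibits the nonlinear amplitude decay the abstract describes. The model and every parameter below are a toy illustration (unit mass, unit length, semi-implicit Euler integration), not the paper's evolvent-pendulum equations:

```python
import numpy as np

def pendulum_decay(theta0, steps=20000, dt=1e-3, g=9.81, L=1.0,
                   s=0.005, p=1.0):
    """Pendulum with a resistant torque M_r = s * N**p opposing the
    motion, where N = g*cos(theta) stands in for the normal force per
    unit mass.  Semi-implicit Euler; returns the sequence of angular
    amplitudes recorded at the turning points (omega sign changes)."""
    theta, omega = float(theta0), 0.0
    amps, prev_omega = [], 0.0
    for _ in range(steps):
        N = g * np.cos(theta)
        acc = -(g / L) * np.sin(theta) - np.sign(omega) * s * abs(N) ** p
        omega += acc * dt          # update velocity first (semi-implicit)
        theta += omega * dt
        if prev_omega * omega < 0:         # turning point reached
            amps.append(abs(theta))
        prev_omega = omega
    return amps

# Usage: launch at 0.5 rad and watch the amplitude envelope shrink
amps = pendulum_decay(0.5)
```

A constant resistant torque (p = 0 with N absorbed into s) gives the classical linear amplitude decay; making the torque depend on the normal force, which itself varies with the swing angle, is what bends the envelope away from a straight line.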
  • Article
    In the field of military land vehicles, the random vibration processes generated by all-terrain wheeled vehicles in motion are not classical stationary, Gaussian stochastic processes. The non-Gaussian nature of these processes shows up in particular as very significant kurtosis (flattening) levels that can affect the fatigue design of mechanical structures, conventionally carried out by spectral approaches based essentially on the spectral moments of the stress processes. For these technical reasons, techniques for characterizing the random excitation processes generated by this type of driving situation need to be developed, using innovative characterization methods based no longer on deterministic spectral and/or temporal approaches but on temporal approaches of a stochastic nature. Indeed, to characterize the fatigue damage produced by non-stationary and non-Gaussian random processes, the author shows that it is now necessary to combine the time-counting techniques used in vibration fatigue with the sampling statistics used in estimation theory. This approach makes it possible to extrapolate the level of damage to structures over time from a statistical perspective, where in practice this extrapolation phase has been carried out deterministically. This technique, referred to as the disjoint block method (BDM), has been applied successfully to component specification from a reliability standpoint since 2010, and was recently integrated into AFNOR standards.
  • Article
    The Ryukyu Archipelago is a chain of islands situated on the Ryukyu Arc that consists of six major islands and 55 smaller islands of various size. Compared to other parts of Japan, the documented seismic history of Ryukyu Archipelago is not well known. However, massive tsunami boulders can be found in the area, which definitely imply that mega-earthquakes have occurred in the vicinity of the Ryukyu Archipelago. Here we describe a number of massive tsunami boulders and our estimation of the magnitude of the respective potential causative mega-earthquake. We also compared our results with those from other independent studies. Taken together, all results lead us to conclude that a mega-earthquake with a magnitude of > 9 is likely to occur in the vicinity of Ryukyu Archipelago. The moment magnitudes of mega-earthquakes estimated from the tsunami boulders range between 8.6 and 9.7. These results are consistent with estimations from the segmentation approach and from the Japan Nuclear Regulation Authority. Therefore, the disaster-prevention measures for Ryukyu Archipelago should take the findings from this study into account and urgently begin to implement counter-measures against the possible damaging effects of mega-earthquakes.
  • Article
    Full-text available
    This paper investigates the behavior of the inflation rate in Iran for the time period 1992–2017 using fractional integration. The results indicate an extremely large degree of persistence in the series, with an order of integration of about 2. The consequences of such a degree of dependence are examined in the paper along with some suggestions to reduce it in the future.