

      Distributed and recursive blind channel identification to sensor networks

Rui LIU†, Han-Fu CHEN

Key Laboratory of Systems and Control, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China

In this paper, distributed and recursive blind channel identification algorithms are proposed for single-input multi-output (SIMO) systems over sensor networks (both time-invariant and time-varying networks). At any time, each agent updates its estimate using the local observation and the information derived from its neighboring agents. The algorithms are based on truncated stochastic approximation, and their convergence is proved. A simulation example is presented, and the computation results are shown to be consistent with the theoretical analysis.

Blind channel identification, distributed and recursive algorithm, truncated stochastic approximation, sensor networks

      1 Introduction

Because of its potential applications in wireless communication and other areas, blind channel identification has attracted great research interest in signal processing and communication (see, e.g., [1, 2]), and many estimation algorithms have been proposed (see, e.g., [3–11]). Most results published so far are concerned with centralized algorithms, i.e., the estimation of channel coefficients is carried out after the output data of all channels have been collected. In contrast to this, motivated by the emergence of large-scale and inexpensive sensor networks, distributed algorithms are proposed in the recent papers [10, 11], where the estimate of the channel coefficients is updated by using the local observation and the information derived from neighboring agents. As discussed in [12], compared with the centralized approach, the distributed approach is more robust, better at protecting privacy, easier to extend, less complicated in computation, and less expensive in communication.

For system parameter estimation, there are usually two approaches: block algorithms and on-line recursive algorithms. This is also the case for blind channel identification. For the block algorithms (see, e.g., [3, 5, 6]), a block of data samples is collected first and then processed together to obtain the estimate. It is clear that block algorithms require large storage space and endure a heavy instantaneous computational burden. Several on-line recursive algorithms have been proposed, such as the adaptive least-squares smoothing algorithm [7] and the stochastic-approximation-based algorithms [8, 9].

In this paper, we develop distributed and recursive blind identification algorithms for SIMO systems over sensor networks, for both time-invariant and time-varying networks. Compared with the existing works, the proposed approach can handle more complicated situations, for example, not only noise-free but also noisy observations; not only deterministic but also stochastic input signals; and not only time-invariant but also randomly time-varying sensor networks. The algorithms proposed in the paper are recursive, and they have an obvious advantage over the block algorithms, because they continuously improve the estimate while receiving new signals and require less computation.

The main idea is to design a global function, which is a sum of local functions and whose root is the parameter vector to be estimated. Consequently, parameter estimation for blind identification is transformed to a distributed stochastic approximation problem. To achieve this, a diagonal matrix is designed for the sensor network, and the local function for each sensor is obtained with the help of the designed matrix. Each sensor estimates the root using observations of the local functions (with or without noise) and information derived from its neighbors.

The proposed algorithms are based on the distributed truncated stochastic approximation algorithm, and convergence of the algorithms is proved for the following four cases: a) a time-invariant network with a finite number of noise-free observations; b) a time-invariant network with a finite number of noisy observations; c) a time-invariant network with an infinite number of noisy observations; and d) a time-varying network with an infinite number of noisy observations. A simulation example is presented, and it is shown that the computation results are consistent with the theoretical analysis.

This paper is organized as follows. Section 2 presents the blind channel identification problem of sensor networks. In Section 3, the problem is transformed to root seeking for an unknown function. Identifiability of parameters is discussed in Section 4. The estimation algorithms and their convergence are presented in Section 5. A simulation example is presented in Section 6 and some concluding remarks are given in Section 7.

The following notations will be used throughout this paper. A_{m×n} represents an m × n-dimensional matrix with its elements denoted as a_{ij} (i = 1, 2, ..., m; j = 1, 2, ..., n). Particularly, I_n represents the n × n-dimensional identity matrix, 1_{n×1} represents the n-dimensional vector with all entries equal to 1, and 0_{n×1} represents the n-dimensional vector with all entries equal to 0. R^{m×n} denotes the set of m × n-dimensional real matrices. ⊗ denotes the Kronecker product [13]. For a given vector or matrix x, x^T denotes its transpose and ‖x‖ denotes its Euclidean norm. E(x) stands for the expectation of a random variable x, and tr(A) is the trace of a square matrix A.

      2 Problem formulation

Consider a system consisting of N finite impulse response (FIR) channels with L being the maximum order of the channels. Let s_k, k = 0, 1, 2, ..., M, be the one-dimensional input signal, and x_k^{(i)}, k = L, L+1, ..., M, be the output signal, where M is the number of samples and may not be fixed; the superscript (i) denotes the output signal of the ith channel, and the subscript k is the time index. Then

x_k^{(i)} = Σ_{l=0}^{L} h_l^{(i)} s_{k−l},  i = 1, 2, ..., N,  k = L, L+1, ..., M,   (1)

where h_l^{(i)}, l = 0, 1, ..., L, are unknown channel coefficients. Denote by

h^{(i)} = [h_0^{(i)}, h_1^{(i)}, ..., h_L^{(i)}]^T

the coefficients of the ith channel, i = 1, 2, ..., N, and by a long vector

h = [h^{(1)T}, h^{(2)T}, ..., h^{(N)T}]^T

the coefficients of the whole system.
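For concreteness, the following is a small sketch of how data from such a SIMO FIR system can be generated; the channel values, sizes, and variable names below are arbitrary toy choices, not taken from the paper.

```python
import numpy as np

# Toy SIMO FIR system: N channels, maximum order L.
# Model (1): x_k^{(i)} = sum_{l=0}^{L} h_l^{(i)} s_{k-l}.
N, L, M = 3, 2, 1000
rng = np.random.default_rng(0)

s = rng.standard_normal(M + 1)        # input s_0, ..., s_M
H = rng.standard_normal((N, L + 1))   # row i is h^{(i)} = [h_0^{(i)}, ..., h_L^{(i)}]

x = np.zeros((N, M + 1))              # noise-free outputs, defined for k = L, ..., M
for i in range(N):
    for k in range(L, M + 1):
        # s[k-L:k+1][::-1] = [s_k, s_{k-1}, ..., s_{k-L}]
        x[i, k] = H[i] @ s[k - L:k + 1][::-1]
```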

Assume the observation of the output is corrupted by noise, and the observation at time k is

y_k^{(i)} = x_k^{(i)} + η_k^{(i)},  i = 1, 2, ..., N,  k = L, L+1, ..., M,   (2)

where η_k^{(i)} is the observation noise. Denote by h⋆ the true value of the coefficient vector h.

The problem of distributed blind channel identification is to estimate h⋆ at any i ∈ V = {1, 2, ..., N} on the basis of its local observation and the information obtained from its neighbors. Let us denote by h_k^{(i)} the estimate for h⋆ given by sensor i at time k. It is desired that all estimates h_k^{(i)}, i = 1, 2, ..., N, tend, as k → ∞, to the same limit equal to h⋆, possibly up to a constant multiplier.

The information exchange among the N sensors at time k is described by a digraph G(k) = (V, E(k), A(k)), where V = {1, 2, ..., N} denotes the node set with node i representing sensor i; E(k) ⊂ V × V is the edge set with (j, i) ∈ E(k) if sensor i can obtain information from sensor j at time k, by convention (i, i) ∈ E(k); A(k) = [a_{ij}(k)] is the associated adjacency matrix with a_{ij}(k) > 0 if and only if (j, i) ∈ E(k), and a_{ij}(k) = 0 otherwise. Denote by N_i(k) = {j ∈ V | (j, i) ∈ E(k)} = {j_1^{(i)}(k), ..., j_{n_i}^{(i)}(k)} the neighbors (neighboring sensors) of sensor i at time k.

A time-independent digraph G = (V, E, A) is called strongly connected if for any i, j ∈ V there exists a directed path from i to j. By this we mean a sequence of edges (i, i_1), (i_1, i_2), ..., (i_{p−1}, j) in the digraph with distinct nodes i_l ∈ V, 0 ≤ l ≤ p−1, where p is called the length of the directed path. A non-negative square matrix A is called doubly stochastic if A1 = 1 and 1^T A = 1^T.
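Both graph properties used later (strong connectivity in B1) and double stochasticity) are easy to check numerically. A small helper follows; the function names, tolerances, and traversal scheme are our own choices.

```python
import numpy as np

def is_doubly_stochastic(A, tol=1e-10):
    """Non-negative with unit row sums and unit column sums."""
    A = np.asarray(A, dtype=float)
    return bool((A >= -tol).all()
                and np.allclose(A.sum(axis=1), 1.0, atol=tol)
                and np.allclose(A.sum(axis=0), 1.0, atol=tol))

def is_strongly_connected(A):
    """Every node reaches every other node; a_ij > 0 is read as edge (j, i),
    i.e., node i obtains information from node j."""
    A = np.asarray(A)
    n = A.shape[0]
    def reachable_from(u):
        seen, stack = {u}, [u]
        while stack:
            p = stack.pop()
            for q in range(n):
                if A[q, p] > 0 and q not in seen:   # edge (p, q): q hears from p
                    seen.add(q)
                    stack.append(q)
        return seen
    return all(len(reachable_from(u)) == n for u in range(n))
```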

      3 Transforming problem to root-seeking

In the case of the centralized channel parameter estimation, i.e., the estimation is based on output data of all channels, it is known that the problem in question can be transformed to root-seeking of a linear regression function [8, 9]. Let us briefly describe the approach.

For any i ∈ V, equation (1) can be written as

x_k^{(i)} = h^{(i)}(z) s_k,  where h^{(i)}(z) = h_0^{(i)} + h_1^{(i)} z + ⋯ + h_L^{(i)} z^L,   (3)

with z being the backward shift operator: z s_k = s_{k−1}.

From equation (3), we have, for any pair of channels i ≠ j,

h^{(j)}(z) x_k^{(i)} = h^{(j)}(z) h^{(i)}(z) s_k = h^{(i)}(z) x_k^{(j)}.

Using the output data of the ith and jth channels, the above set of equations can be written in the matrix form

X_k^{(i)} h^{(j)} − X_k^{(j)} h^{(i)} = 0,

where X_k^{(i)} is an (L+1) × (L+1)-dimensional matrix, ∀k = 3L, 3L+1, ..., M.
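A quick numerical sanity check of this cross-relation on toy data is given below; the layout of X_k^{(i)} in hankel_block is our own hypothesis consistent with the relation, not necessarily the paper's exact arrangement.

```python
import numpy as np

def hankel_block(x, k, L):
    """(L+1) x (L+1) matrix X_k with entries x_{k-m-l}, m, l = 0, ..., L
    (a hypothesized layout consistent with the cross-relation)."""
    return np.array([[x[k - m - l] for l in range(L + 1)] for m in range(L + 1)])

# Numerical check of X_k^{(i)} h^{(j)} = X_k^{(j)} h^{(i)} on toy data.
rng = np.random.default_rng(0)
L, M = 2, 50
s = rng.standard_normal(M + 1)
h1, h2 = rng.standard_normal(L + 1), rng.standard_normal(L + 1)
x1 = np.array([h1 @ s[k - L:k + 1][::-1] if k >= L else 0.0 for k in range(M + 1)])
x2 = np.array([h2 @ s[k - L:k + 1][::-1] if k >= L else 0.0 for k in range(M + 1)])
k = 3 * L
assert np.allclose(hankel_block(x1, k, L) @ h2, hankel_block(x2, k, L) @ h1)
```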

Using all the output data of the system, we have

Therefore, to estimate h⋆ is equivalent to finding the root of (6).

We now turn to the distributed estimation situation. In a network, sensor i can only receive information from itself and its neighboring sensors. Can the problem still be transformed to root-seeking for some regression functions? We state the answer as a theorem.

Theorem 1  The distributed channel estimation of h⋆ can be transformed to the distributed seeking of the root h⋆ of the following function:

f(h) = Σ_{i=1}^{N} f^{(i)}(h),

where f^{(i)}(h) = R^{(i)} h is a linear function observed, possibly with noise, by sensor i at h_k^{(i)} at time k.

Proof  Let us first consider the case where the number of input signals is finite and the observation is free of noise.

It is clear that whether or not the function can be observed by sensor i at h_k^{(i)} at time k depends on the digraph G(k). At time k, the answer is positive if and only if both sensors p and q are neighbors of i, which in turn is equivalent to a_{ip}(k) a_{iq}(k) > 0. Based on this observation, we define the following diagonal matrix, consisting of diagonal sub-matrices each of which is an (L+1)-dimensional identity matrix multiplied by a coefficient. The coefficients are as follows:

where I_{[inequality]} is an indicator function, i.e., I_{[inequality]} = 1 if the inequality is satisfied and I_{[inequality]} = 0 otherwise. Thus, we have

∀i = 1, 2, ..., N, k = 3L, 3L+1, ..., M.
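A simplified sketch of how such an indicator-weighted block-diagonal matrix could be assembled from the adjacency matrix follows; the block count and coefficients here are our own simplification, not the paper's exact display.

```python
import numpy as np

def neighbor_indicator_blocks(A, i, L):
    """Block-diagonal weighting matrix for sensor i: the jth (L+1)-dimensional
    identity block is switched on when a_ij > 0 (sensor j is a neighbor of i)
    and off otherwise. A simplified stand-in for the matrix defined in the text."""
    A = np.asarray(A)
    coeffs = (A[i] > 0).astype(float)      # indicator I[a_ij > 0], j = 1, ..., N
    return np.kron(np.diag(coeffs), np.eye(L + 1))
```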

Since recursive algorithms cannot reach the true value in a finite number of steps, we need to use the data repeatedly. Set

The sequence is periodic with period length equal to M − 3L + 1.
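In code, this periodic reuse amounts to mapping the recursion step to a data index cyclically; a sketch, where the zero-based step index t is our own convention:

```python
def data_index(t, L, M):
    """Map recursion step t = 0, 1, 2, ... to the data time index,
    cycling through k = 3L, 3L+1, ..., M with period M - 3L + 1."""
    return 3 * L + t % (M - 3 * L + 1)
```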

      Define

      and set

Therefore, h⋆ is a root of the following function:

In the noisy observation case (2), similar to X_k^{(i)} and Φ_k, we define Y_k^{(i)} and Ψ_k, and N_k^{(i)} and Ξ_k, which have the same structure as X_k^{(i)} and Φ_k but with x_k^{(i)} replaced by y_k^{(i)} and η_k^{(i)}, respectively. By (2) we have

Thus, the corresponding observation noise is

In the case of an infinite number of input signals, assuming that for any i ∈ V the following ergodicity property is satisfied:

we then have that h⋆ is a root of equation (9), but the observation is corrupted by the noise (12).

      4 Identifiability

When centralized algorithms are concerned, sufficient conditions for identifiability of the SIMO system with a deterministic input signal are presented in [6], and with a stochastic input signal in [8]. In these papers, two issues were considered: a) the conditions for the channels to be identifiable and b) the conditions for the inputs to be informative. Since we propose a distributed algorithm, conditions on the sensor network should also be taken into consideration.

In this section, we consider under which conditions equation (9) has a unique solution, i.e., what types of sensor networks, channels, and input signals are identifiable. Let us list the conditions to be used.

The conditions on the channels and input signals are as follows:

A2) h^{(j)}(z), j = 1, 2, ..., N, given by (3) have no common factor.

A3) The (M−2L+1) × (2L+1)-dimensional Hankel matrix is of full rank (= 2L+1), where the matrix is formed from the input sequence {s_0, s_1, ..., s_M}, M ≥ 4L, as follows:

For a sequence of infinitely many input data {s_k}_{k≥0}, we need the following condition:

A4) The input {s_k}_{k≥0} is a sequence of mutually independent random variables with E|s_k|² ≠ 0.

      The conditions on the graph are as follows:

B1) For the time-invariant sensor network, the graph G is strongly connected and the associated adjacency matrix A is doubly stochastic.

B2) For the time-varying sensor networks G(k),

a) A(k), k ≥ 1, are doubly stochastic matrices;

b) there exists a constant 0 < θ < 1 such that

where

d) there exists a positive integer B such that

for all (j, i) ∈ E_∞ and any k ≥ 0.

Lemma 1  Assume B1), A2), and A3) hold. Then h⋆ is, up to a constant multiplier, the unique nonzero vector simultaneously satisfying

Proof  Assume there is another solution, denoted by h̃.

Similar to h^{(i)}(z), let us define h̃^{(i)}(z). From (15) it follows that

By (3), we then have

      From here it follows that

where by h^{(i,j)} we denote the (2L+1)-dimensional vector composed of the coefficients of the polynomial h̃^{(i)}(z) h^{(j)}(z) − h̃^{(j)}(z) h^{(i)}(z), written in increasing orders of z.

For a fixed j, (16) is valid for all i ∈ N_j, i ≠ j. Therefore, all roots of h^{(j)}(z) should be roots of h̃^{(j)}(z) h^{(i)}(z). By A2), all roots of h^{(j)}(z) must be roots of h̃^{(j)}(z). Consequently, there is a constant c_j such that h̃^{(j)}(z) = c_j h^{(j)}(z), ∀j = 1, ..., N. Substituting this into (16) leads to

and hence c_i = c_j, ∀(i, j) ∈ {(i, j) | i ≠ j, a_{ij} > 0, i, j = 1, 2, ..., N}.

Then by B1), for any p, q there exists a directed path from p to q, and hence c_p = c_q = c, ∀p, q = 1, 2, ..., N. Thus, we conclude that

If the input signal {s_k} is a sequence of infinitely many mutually independent random variables, then we have the following lemma.

Lemma 2  Assume B1), A2), and A4) hold. Then h⋆, normalized to unit length, is the unique unit eigenvector corresponding to the zero eigenvalue of the matrices

and the rank of B_{j,k} is N(L+1) − 1.

Proof  Since {s_i} is a sequence of mutually independent random variables and E|s_i|² ≠ 0, it follows that

is a (2L+1) × (2L+1)-dimensional matrix.

Proceeding along the lines of the proof of Lemma 1, we arrive at

This shows that B_{j,k} is of rank N(L+1) − 1, ∀j ≥ 0, ∀k ≥ 0, and the normalized h⋆ is the unique unit eigenvector corresponding to the zero eigenvalue.

Remark 1  For the time-varying sensor networks G(k), Lemmas 1 and 2 are still valid if B1) is replaced by B2) in their formulation.

      5 Estimation algorithms and their convergence

In Theorem 1 the blind channel identification problem is converted to seeking the root of the sum function (9). The algorithms for solving this problem are based on the distributed stochastic approximation with fixed truncation given in the appendix.
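For orientation, the following is a minimal sketch of one step of a distributed stochastic-approximation recursion with a fixed truncation (consensus averaging over neighbors plus a local stochastic-approximation correction, projected onto a ball of radius g). The function names, the projection rule, and the placement of the local observation are our own simplifications; the paper's exact recursions are (18)–(25) below and (a5)–(a8) in the appendix.

```python
import numpy as np

def dsa_fixed_truncation_step(H, A, local_obs, a_k, g):
    """One step of a distributed SA recursion with a fixed truncation bound g.

    H         : (N, d) array, row i is the current estimate h_k^{(i)}
    A         : (N, N) doubly stochastic weight/adjacency matrix
    local_obs : callable(i, h) -> (possibly noisy) observation of the local function at h
    a_k       : step size at time k
    g         : fixed truncation radius
    """
    mixed = A @ H                 # consensus: weighted average of neighbors' estimates
    H_next = np.empty_like(H)
    for i in range(H.shape[0]):
        candidate = mixed[i] + a_k * local_obs(i, H[i])
        # Fixed truncation, realized here as projection onto the ball of radius g;
        # the paper's truncation rule may differ.
        norm = np.linalg.norm(candidate)
        H_next[i] = candidate if norm <= g else candidate * (g / norm)
    return H_next
```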

In the noise-free case, we take an initial value h_{3L−1} ≠ 0. For any i ∈ V, the estimate is generated by the following algorithm:

In the noisy observation case, the estimate is generated by the following algorithm:

      We need the following conditions.

C1) {η_k} is a sequence of mutually independent random variables with

      where γ is given in C1).

C3) {s_k} and {η_k} are mutually independent, and each of them is a sequence of mutually independent random variables such that

Before establishing the theorems, let us cite some results from [12] and present them as lemmas.

Lemma 3 ([12], Lemma 7)  For a sequence of matrices {E_k}, if

then for any constant T > 0,

Lemma 4 ([12], Lemma 8)  If A1) and B2) hold, then the sequence of estimates {h_k} given by (18)–(21) contains at least a bounded sub-sequence {h_{n_k}} with

Lemma 5 ([12], Lemma 9)  Let {h_{n_k}} be a bounded sub-sequence with σ_{n_k}^{(i)} = σ_{n_k}, ∀i ∈ V. Assume A1) and B2) hold. Then there exist c_1 > 0, c_2 > 0, M_0′ > 0, T > 0 such that for sufficiently large k:

Remark 2  It is noteworthy that the above three lemmas are still valid if B2) is replaced by B1).

For the time-invariant network with a finite number of noise-free observations, we have:

Theorem 2  Let {h_k^{(i)}} be produced by (18)–(21) with an arbitrary initial value. Assume B1), A2), A3), and C2) (without (26)) hold. Then

Proof  Since the algorithm (18)–(21) is in the same form as the distributed stochastic approximation algorithm with fixed truncation, we use Corollary 1 given in the appendix to prove Theorem 2. For this it suffices to verify B2) and D1)–D4) required by Corollary 1. Notice that B2) obviously follows from B1).

In (18)–(21),

      Thus the observation noise is

      where

      It is sufficient to prove

      Define

      and

There exists T ∈ (0, 1) such that

By Lemma 5, {h_s : n_k ≤ s ≤ m(n_k, T) + 1} is bounded for sufficiently large k. Thus, there exists a positive constant c_3 such that

for sufficiently large k. Notice that

Hence, for sufficiently large k and ∀s: n_k ≤ s ≤ m(n_k, T), there exist constants c_4, c_5, c_6 such that

where θ is defined in B2) b). Since 0 < θ < 1, there exists a positive constant m′ such that θ^{m′}

for sufficiently large k. Hence, we have

Hence, we see

Then, we have

for sufficiently large k and ∀T_k ∈ [0, T].

Thus, by Lemma 3 we know

Then, it follows that

by Lemma 3 we have

In summary, we see

Thus, we conclude

Thus, we have shown that B2) and D1)–D4) hold. Then

a) there exists a positive integer σ such that

or in the compact form:

By Lemma 1, we have

Then, we see

By conclusion d) concerning 〈h_k〉, we have

Thus, we need only to consider

Therefore, we have

For the time-invariant network with a finite number of noisy observations, we have:

Theorem 3  Let {h_k^{(i)}} be produced by (22)–(25) with an arbitrary initial value. Assume B1), A2), A3), C1), and C2) hold. Then

Proof  Set

      and then we have

      and hence

From the proof of Theorem 2, we have

Therefore, we need only to prove

By C1), D_k^{(i)} is a martingale difference sequence, and by E|η_k|^{2+γ} < ∞ it follows that

So far, we have shown that B2) and D1)–D4) hold. Then we have the following assertions a)–d).

a) There exists a positive integer σ such that

or in the compact form:

By Lemma 1, we have

Then, we see

By conclusion d) concerning 〈h_k〉, we have

Thus, we need only to consider

Therefore, we have

For the time-invariant network with an infinite number of noisy observations, we have:

Theorem 4  Let {h_k^{(i)}} be produced by (22)–(25) with an arbitrary initial value. Assume A1), A2), A4), B1), C2), and C3) hold. Then

Proof  Define

From the proof of Theorems 2 and 3, we have

Thus, we need only to consider the remaining term and to prove

Then, proceeding along the lines of the proof of Theorem 3, we complete the proof.

For the time-varying network with an infinite number of noisy observations, we have:

Theorem 5  Let {h_k^{(i)}} be produced by (22)–(25) with an arbitrary initial value. Assume A1), A2), A4), B2), C2), and C3) hold. Then

      This theorem can be proved by a treatment similar to that for Theorem 4.

      6 Simulation example

In this section, we present a computer simulation example. We illustrate the convergence of the algorithm in a noisy time-invariant network environment.

Let the input {s_k} be a sequence of i.i.d. N(0, 1) random variables, and the observation noise {η_k} be a sequence of i.i.d. N(0, 0.05) random variables. The initial values for all agents are mutually independent and uniformly distributed over the interval [−0.2, 0.2]. The step size is set as required by C2). For each channel i, the output and input are related as x_k^{(i)} = s_k − 0.7(1 + i) s_{k−1}.
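A sketch of the data-generating part of this setup follows; the number of samples, the random seed, and the variable names are our own choices, and the step size used in the paper's runs is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, M = 4, 1, 2000                                   # four sensors, first-order channels
s = rng.standard_normal(M + 1)                         # i.i.d. N(0, 1) input
eta = np.sqrt(0.05) * rng.standard_normal((N, M + 1))  # i.i.d. N(0, 0.05) observation noise

# channel i (i = 1, ..., 4): x_k^{(i)} = s_k - 0.7 (1 + i) s_{k-1}
h_true = np.array([[1.0, -0.7 * (1 + i)] for i in range(1, N + 1)])

y = np.zeros((N, M + 1))                               # noisy observations for k = L, ..., M
for i in range(N):
    for k in range(L, M + 1):
        y[i, k] = h_true[i] @ s[k - L:k + 1][::-1] + eta[i, k]

# each agent's initial estimate of the full N(L+1)-dimensional vector h
h_init = rng.uniform(-0.2, 0.2, size=(N, N * (L + 1)))
```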

We consider a four-sensor network; its topology (shown in Fig. 1) and adjacency matrix are as follows.

Fig. 1  Topology of the four-sensor network.

To measure the identification performance, we define the normalized error as

where ĥ_k is the estimate at the kth step, h⋆ denotes the true coefficient vector, and β_k is the scalar that minimizes ‖β_k ĥ_k − h⋆‖, i.e.,
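Since β_k solves a one-dimensional least-squares problem, it has the closed form β_k = ĥ_k^T h⋆ / ‖ĥ_k‖². A small helper computing a scale-invariant error of this kind is sketched below; the normalization by ‖h⋆‖ is our own convention, since the paper's exact display is omitted above.

```python
import numpy as np

def normalized_error(h_hat, h_star):
    """Scale-invariant error min_beta ||beta * h_hat - h_star||, normalized by ||h_star||."""
    h_hat, h_star = np.asarray(h_hat, float), np.asarray(h_star, float)
    beta = float(h_hat @ h_star) / float(h_hat @ h_hat)   # closed-form least-squares minimizer
    return np.linalg.norm(beta * h_hat - h_star) / np.linalg.norm(h_star)
```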

Figs. 2–5 show the identification performance of the algorithm given in Section 5 with noisy observations. The x-axis stands for iterations and the y-axis for the normalized error of each sensor.

Fig. 2  Normalized error for sensor 1.

Fig. 3  Normalized error for sensor 2.

Fig. 4  Normalized error for sensor 3.

Fig. 5  Normalized error for sensor 4.

      7 Conclusions

In this paper, a mathematical description of the blind channel identification problem for SIMO systems over sensor networks is presented, and the identifiability conditions are given as well. Distributed and recursive blind channel identification algorithms are proposed for four different situations. In these algorithms, the estimates are updated whenever the local observation and the information from neighboring agents become available. The convergence of the algorithms is proved. The proposed distributed and recursive algorithms are easily implemented in real systems such as wireless channels. The algorithm for noisy observations requires knowledge of the covariance of the noise, but in principle it can be estimated on the basis of the observed data. For further research, it is of interest to consider distributed blind channel identification for time-varying SIMO systems and for multi-input multi-output (MIMO) systems.

[1] R. Liu. Blind signal processing: an introduction. Proceedings of the 2nd IEEE International Symposium on Circuits and Systems, Atlanta: IEEE, 1996: 81–84.

[2] D. Slock. Blind fractionally spaced equalization, perfect reconstruction filter banks and multichannel linear prediction. Proceedings of the 4th IEEE International Conference on Acoustics, Speech, and Signal Processing, Adelaide, Australia: IEEE, 1994: 585–588.

[3] Y. Hua. Fast maximum likelihood for blind identification of multiple FIR channels. IEEE Transactions on Signal Processing, 1996, 44(3): 661–672.

[4] F. Alberge, P. Duhamel, M. Nikolova. Adaptive solution for blind identification/equalization using deterministic maximum likelihood. IEEE Transactions on Signal Processing, 2002, 50(4): 923–936.

[5] E. Moulines, P. Duhamel, J. Cardoso, et al. Subspace methods for the blind identification of multichannel FIR filters. IEEE Transactions on Signal Processing, 1995, 43(2): 516–525.

[6] G. Xu, H. Liu, L. Tong, et al. A least-squares approach to blind channel identification. IEEE Transactions on Signal Processing, 1995, 43(12): 2982–2993.

[7] Q. Zhao, L. Tong. Adaptive blind channel estimation by least squares smoothing. IEEE Transactions on Signal Processing, 1999, 47(11): 3000–3012.

[8] H. F. Chen, X. Cao, J. Zhu. Convergence of stochastic approximation-based algorithms for blind channel identification. IEEE Transactions on Information Theory, 2002, 48(5): 1214–1225.

[9] H. Fang, H. F. Chen. Blind channel identification based on noisy observation by stochastic approximation method. Journal of Global Optimization, 2003, 27(2): 249–271.

[10] R. Abdolee, B. Champagne. Distributed blind adaptive algorithms based on constant modulus for wireless sensor networks. Proceedings of the IEEE International Conference on Wireless and Mobile Communications, Valencia, Spain: IEEE, 2010: 303–308.

[11] C. Yu, L. Xie, Y. Soh. Distributed blind system identification in sensor networks. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Florence, Italy: IEEE, 2014: 5065–5069.

[12] J. Lei, H. F. Chen. Distributed estimation for parameter in heterogeneous linear time-varying models with observations at network sensors. Communications in Information and Systems, 2015, 15(4): 423–451.

[13] J. Brewer. Kronecker products and matrix calculus in system theory. IEEE Transactions on Circuits and Systems, 1978, 25(9): 772–781.

[14] J. Lei, H. F. Chen. Distributed stochastic approximation algorithm with expanding truncations: algorithm and applications. arXiv, 2014: arXiv:1410.7180.

Received 3 July 2017; revised 15 September 2017; accepted 15 September 2017.

DOI https://doi.org/10.1007/s11768-017-7086-x

†Corresponding author.

E-mail: liurui14@mails.ucas.ac.cn.

      This paper is dedicated to Professor T.J.Tarn on the occasion of his 80th birthday.

This work was supported by the National Key Basic Research Program of China (973 program, No. 2014CB845301), and the National Center for Mathematics and Interdisciplinary Science, Chinese Academy of Sciences.

© 2017 South China University of Technology, Academy of Mathematics and Systems Science, CAS, and Springer-Verlag GmbH Germany

      Appendix

The distributed stochastic approximation algorithm with expanding truncations is proposed in [14] to seek roots of the sum of local functions. Consider the sum function given by

where f^{(i)}(h) is called the local function assigned to agent i and can be observed only by agent i. Let G = {h : f(h) = 0} denote the root set of f(·). For any i ∈ V, according to [14] the estimate for G is generated by the following algorithm:

      We list the conditions to be used.

D2) There exists a continuously differentiable function v(·): R^{N(L+1)} → R such that

D3) The local functions f^{(i)}, ∀i ∈ V, are continuous.

D4) For any i ∈ V the noise sequence satisfies

b) along indices {n_k} whenever {h_{n_k}} converges,

∀T_k ∈ [0, T] for any sufficiently large k, where m(k, T) ≜ max{m : Σ_{s=k}^{m} a_s ≤ T}, ∀T > 0.

      Define the vectors

Proposition 1 (Theorem 3.3 in [14])  Let {h_k^{(i)}} be produced by (a1)–(a4) with an arbitrary initial value h_0. Assume B2) and D1)–D3) hold. Then for the sample path ω for which D4) holds for all agents, the following assertions a)–c) take place:

      or in the compact form

c) There exists a connected subset G* ⊂ G such that

where 〈h_k〉

If a bounded region containing G is known, then the expanding truncations can be replaced by a fixed truncation bound. Let us assume that ‖x‖ < g_1, ∀x ∈ G = {x : f(x) = 0}, where g_1 is a known constant. Then we can replace the expanding truncation bounds in (a3) and (a4) with the constant g = g_1 ∨ c_0, where c_0 is defined in D2) c). In this case, the algorithm (a1)–(a4) becomes a distributed stochastic approximation algorithm with a fixed truncation:

The following Corollary 1 directly follows from Proposition 1.

Corollary 1  Assume ‖x‖ < g_1, ∀x ∈ G = {x : f(x) = 0}, and set g = g_1 ∨ c_0. Let {h_k^{(i)}} be produced by (a5)–(a8) with an arbitrary initial value h_0. Assume B2) and D1)–D3) hold. Then for the sample path ω for which D4) holds for all agents, the following assertions a)–c) take place:

      or in the compact form

c) There exists a connected subset G* ⊂ G such that

Rui LIU received her B.Sc. degree in Statistics from Nankai University in 2014 and her M.Sc. degree in Operations Research and Cybernetics from the Academy of Mathematics and Systems Science, Chinese Academy of Sciences, in 2017. Her research interests lie in distributed algorithms and stochastic approximation and its applications to systems, control, and signal processing. E-mail: liurui14@mails.ucas.ac.cn.

Han-Fu CHEN is a Professor at the Key Laboratory of Systems and Control of the Chinese Academy of Sciences. His research interests are mainly in stochastic systems, including system identification, adaptive control, and stochastic approximation and its applications to systems, control, and signal processing. He served as an IFAC Council Member (2002–2005), President of the Chinese Association of Automation (1993–2002), and a permanent member of the Council of the Chinese Mathematics Society (1991–1999). He is an IEEE Fellow, IFAC Fellow, a Member of TWAS, and a Member of the Chinese Academy of Sciences. E-mail: hfchen@iss.ac.cn.
