
    Lateral predictive coding revisited: internal model, symmetry breaking, and response time

    2022-10-22
    Communications in Theoretical Physics, 2022, Issue 9

    Zhen-Ye Huang, Xin-Yi Fan, Jianwen Zhou and Hai-Jun Zhou

    1 CAS Key Laboratory for Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China

    2 School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China

    3 MinJiang Collaborative Center for Theoretical Physics, MinJiang University, Fuzhou 350108, China

    Abstract Predictive coding is a promising theoretical framework in neuroscience for understanding information transmission and perception. It posits that the brain perceives the external world through internal models and updates these models under the guidance of prediction errors. Previous studies on predictive coding emphasized top-down feedback interactions in hierarchical multilayered networks but largely ignored lateral recurrent interactions. We perform analytical and numerical investigations in this work on the effects of single-layer lateral interactions. We consider a simple predictive response dynamics and run it on the MNIST dataset of hand-written digits. We find that learning will generally break the interaction symmetry between peer neurons, and that high input correlation between two neurons does not necessarily bring strong direct interactions between them. The optimized network responds to familiar input signals much faster than to novel or random inputs, and it significantly reduces the correlations between the output states of pairs of neurons.

    Keywords: neural network, response dynamics, predictive coding, similarity, symmetry breaking

    1. Introduction

    The brain receives external and visceral sensory signals, extracts useful information out of them, and makes conscious or unconscious decisions on appropriate responses and actions all the time. Signal perception, decision making, and action triggering in the brain are carried out by a huge and complex network of interacting neurons, each of which integrates input signals and sends action spikes to other neurons. The brain, with a size of about 2% of body mass, consumes about 20% of the body's metabolic energy, and it must be under strong evolutionary pressure to reduce energy consumption [1]. Predictive coding has been proposed as a general strategy to reduce the energy cost of information processing [2–4].

    Input signals to a neural network, and the internal states of the network, are produced by real-world physical or chemical processes. They are far from being completely random: they are rich in internal structure at many temporal and spatial scales, and there is a huge amount of regularity in their temporal and spatial structures. Regularity means redundancy, and it could be exploited to facilitate information processing and to reduce energy costs. First, the signals received by spatially adjacent neurons at a given time are positively correlated, and this local spatial correlation could be exploited to reduce the magnitude of the response of the receiving neurons [2]. Second, the sequence of signals received at a given neuron is locally correlated in time, and this local temporal correlation could again be exploited to make predictions about future events [5–7]. Third, similar signal patterns (e.g., faces or objects) are repeatedly received by a neural network over longer time scales, and the stable hierarchical relationships among them could be exploited to build a hierarchical internal model of the world [8–11].

    Figure 1. Lateral feedback interactions in a single layer of N neurons. The input signal s_i to a neuron i is converted to an output signal x_i by a quick response dynamics (1). Lateral interactions are indicated by the horizontal arcs. The interaction strengths of two neurons i and j are quantified by a pair of synaptic weights w_ij and w_ji.

    Predictive coding has come to be an influential and promising framework in theoretical neuroscience for understanding information transmission and perception. It posits that the brain builds an internal model to perceive the external world (and also the visceral world), and constantly transmits prediction error messages among its constituent neurons to guide the refinement of this model. Previous theoretical studies on predictive coding have paid great attention to top-down feedback mechanisms. The system was often modelled by a hierarchical network consisting of many layers of neurons. Special neurons were introduced into the hierarchical network to compute and transmit prediction errors between adjacent layers in the network, and Bayesian inference was employed to refine a hierarchical internal model [3, 9, 11, 12]. For computational convenience, the lateral recurrent interactions between the neurons located in the same layer of the network were usually ignored in these earlier models. However, lateral interactions are ubiquitous in the biological brain. The mutual influences among the neurons in a single layer of the network strongly affect the state dynamics of these neurons on short time scales, and they may then greatly affect perception and inference in the multi-layer network. Recent experimental and computational studies have demonstrated that the inclusion of within-layer interactions could dramatically change the performance of hierarchical neural networks (see, e.g., [13–15]). Whether special prediction error computing neurons really exist in the brain is also a widely debated issue [16].

    The present work revisits the original concept of predictive coding within a single-layered neural network [2, 17–19]. We consider a first-order differential equation for neurons responding to an external input with the help of peer neurons (equation (1) and figure 1). There is no need to introduce additional specialized neurons for computing prediction errors in our model. The internal state x_i of neuron i serves the dual role of a prediction error, while the combined effect f_i(x) of the other neurons on this neuron is interpreted as a prediction (equation (2)). The synaptic weights w_ij of lateral interactions from neuron j to neuron i are gradually optimized (on time scales much longer than that of the elementary response dynamics) to reduce the average squared prediction error. We implement a gradient descent algorithm to accomplish this task of synaptic weight adaptation. Our theoretical derivation indicates that, as some of the synaptic weights deviate from zero with learning, the symmetry of the synaptic weights is gradually lost (w_ij ≠ w_ji).

    We apply this predictive coding model to the MNIST dataset of hand-written digits. Our numerical results confirm the spontaneous breaking of synaptic weight symmetry, and they also demonstrate that a high input correlation between two neurons does not necessarily mean there will be strong direct interactions between them in the optimized network. Other properties of lateral predictive coding are also demonstrated, including the reduction of correlation among the responses of different neurons, and the attention mechanism of highlighting novel spots in the input signals. Of special interest is the response speed of the optimized perception system, which is much faster for familiar input signals than for unfamiliar or random inputs. We believe that lateral recurrent interactions are indispensable in understanding predictive coding in biological nervous systems.

    This paper is organized as follows. Section 2 describes the fast-time-scale response dynamics and introduces the synaptic matrix of lateral interactions. Section 3 defines the cost function to be minimized and derives the gradient descent iteration equations of the slow-time-scale adaptation of the synaptic weights. Section 4 reports the main numerical results obtained on the hand-written digits dataset. Finally, we conclude our work in section 5.

    2. Model

    2.1. Response dynamics and internal state

    We focus our attention on a single layer of neurons (figure 1). The N neurons in this layer are affected by external inputs, and they are also mutually affected by lateral interactions [2, 17–19]. We denote by s_i the external input to a neuron i, and by x_i the internal state of this neuron. If there is no external perturbation, the neurons stay in the quiescent state (x_i = 0). Upon receiving an input signal s = (s_1, …, s_N)^⊤, the internal state vector x = (x_1, …, x_N)^⊤ is driven away from quiescence and reaches a new steady state quickly, on a time scale of milliseconds. We assume the following simple response dynamics,

        dx/dt = −x − f(x) + s.  (1)

    The term −x on the right-hand side of this expression is spontaneous relaxation, whose time constant is defined as unity. The vector function f(x) = (f_1(x), …, f_N(x))^⊤ contains all the lateral feedback interactions between the N neurons. These lateral interactions are generally nonlinear in a biological neural network. Here, for simplicity, we restrict the discussion to linear interactions and assume that the interaction function f_i(x) has the following form,

        f_i(x) = Σ_{j (≠ i)} w_ij x_j,  (2)

    where w_ij is the synaptic weight of the directed interaction from neuron j to neuron i. The synaptic weights define a lateral interaction matrix W, the N × N matrix whose (i, j) element is w_ij (equation (3)).

    Notice that all the diagonal elements are identical to zero, w_ii ≡ 0. Self-loops (autapses) actually exist in some types of neurons and they could induce rich dynamical phenomena [20]. In the present simplified model, we assume that the spontaneous relaxation term of equation (1) has already incorporated the effect of possible self-loops.

    In response to an external signal s, the internal state of the linear recurrent dynamics (2) at time t is

        x(t) = [I − e^{−(I + W) t}] (I + W)^{−1} s,  (4)

    where I is the identity matrix (I_ii = 1 for the diagonal elements, I_ij = 0 for i ≠ j). The net driving force of this response at time t is simply dx/dt, and it depends linearly on s,

        dx/dt = e^{−(I + W) t} s.  (5)

    Notice that, for x(t) to be convergent in this linear model, the real part of every eigenvalue of the composite matrix (I + W) needs to be positive. These eigenvalue conditions are checked in our numerical computations. (These constraints on the synaptic matrix may be unnecessary if the feedback functions are nonlinear and bounded, such as f_i(x) = a_0 tanh(Σ_{j≠i} w_ij x_j) with a_0 being a positive constant.)
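    As a concrete illustration of this response dynamics and of the stability condition, the following minimal sketch (plain numpy, not the PyTorch code used for the actual experiments; the network size and weights are arbitrary) integrates dx/dt = s − x − Wx by the Euler method and checks that the trajectory relaxes to the linear steady state (I + W)^{−1} s.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 5                                    # toy network size (arbitrary, for illustration)
W = 0.1 * rng.standard_normal((N, N))    # small random lateral weights
np.fill_diagonal(W, 0.0)                 # no self-loops: w_ii = 0

# Stability condition: every eigenvalue of (I + W) must have a positive real part.
assert np.all(np.linalg.eigvals(np.eye(N) + W).real > 0), "dynamics would not converge"

s = rng.random(N)        # external input signal
x = np.zeros(N)          # quiescent initial state x(0) = 0
dt = 0.01
for _ in range(5000):    # Euler integration of dx/dt = -x - W x + s  (equation (1))
    x += dt * (s - x - W @ x)

x_steady = np.linalg.solve(np.eye(N) + W, s)   # closed-form steady state (I + W)^{-1} s
print(np.allclose(x, x_steady, atol=1e-6))     # True: the dynamics relaxes to the fixed point
```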

    2.2. Prediction and prediction error

    This single layer of neurons is tasked with processing many input signal vectors s^α, with index α = 1, 2, …, P. The total number P of input samples is much larger than the total number N of neurons. Each of these input vectors s^α will be converted into an internal steady state x^α = (x_1^α, …, x_N^α)^⊤ by the recurrent dynamics (1), as

        x^α = (I + W)^{−1} s^α.  (6)

    The internal representation x^α is a linear transformation of s^α. From this expression, we see that the input signal vector s^α has been decomposed into two parts,

        s^α = W x^α + x^α.  (7)

    Figure 2. An example of perceptive illusion. The gray intensities of the 18 small square blocks are 0.6 for the two central squares; 0.0 (white) for the eight squares surrounding the left central square; 1.0 (black) for the eight squares surrounding the right central square. The synaptic weights from a square j to all its surrounding squares i are set to w_ij = 0.05 and all the other synaptic weights are set to zero. The perceived gray intensities of the left and right central squares are 0.624 and 0.254, respectively.

    The ith element of the first vector p^α ≡ W x^α is p_i^α = Σ_{j≠i} w_ij x_j^α, and it is independent of x_i^α. We can therefore interpret p_i^α as the prediction by the other neurons concerning the input signal s_i^α of neuron i. Each neuron j contributes a term w_ij x_j^α to the predicted input signal at neuron i, and the column vector (w_{1,j}, …, w_{j−1,j}, 0, w_{j+1,j}, …, w_{N,j})^⊤ characterizes the predictive role of neuron j. We refer to p^α as the prediction vector. Then, equation (7) indicates that the internal state x_i^α of neuron i also serves as a prediction error. When x^α is received as input by another layer of neurons, it contains the residual properties of the signal vector s^α that have not yet been predicted by the single-layer internal model W. In other words, x_i^α is both an internal state of neuron i itself and a prediction error for s_i^α. This dual role might be of real biological significance, as it relieves the necessity of introducing extra neurons for the sole purpose of computing prediction errors [11]. In the literature, special error-computing neurons are commonly employed in hierarchical predictive coding network models [3]. Such neurons may not be strictly necessary (and indeed the biological evidence in support of their existence is weak). Top-down predictive messages can be directly fed into the single-layer response dynamics. The simplest way is to add a term −h(t) to the right-hand side of equation (1), with h(t) being the higher-level prediction concerning the input s.

    The decomposition (7) causes an interesting phenomenon of perceptive illusion. We explain this with a simple example, the gray image shown in figure 2. The eight small white squares surrounding the left central small square have gray intensity 0.0, the eight small black squares surrounding the right central small square have gray intensity 1.0, and the two central small squares have identical gray intensity 0.6. Let us set the weight w_ij from a square j to each of its nearest surrounding squares i to be w_ij = 0.05 and set all the other weights to zero. Then, according to equation (6), the output gray value of the left central square is 0.624 and that of the right central square is 0.254. The linear predictive coding model 'perceives' the left central square to be much darker than the right central square, even though their actual gray intensities are the same. This is an unconscious predictive perception, and the outcome is consistent with the conscious visual perception of humans. Unconscious predictive coding may thus contribute to conscious optical illusions. We note that recent computational investigations suggested that more complex visual illusions, such as the famous Kanizsa contours [21], could be quantitatively explained by hierarchical predictive coding models with feedback interactions [14].
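    The figure-2 example can be reproduced with a few lines of code. The sketch below assumes one particular layout of the 18 squares (two adjacent 3 × 3 blocks on a 3 × 6 grid) and takes 'surrounding squares' to mean the 8-neighbourhood of a square; under these assumptions the steady state x = (I + W)^{−1} s gives perceived intensities of the two central squares close to the 0.624 and 0.254 quoted above, with the exact values depending on the precise geometry of the figure.

```python
import itertools
import numpy as np

# Assumed layout: two adjacent 3x3 blocks on a 3x6 grid; columns 0-2 are the white
# (0.0) surround, columns 3-5 the black (1.0) surround, with the two central squares
# at (1, 1) and (1, 4) both set to gray intensity 0.6.
rows, cols = 3, 6
coords = list(itertools.product(range(rows), range(cols)))
s = np.array([1.0 if c >= 3 else 0.0 for (_, c) in coords])
s[coords.index((1, 1))] = 0.6            # left central square
s[coords.index((1, 4))] = 0.6            # right central square

# w_ij = 0.05 from every square j to each of its (up to eight) surrounding squares i.
n = len(coords)
W = np.zeros((n, n))
for j, (rj, cj) in enumerate(coords):
    for i, (ri, ci) in enumerate(coords):
        if i != j and max(abs(ri - rj), abs(ci - cj)) == 1:
            W[i, j] = 0.05

x = np.linalg.solve(np.eye(n) + W, s)    # perceived intensities, x = (I + W)^{-1} s
print(x[coords.index((1, 1))])           # left central square: ~0.62, perceived darker
print(x[coords.index((1, 4))])           # right central square: ~0.25, perceived lighter
```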

    3. Prediction error minimization

    3.1. Mean squared prediction error

    For the linear predictive coding model with P input signal vectors s^α, we define the mean squared prediction error ε as

        ε = (1/P) Σ_{α=1}^{P} ‖x^α‖²,  (8)

    where x^α is the prediction error vector corresponding to input s^α. As x_i^α is also the internal state of neuron i, the mean squared prediction error can also be understood as the mean metabolic energy needed to represent an external input. The synaptic weights w_ij of lateral interactions are modifiable on time scales much longer than that of the response dynamics (1). It is natural for us to assume that these synaptic weights will slowly adapt to the inputs to minimize the mean squared prediction error ε.

    Here we are interested in the properties of the optimized synaptic weights. To better appreciate the essence of this optimization task, we now rewrite the expression of ε in an alternative form. The empirical correlation matrix A for the N neurons is defined as A ≡ (1/P) Σ_α s^α (s^α)^⊤, with elements

        A_ij = (1/P) Σ_{α=1}^{P} s_i^α s_j^α.  (9)

    In matrix form, this means

    For any input signal vector s^α, because of equation (10), we know that its internal representation is

    This expression has the same form as equation (10), but notice that the basis vectors z_m are not necessarily orthogonal to each other and also that the squared norm ‖z_m‖² ≡ z_m^⊤ z_m ≠ 1. The coordinates c_m^α are fixed by the input dataset, and the optimization targets are then the N internal vectors z_m. The mean squared prediction error (8) is

    In deriving equation (16) we have used the properties (11) and (12). The summation in equation (16) does not contain inner product terms z_m^⊤ z_n with m ≠ n. This fact means that the orthogonality of the internal vectors z_m is not a necessary condition for the minimization of ε.

    The two equivalent expressions (16) and (17) reveal that the mean squared prediction error does not depend on the details of the P input signal vectors but only on the eigenvalues and eigenvectors of the correlation matrix A. For the leading (largest) eigenvalues λ_m (m = 1, 2, …), it is desirable to reduce the squared norm of the corresponding internal vectors z_m. On the other hand, there is no such necessity to optimize an internal vector z_n if the eigenvalue λ_n is close to zero. The first eigenvector u_1 and eigenvalue λ_1 may be strongly related to the mean vector (1/P) Σ_α s^α of the input signals. The remaining eigenvectors and eigenvalues are mainly related to the covariance of the input signal vectors. We expect that the leading terms λ_m ‖z_m‖² of equation (16) with m ≥ 2 will be roughly equal, i.e., ‖z_m‖ ~ λ_m^{−1/2} for m = 2, 3, …, as a result of optimization. Because c_m^α ~ λ_m^{1/2} according to equation (11), the projections of the internal state x^α onto the different directions ẑ_m ≡ z_m/‖z_m‖ for m = 2, 3, … will be comparable in magnitude. Suppose only a few (say M ≈ 100) of the λ_m values are important. Then, according to equation (15), the internal representation x^α will be

    where ĉ_m^α ≡ c_m^α ‖z_m‖. The magnitudes of the coefficients ĉ_m^α for 2 ≤ m ≤ M will be roughly equal if the scaling property ‖z_m‖ ~ λ_m^{−1/2} is valid. The N internal direction vectors ẑ_m may not be strictly orthogonal to each other, but the angles between them may deviate only slightly from 90°.
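    The statement that ε depends on the inputs only through the spectrum of A can be verified numerically. The short sketch below uses our reading of the quantities involved (z_m ≡ (I + W)^{−1} u_m, where u_m and λ_m are the eigenvectors and eigenvalues of A, and ε as written in equation (8)); it checks on random data that ε computed directly from the prediction errors equals Σ_m λ_m ‖z_m‖².

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 20, 500
S = rng.random((P, N))                         # P toy input vectors s^alpha (rows of S)
W = 0.05 * rng.standard_normal((N, N))
np.fill_diagonal(W, 0.0)
M = np.eye(N) + W

X = S @ np.linalg.inv(M).T                     # prediction errors x^alpha = (I + W)^{-1} s^alpha
eps_direct = np.mean(np.sum(X**2, axis=1))     # assumed normalization of equation (8)

A = S.T @ S / P                                # correlation matrix A of equation (9)
lam, U = np.linalg.eigh(A)                     # eigenvalues lambda_m, eigenvectors u_m (columns)
Z = np.linalg.solve(M, U)                      # assumed internal vectors z_m = (I + W)^{-1} u_m
eps_spectral = np.sum(lam * np.sum(Z**2, axis=0))   # sum_m lambda_m ||z_m||^2

print(np.isclose(eps_direct, eps_spectral))    # True: epsilon depends only on (lambda_m, u_m)
```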

    3.2. Evolution of synaptic weights

    We minimize the mean squared prediction error (17) by the method of gradient descent, under the constraint that the real parts of all the eigenvalues of (I + W) are positive. In addition, considering that maintaining a nonzero synaptic weight has a metabolic cost, we introduce a quadratic energy term for each synaptic weight. The total cost function of the minimization problem is then

        C = ε + (η/N) Σ_{i≠j} w_ij²,

    where η is an adjustable penalty parameter (the scaling factor N^{−1} ensures that the two summation terms in the above expression are of the same order, that is, proportional to N).

    The first derivative of this cost function with respect to the synaptic weight w_ij is

    In deriving this expression, we have used the following two relations

    To minimize the total cost C by gradient descent, we modify all the synaptic weights w_ij (i ≠ j) simultaneously according to

        w_ij ← w_ij − γ ∂C/∂w_ij,  (23)

    where γ is a small learning rate.

    From the expression (20) of the cost gradients, we observe that

        ∂C/∂w_ij ≠ ∂C/∂w_ji,

    although the correlation matrix A is symmetric. Then the adaptation of w_ij and w_ji following (23) and starting from w_ij = w_ji = 0 will lead to a breaking of symmetry between these two synaptic weights, that is, w_ij ≠ w_ji. We have checked by exact computation that this spontaneous symmetry-breaking phenomenon occurs even if there are only two neurons, N = 2.
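    This symmetry breaking is easy to observe numerically for N = 2. The sketch below is an illustration under stated assumptions rather than a reproduction of the actual algorithm: it uses the cost C = (1/P) Σ_α ‖x^α‖² + (η/N) Σ_{i≠j} w_ij² as written above, estimates the gradient by finite differences instead of the analytic expression (20), and feeds in two correlated inputs with unequal variances. Starting from w_12 = w_21 = 0, the two weights end up clearly different.

```python
import numpy as np

rng = np.random.default_rng(2)
P, eta, gamma = 2000, 1.0, 0.02

# Two correlated inputs with unequal variances: s_2 fluctuates about twice as strongly as s_1.
u = rng.random(P)
S = np.stack([u + 0.1 * rng.standard_normal(P),
              2.0 * u + 0.1 * rng.standard_normal(P)], axis=1)    # shape (P, 2)

def cost(w12, w21):
    """Assumed cost: mean squared prediction error plus the quadratic weight penalty."""
    W = np.array([[0.0, w12], [w21, 0.0]])
    X = S @ np.linalg.inv(np.eye(2) + W).T          # prediction errors x^alpha
    return np.mean(np.sum(X**2, axis=1)) + (eta / 2.0) * (w12**2 + w21**2)

w = np.zeros(2)        # start from the symmetric point w_12 = w_21 = 0
h = 1e-6
for _ in range(4000):  # plain gradient descent with a finite-difference gradient
    g = np.array([(cost(w[0] + h, w[1]) - cost(w[0] - h, w[1])) / (2 * h),
                  (cost(w[0], w[1] + h) - cost(w[0], w[1] - h)) / (2 * h)])
    w -= gamma * g

print(w)               # w_12 != w_21: learning has broken the interaction symmetry
```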

    4. Numerical results

    We apply the lateral predictive coding model to a widely used real-world dataset, the MNIST dataset of hand-written digits [22], with the purpose of gaining some empirical insights into the effects of lateral recurrent interactions. There are P = 60000 gray images of 28 × 28 pixels for the ten digits, each of which serves as an input vector s^α. We attach a neuron to each of the N = 784 pixels, and neurons and pixels will be mentioned interchangeably in this section. The original pixel values are integers ranging from 0 to 255. Here we linearly re-scale these values to the range [0, 1]. The mean input vector, denoted as s̄, is a positive vector. The mean prediction vector and the mean prediction error vector are denoted by p̄ and x̄, respectively. Naturally, these three mean vectors satisfy the relation s̄ = p̄ + x̄.

    For the convenience of later discussions, we define the (cosine) similarity q(v, y) of two generic m-dimensional vectors v = (v_1, …, v_m)^⊤ and y = (y_1, …, y_m)^⊤ as

        q(v, y) = v^⊤ y / (‖v‖ ‖y‖).

    This similarity index measures the angle between v and y. For example, if v and y point in the same direction, then q(v, y) = 1; if they are orthogonal to each other, then q(v, y) = 0.
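    In code, this similarity index is simply the cosine of the angle between the two vectors; a minimal helper (numpy, function name ours) is:

```python
import numpy as np

def similarity(v, y):
    """Cosine similarity q(v, y) = v.y / (||v|| ||y||)."""
    return float(np.dot(v, y) / (np.linalg.norm(v) * np.linalg.norm(y)))

print(similarity(np.array([1.0, 0.0]), np.array([2.0, 0.0])))   # 1.0: same direction
print(similarity(np.array([1.0, 0.0]), np.array([0.0, 3.0])))   # 0.0: orthogonal vectors
```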

    4.1. Learning procedure

    The linear predictive coding model requires all the eigenvalues of the composite matrix (I + W) to have a positive real part, so we numerically check all the eigenvalues of this matrix every T iteration steps (epochs) during the evolution process of equation (23). The inspection interval is initially set to T = 1000. After the eigenvalue constraint is violated for the first time, the value of T is reduced to T = 100 and then fixed at this value. Each time the matrix (I + W) is found to have at least one eigenvalue with a negative real part, the learning rate is halved (γ ← γ/2), and the synaptic matrix W is also properly reset before carrying out the next T evolution epochs. One biologically plausible way of resetting the synaptic weight matrix is homeostatic scaling-down of all its elements by the same ratio [23]. Another simple way is to reset the weight matrix to the matrix W that was attained and recorded T epochs earlier. The numerical results reported in this section were obtained by the second resetting method, but we have checked that the final numerical values of the synaptic weights are not sensitive to the particular method used to guarantee the eigenvalue condition, nor to whether the iteration (23) was performed synchronously or in random sequential order. The learning rate is initially set to γ = 0.001. We train the network using PyTorch (version 1.10.0) and Python (version 3.9.7), which are quite convenient for matrix manipulations.
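    The safeguard described above can be written as a thin wrapper around any gradient step. The following numpy-style sketch is our own paraphrase of the procedure (grad_C is a hypothetical callback standing for the gradient ∂C/∂w_ij of equation (20)): it keeps a checkpoint of W, inspects the spectrum of (I + W) every T epochs, and on a violation halves the learning rate, restores the checkpoint, and shortens the inspection interval, which corresponds to the second resetting method used in the text.

```python
import numpy as np

def train(W, grad_C, epochs=20000, gamma=1e-3, T=1000):
    """Gradient descent on the synaptic weights with the eigenvalue safeguard of section 4.1.

    grad_C(W) is a hypothetical callback returning the gradient matrix dC/dW (zero diagonal).
    """
    N = W.shape[0]
    checkpoint = W.copy()
    for epoch in range(1, epochs + 1):
        W = W - gamma * grad_C(W)
        np.fill_diagonal(W, 0.0)                  # keep w_ii = 0
        if epoch % T == 0:
            eig = np.linalg.eigvals(np.eye(N) + W)
            if np.any(eig.real <= 0):             # response dynamics would no longer converge
                gamma *= 0.5                      # halve the learning rate ...
                W = checkpoint.copy()             # ... restore the recorded weight matrix ...
                T = 100                           # ... and inspect more frequently from now on
            else:
                checkpoint = W.copy()             # record the last admissible weight matrix
    return W

# Example usage with a user-supplied gradient routine:
# W_opt = train(np.zeros((784, 784)), grad_C=my_gradient_function)
```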

    We consider three representative values for the penalty parameter: strong penalty, η = 50; moderate penalty, η = 10; weak penalty, η = 1. For η = 50 we find that the eigenvalues of (I + W) never violate the positivity condition during the whole evolution process, while weight matrix resettings are needed at η = 10 and η = 1. We find that the properties of the systems obtained at different values of η are qualitatively very similar.

    Figure 3. (a) Evolution of the mean squared prediction error ε (relative to the initial value before weight adaptation). (b) The eigenvalues λ_m of the correlation matrix of the input data samples s^α and of the correlation matrix of the internal states x^α. (c) The values of ‖z_m‖, where λ_m is an eigenvalue of the correlation matrix of the input data samples. The weight penalty parameter is η = 50, 10, or 1.

    Figure 4. Histogram of similarity values between 60 000 vectors. Solid curves: input signal vectors s^α (a) and the mean-corrected ones Δs^α = s^α − s̄ (b); dashed curves: prediction vectors p^α (a) and the mean-corrected ones Δp^α = p^α − p̄ (b); dot-dashed curves: prediction error vectors x^α (a) and the mean-corrected ones Δx^α = x^α − x̄ (b). The weight penalty is η = 1.

    Figure 3(a) shows the decay curves of the mean squared prediction error ε with the learning epoch. At a large weight penalty (η = 50) the value of ε decreases to a final value which is about 0.23 of the initial value in about 10^4 epochs. At moderate or low weight penalty (η = 10 or η = 1) it takes longer for ε to saturate, but the final value is considerably lower (≈ 0.16 of the initial value). Resetting or rescaling of the weight matrix is needed at η = 10 or η = 1. The first 500 or so eigenvalues of the correlation matrix A (equation (9)) of the input vectors s^α and the corresponding eigenvalues of the correlation matrix of the internal states x^α are shown in figure 3(b) for comparison. The first eigenvalue λ_1 of the correlation matrix is much larger than the second eigenvalue λ_2. This is mainly caused by the fact that the mean input vector s̄ is a relatively large positive vector. We see that, as a consequence of the decomposition (7), the first 100 eigenvalues of the internal states are considerably reduced as compared with those of the input data. The relationship ‖z_m‖ ~ λ_m^{−1/2} is confirmed to be roughly true for m ≥ 2 (figure 3(c)).

    The distribution of similarity q(s^α, s^β) between two input images of the MNIST dataset, and the corresponding distributions of similarity q(p^α, p^β) and q(x^α, x^β), are compared in figure 4(a). The distribution of q(x^α, x^β) is sharply peaked around zero, suggesting that the prediction error vectors x^α of the 60 000 data samples are approximately orthogonal to each other. This is a clear demonstration of redundancy reduction in x^α. In comparison, we find that both q(s^α, s^β) and q(p^α, p^β) are peaked at relatively large positive values, which may be mainly due to the fact that the mean input vector s̄ and the mean prediction vector p̄ are both nonzero. If we subtract the mean vectors to get the mean-corrected vectors (Δs^α = s^α − s̄, Δp^α = p^α − p̄, and Δx^α = x^α − x̄), the similarity distributions of Δs^α and Δp^α are both shifted to be peaked close to zero and their standard deviations become slightly broader (figure 4(b)). On the other hand, this mean-correction treatment does not have a significant effect on the prediction errors, probably because the elements of the mean vector x̄ are already quite small (the mean value of its N elements is about 0.01 at η = 1).

    4.2. Nonsymmetry and sparsity of synaptic weights

    The gradient descent dynamics starts from the all-zero synaptic weight matrix W (w_ij = w_ji = 0 for all the pair-wise interactions). Our simulation results confirm the theoretical expectation of section 3.2 that the symmetry property of W breaks down as a result of optimization (figure 5), and w_ij ≠ w_ji. This nonsymmetry can be quite large for some neuron pairs. As an example, consider a neuron i located at the central pixel (14, 14) in figure 6 and a neighboring neuron j located at pixel (15, 14). These two neurons are highly correlated in MNIST, with A_ij = 0.304. At η = 1 we find that w_ij = 0.145 while w_ji = 0.570, which means that the state of neuron i has a strong direct effect on that of neuron j but the opposite is not true.

    We quantify the average degree of nonsymmetry by the following ratio κ

    At a strong synaptic penalty (η = 50) the nonsymmetry ratio is relatively small (κ = 0.278); as the synaptic penalty is lowered to η = 10, the nonsymmetry ratio increases to a relatively large value of κ = 0.514; further decreasing the penalty to η = 1 has only a tiny effect on the nonsymmetry ratio (κ = 0.476). As the penalty value η decreases, the synaptic weights have more flexibility to take larger values. Figures 5(b) and (c) clearly demonstrate that, if the synaptic weight w_ij from neuron j to neuron i is large enough (w_ij > 0.3), the reverse synaptic weight w_ji from i to j is highly likely to be much smaller, with the sum w_ij + w_ji being roughly a constant value.
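    The display defining the ratio κ (equation (25)) is not reproduced here. One plausible reading, consistent with its description as an average degree of nonsymmetry, is the mean magnitude of the antisymmetric part of W relative to that of its symmetric part; the sketch below implements this hypothetical definition only for illustration, so its numerical values need not coincide with the κ values quoted above.

```python
import numpy as np

def nonsymmetry_ratio(W):
    """One plausible reading of the nonsymmetry ratio kappa: the mean magnitude of the
    antisymmetric part of W relative to that of its symmetric part (off-diagonal only).
    This is an illustrative assumption, not the paper's equation (25)."""
    off = ~np.eye(W.shape[0], dtype=bool)
    return np.abs(W - W.T)[off].mean() / np.abs(W + W.T)[off].mean()

W_pair = np.array([[0.0, 0.145], [0.570, 0.0]])   # the weight pair quoted in section 4.2
print(nonsymmetry_ratio(W_pair))                   # ~0.59 for this single neuron pair
```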

    Figure 5. Symmetry breaking of the synaptic weight matrix W. Here w_ij is the synaptic weight from neuron j to neuron i. The penalty parameter is η = 50 (a), η = 10 (b), and η = 1 (c).

    Figure 6. Receptive fields w_ij and projection fields w_ji for two neurons i, whose positions at (4, 14) and (14, 14) are marked by the '*' symbols. The weight penalty parameter is η = 1. (a) The input synaptic weights w_ij to the focal neuron i from all the other neurons j. (b) The output synaptic weights w_ji from the focal neuron i to all the other neurons j.

    We define the lateral receptive field of a neuron i as the subset of other neurons j whose synaptic weights w_ij to neuron i deviate significantly from zero. For the two-dimensional MNIST system, we find that the receptive field of each neuron i is considerably localized and sparse: only a few of the input synaptic weights w_ij are distinctively large, and the corresponding afferent neurons j are spatial neighbors (figure 6(a)). Both the sparsity property and the locality property may be a consequence of the fact that the correlations in the MNIST system are mostly contributed by spatially neighboring pixels. For the MNIST dataset, all the elements A_ij of the correlation matrix are non-negative, and, unsurprisingly, all the large-magnitude synaptic weights w_ij are positive.

    Similarly, the lateral projection field of a neuron i is defined as the subset of other neurons j to which the synaptic weights w_ji are significantly distinct from zero. As with the receptive fields, the projection field of a neuron is also sparse and spatially localized (figure 6(b)). Because of the nonsymmetry, however, the projection field of a neuron i is not identical to its receptive field. For example, the central pixel i at (14, 14) is mostly affected by the pixel j at (13, 14), with synaptic weight w_ij = 0.520, but pixel i mostly affects the two neurons k at (15, 14) and l at (14, 15), with synaptic weights w_ki = 0.570 and w_li = 0.571. Figure 6 also indicates that at the central pixel i there is a strong directional flow of influence from the north side towards the eastern and southern sides. Qualitatively similar directional motifs might be common in the biological brain.
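    In terms of the trained weight matrix, the receptive field and the projection field of a pixel are simply a row and a column of W reshaped back to the 28 × 28 image grid. A small sketch (numpy; W_trained stands for the optimized 784 × 784 matrix, and row-major pixel indexing is assumed):

```python
import numpy as np

def receptive_and_projection_fields(W, r, c, size=28):
    """Receptive field (incoming weights w_ij, a row of W) and projection field
    (outgoing weights w_ji, a column of W) of the neuron at pixel (r, c),
    both reshaped to the size x size image grid."""
    i = r * size + c                          # assumed row-major pixel-to-neuron indexing
    receptive = W[i, :].reshape(size, size)   # w_ij: how strongly each neuron j drives i
    projection = W[:, i].reshape(size, size)  # w_ji: how strongly neuron i drives each j
    return receptive, projection

# Example: the focal neuron at pixel (14, 14) discussed above.
# rec, proj = receptive_and_projection_fields(W_trained, 14, 14)
```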

    For some of the neurons located close to the boundary of the square region, we find that their receptive and projection fields are both empty. In other words, the synaptic weights from the other neurons j and to the other neurons j are both vanishing (w_ij ≈ 0 and w_ji ≈ 0). These neurons are therefore isolated from the other neurons. We find that this isolation is due to the fact that such a neuron i is almost always quiescent (s_i = 0).

    In a natural environment, the visual signals to the retina of an animal are strongly redundant and locally correlated. The locality of the synaptic weights in visual signal processing systems has been well documented [2, 8]. Quantitative investigations of the degree of nonsymmetry κ in these lateral interactions may be an interesting experimental issue. It may also be possible that different regions of the cerebral cortex have different values of the nonsymmetry index κ.

    4.3. Neuron pair-wise similarity

    For two P-dimensional vectors s_i = (s_i^1, …, s_i^P)^⊤ and s_j = (s_j^1, …, s_j^P)^⊤, one attached to neuron i and the other to neuron j, the similarity q(s_i, s_j) between them is computed as

        q(s_i, s_j) = (Σ_α s_i^α s_j^α) / (‖s_i‖ ‖s_j‖).

    Figure 7. Synaptic weights, input correlations and internal correlations. (a–c, top row) Relationship between the synaptic weight w_ij and q(s_i, s_j) (the similarity of input signals at neurons i and j). (d–f, bottom row) The relationship between q(x_i, x_j) (the similarity of internal representations at i and j) and the input similarity q(s_i, s_j). The dotted lines mark the hypothetical linear relation q(x_i, x_j) = q(s_i, s_j). In drawing this figure, we only include neurons whose input signals are sufficiently active, that is, the input signals s_i^α for such a neuron i should be nonzero in at least 100 of the 60 000 sample digit patterns α.

    This similarity is related to the input correlation A_ij by

        q(s_i, s_j) = A_ij / (A_ii A_jj)^{1/2}.

    In other words, q(s_i, s_j) is a re-scaled correlation of the input signals at neurons i and j.

    The top row of figure 7 shows the relationship between the input similarity q(s_i, s_j) of two neurons i and j and the synaptic weights w_ij and w_ji. There is a clear trend of w_ij increasing with q(s_i, s_j), which is naturally anticipated. A large value of the synaptic weight w_ij implies a large value of the similarity q(s_i, s_j). Very interestingly, however, the reverse is not necessarily true. The synaptic weight w_ij or w_ji (or both) could be very close to zero even if the similarity q(s_i, s_j) is quite large. For example, the input similarity of pixel i at (23, 14) and pixel j at (26, 11) is q(s_i, s_j) = 0.238 while both w_ij and w_ji are very small (≈ 2 × 10^{−5}). This means that the lateral neural network may choose to predict the input signal of a neuron i based on the internal states of a few (but not all) of the most highly correlated neurons j. The underlying reason might be the redundancy of information in the input signals. If the input signals of both neurons j and k are good predictors of that of neuron i, one of the synaptic weights w_ij and w_ik may be spared to reduce synaptic energy.

    In some sense, the lateral neural network attempts to explain the complicated correlations of the input signal vectors by a few 'direct' interactions. This is similar to recent work on direct coupling analysis in neural sequences and protein sequences, which also tried to distinguish between direct interactions and indirect transmission of correlations [24, 25].

    The internal states x_i^α of a neuron i depend on the input pattern, and they also form a P-dimensional vector x_i = (x_i^1, …, x_i^P)^⊤. The similarity q(x_i, x_j) between the internal vectors of two neurons is

        q(x_i, x_j) = (Σ_α x_i^α x_j^α) / (‖x_i‖ ‖x_j‖).

    As demonstrated in the bottom row of figure 7, the internal similarity between two neurons i and j is much smaller than the input similarity between them, that is,

        |q(x_i, x_j)| ≪ q(s_i, s_j).

    To be more quantitative, the mean value of q(s_i, s_j) averaged over all the neuron pairs is 0.176, while the mean value of |q(x_i, x_j)| is only 0.023 at η = 1. Clearly, as a consequence of predictive learning, the correlations among the internal states of different neurons are much reduced in comparison with the strong input correlations. This is a known advantage of predictive coding [3].
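    These pairwise similarities can be computed directly from the two data matrices. The sketch below (numpy; S and X are assumed to be P × N matrices whose columns hold the input vectors s_i and internal-state vectors x_i of the individual neurons, restricted to sufficiently active neurons as in figure 7) yields the quantities compared above.

```python
import numpy as np

def pairwise_similarities(M):
    """q(m_i, m_j) for all neuron pairs i < j, where column i of M is the
    P-dimensional vector attached to neuron i."""
    cols = M / np.linalg.norm(M, axis=0, keepdims=True)   # normalize each neuron's vector
    Q = cols.T @ cols                                     # full matrix of cosine similarities
    iu = np.triu_indices(Q.shape[0], k=1)
    return Q[iu]

# q_s = pairwise_similarities(S)             # input similarities  q(s_i, s_j)
# q_x = pairwise_similarities(X)             # internal similarities q(x_i, x_j)
# print(q_s.mean(), np.abs(q_x).mean())      # e.g. ~0.176 versus ~0.023 at eta = 1
```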

    There are still considerable correlations between the internal states of many neurons, and the internal similarities q(x_i, x_j) between these neurons are quite distinct from zero. An interesting idea might be to take the internal state vectors x^α as input training signals to another laterally connected layer of predictive coding neurons. This hierarchical sequence may need to be extended to more layers, until the output vectors are formed by mutually independent elements. In this way, hierarchical predictive coding becomes a renormalization model [26–28]. There may be only a few elements of the final output vector that differ significantly from zero, and they may offer an obvious classification of the initial input digit images. This idea needs to be explored in the future.

    Figure 8. Examples of input signal vectors s^α (left column), the predicted signals p^α = W x^α (middle column), and the prediction errors x^α (right column). The top left sample is an intact symbol '5', while the bottom left sample is an occluded version with some pixels of high intensity changed to zero intensity (white). The occluded pixels are indicated by a small '★' in the bottom row. The network is trained with weight penalty η = 1.

    4.4. Surprisal, attention, and prediction

    We present in the top row of figure 8 the result of the response dynamics obtained for a randomly chosen image sample s^α (a digit 5). The prediction error (x^α) and prediction (p^α ≡ W x^α) vectors of this example share some common features with the results obtained on the other samples of the MNIST dataset. First, we find that the predictions p^α are visually quite similar to the input signals s^α. For instance, at weight penalty η = 1 the similarity q(p^α, s^α) between these two N-dimensional vectors has a high value of 0.93 ± 0.03, averaged over all the 60 000 digit samples. The optimized synaptic weight matrix W can explain the input correlations with high precision.

    Second, we observe that the magnitude |x_i^α| of the prediction error is often most significant at the boundary pixels of the original digit symbols; the spatial gradients of x_i^α at these boundary pixels are comparatively large and the sign of x_i^α also changes at these pixels i. In other words, the prediction error vector x^α highlights the boundary separating the digit symbol from the background. The neurons corresponding to the interior pixels of the symbol and to the regions far away from the symbol often have a lower magnitude of internal response. The prediction error x_i^α is the level of surprise that neuron i feels about the input signal s^α. A relatively large magnitude of x_i^α may help the neural system to pay special attention to the pixel corresponding to neuron i. This is a simple attention mechanism of novelty detection, and in our present model it does not involve the transfer of top-down messages from higher hierarchical neural layers.

    Prediction and novelty detection are most clearly manifested for input digit signals s^α of which some pixels are occluded (namely, the original nonzero input pixel values s_i^α are artificially set to zero). We find that even with many pixels occluded, the network can still offer a highly satisfying prediction (p^α) for the original intact image (bottom row of figure 8). On the other hand, the prediction errors are most significant for the occluded pixels, which could guide attention to these regions. These simulation results on occluded input signals clearly demonstrate the dual role of x^α as both an internal model (in combination with W) and a prediction error vector.
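    Occlusion experiments of this kind require only zeroing some input pixels before running the response dynamics. A minimal sketch (numpy; W_trained, img and occluded_pixels are assumed, user-supplied inputs):

```python
import numpy as np

def respond(W, s):
    """Steady-state prediction error x and prediction p = W x for an input vector s."""
    x = np.linalg.solve(np.eye(len(s)) + W, s)
    return x, W @ x

# s = img.reshape(-1).copy()                  # intact 28x28 digit, flattened
# s[occluded_pixels] = 0.0                    # artificially erase some bright pixels
# x, p = respond(W_trained, s)
# p still resembles the intact digit, while |x| is largest at the occluded pixels,
# flagging them as surprising regions that deserve attention.
```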

    How might a biological brain take advantage of the decomposition (7) to facilitate perception and action? Maybe the prediction p^α and the prediction error x^α are transmitted through different paths to different higher-level processing units. The possible biological significance of this needs to be explored more deeply. We notice that there are actually at least two pathways of visual information processing in the human brain [29]. One of these pathways (the dorsal visual pathway) is responsible for unconscious blindsight [30], while the ventral visual pathway is more closely related to conscious perception.

    Figure 9. Response time of predictive coding. The weight matrix is trained at penalty value η = 1. (a) The average decay behavior of the magnitude of the difference vector [s − (x + Wx)] (rescaled by the magnitude of the input signal s) with time t. The solid line is obtained for the MNIST images and the dot-dashed line is obtained for the shuffled MNIST images, while the dashed line is obtained for W = 0 (no interactions). The horizontal thin line marks the level 1/e. (b) The histogram of response times τ obtained for the MNIST images.

    4.5. Response time variations

    The predictive coding and perception system counteracts an input signal vector s by the combined effect of the internal state x and the prediction Wx. Figure 9(a) shows the averaged decay behavior of the magnitude of the difference vector [s − (x + Wx)] with time. We define the response time τ(s) of the dynamics (1) to input signal s as the earliest time at which the magnitude of the difference vector becomes less than 1/e of the initial magnitude ‖s‖. According to equation (5), τ(s) is then determined by the equation

        ‖e^{−(I + W) τ(s)} s‖ = ‖s‖ / e.

    If there are no feedback interactions, the response dynamics is purely exponential and the response time is the same for any input vector s, τ(s) = 1. The response time is much reduced by the introduction of optimized feedback interactions. For the synaptic weight matrix attained with a high penalty (η = 50) the mean response time is τ = 0.41 ± 0.06 among all the MNIST image samples. This mean response time is further reduced to τ = 0.33 ± 0.06 at moderate penalty η = 10 and to τ = 0.31 ± 0.07 at low penalty η = 1.

    For each MNIST image vector s we randomly exchange the positions of its elements (s_i ↔ s_j for pairs of randomly chosen indices i and j) and feed the shuffled vector to the network. Very interestingly, we find that the response time of the network to such a maximally randomized input is not reduced but rather is increased beyond unity (figure 9(a)). This indicates that the recurrent network has the ability to distinguish familiar inputs, on which the weight matrix is trained, from unfamiliar or novel inputs.
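    The response time can be measured directly from the decay of the residual s − (x + Wx) = e^{−(I + W) t} s. A sketch using scipy's matrix exponential (W is an assumed trained matrix; for W = 0 the function returns τ ≈ 1, and shuffling the pixels of a familiar image should push τ above 1, as in figure 9(a)):

```python
import numpy as np
from scipy.linalg import expm

def response_time(W, s, dt=0.01, t_max=5.0):
    """Earliest time at which ||exp(-(I + W) t) s|| drops below ||s|| / e."""
    step = expm(-(np.eye(len(s)) + W) * dt)   # propagator over one small time step
    residual = s.astype(float).copy()
    t = 0.0
    while np.linalg.norm(residual) > np.linalg.norm(s) / np.e and t < t_max:
        residual = step @ residual
        t += dt
    return t

print(response_time(np.zeros((3, 3)), np.array([1.0, 2.0, 3.0])))   # ~1.0 for W = 0
```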

    Figure 10. The ten MNIST images with the shortest response times (a), and the ten images with the longest response times (b). The weight matrix is optimized with a penalty value η = 1.

    The response times for the original image vectors also differ considerably, ranging from τ = 0.15 to τ = 0.73 at η = 1 [figure 9(b)]. The ten images s with the shortest response times are shown in figure 10(a); all of them are found to have very high similarity with the averaged input, with values q(s^α, s̄) > 0.9. On the other hand, we find that the similarity between the averaged input s̄ and the leading eigenvector of the synaptic weight matrix is very large (0.9992 at η = 1). It is then easy to understand why the predictive coding dynamics responds quickly to these images. The ten images with the longest response times are shown in figure 10(b); they are all quite thin and are obviously distinct from the images in figure 10(a). We find that these latter images are only weakly aligned with s̄ and with this leading eigenvector (similarity values q(s^α, s̄) ≈ 0.3).

    The synaptic weights of our network are not trained explicitly to reduce the response time. So this accelerated response to familiar input signals should be regarded as an extra benefit of predictive perception. The ability to respond quickly to external stimuli is highly desirable in the animal world. The response time τ(s) could be used as a measure of the typicality of the input vector s. According to figure 10(b), the input samples with response time τ ≈ 0.25–0.4 may be regarded as typical inputs, while those with τ < 0.2 or τ > 0.6 may be considered atypical.

    5. Concluding remarks

    We studied lateral feedback interactions in a simple model of neural response dynamics (1) from the perspective of predictive coding. Lateral interactions between two neurons were implemented through the synaptic weights w_ij of the linear response function (2). An optimization problem was formulated to minimize prediction errors, and the method of gradient descent was adopted to evolve the synaptic weights towards near-optimal values. We applied our optimization algorithm to the MNIST dataset of hand-written digits. Our empirical results demonstrated the following four major properties of lateral predictive coding: first, the symmetry of interactions is broken in the synaptic weight matrix, with the degree of nonsymmetry κ being significantly positive (equation (25)); second, the similarity between the internal states x_i and x_j of neurons i and j is significantly reduced as compared with the similarity of the input signals s_i and s_j (equation (29)); third, strong correlations between two neurons i and j do not necessarily mean large synaptic weights between these neurons; and fourth, the response time to familiar input signals is significantly shortened (figure 9).

    These properties of predictive coding may be highly relevant for information processing in biological neural systems. A natural extension of the present model is a multilayered hierarchical neural network with lateral interactions within individual layers and feedforward and feedback interactions between adjacent layers. The whole network of the present model could serve as a single layer of such a multilayered hierarchical neural information processing system. We did not address the possible effects of lateral interactions in tasks such as data classification and memory retrieval, but these are interesting issues for continued investigation [13, 15, 31].

    The linear feedback interactions (2) are surely too simplistic for biological neurons. The firing rate of a biological neuron is a highly nonlinear and bounded function of the input signals, and irrelevant information may be lost during the coding and relaying process. Some of the most widely adopted nonlinear functions for theoretical analysis are the logistic function f_i(x) = 1/(1 + e^{−x}) and the hyperbolic tangent function f_i(x) = tanh(x) [17, 18]. The introduction of nonlinearity may bring much-enhanced competition among the internal states of different neurons and, consequently, it may dramatically affect the learned synaptic weights and change the statistical properties of the internal representations x. It may be helpful to start with the extremely nonlinear Heaviside threshold response f_i(x) = Θ(x − θ_i), with θ_i being the activation threshold of neuron i, to explore the effects of nonlinear lateral interactions in predictive coding.

    Another rewarding direction is to consider spiking neurons, which are biologically more realistic [16]. The simple noise-free response dynamics (1) will then be replaced by the more complicated and stochastic integrate-and-fire dynamics of spiking neurons. Much future work is needed to understand the effect of lateral feedback interactions in predictive coding neural networks and the competition and cooperation between lateral and top-down feedback interactions.

    Acknowledgments

    This work was supported by the National Natural Science Foundation of China (Grant Nos. 11975295 and 12047503) and the Chinese Academy of Sciences (Grant Nos. QYZDJSSW-SYS018 and XDPD15). We thank Dr. Qinyi Liao for sharing her GPU computing resource with us.
