Monday, October 09, 2006

An end and a new beginning

This blog will bear no further entries. Its sole purpose was to blog from the IAU meeting and we had fun writing it. Thanks to Ulrike and Jens for contributing.

But this is not the end of the story: We just started a new, more persistent blog called "Apparent Brightness". It has all the texts from here and will soon get new ones. Check it out!

Monday, September 04, 2006

D. Koch: The Kepler Mission and Binary Stars (S 240 invited talk, Wed, Aug. 23)

The talk started with a detailed description of the Kepler mission concept. It is a photometric mission which will look for habitable planets (0.5 to 10 Earth masses). The instrument is a 1m Schmidt telescope with a very large field of view (FOV): an array of 42 CCDs covering more than 100 square degrees - one would need six Palomar sky survey plates to cover the FOV. Kepler will observe in the Cygnus-Lyra region along the Orion arm of our Galaxy. It will achieve a differential photometric precision of 6.6 ppm for a 6.5 hour integration. A single FOV will be continuously observed for 4-6 years (with interruptions of less than 1 day every month). Simultaneous observations of more than 103000 main-sequence stars will be obtained (3000 "guest objects" can be proposed). The time resolution is 30 minutes for most stars, and one minute for a subset of 512 objects. Kepler will observe in a single bandpass from 430 to 890 nm, where the cutoffs are set to avoid CaII H&K and fringing. The PSF FWHM will be about six arcsec.

The detection capability was also discussed; it depends on many different parameters, among them the transit depth (set by the planet-to-star radius ratio), the brightness of the star, and the number of transits observed during the mission.
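
To make this concrete, here is a back-of-the-envelope sketch (my own addition, not from the talk) of how the quoted 6.6 ppm precision translates into a detection signal-to-noise; the radii and transit count are illustrative assumptions for an Earth-size planet around a Sun-like star.

```python
# Toy transit-detectability estimate. Only the 6.6 ppm figure is from
# the talk; all other numbers are illustrative assumptions.
R_SUN_KM = 695_700.0
R_EARTH_KM = 6_371.0

def transit_depth_ppm(r_planet_km, r_star_km):
    """Transit depth (R_p/R_s)^2, expressed in parts per million."""
    return (r_planet_km / r_star_km) ** 2 * 1e6

def detection_snr(depth_ppm, noise_ppm=6.6, n_transits=1):
    """S/N of n box-shaped transits against white photometric noise."""
    return depth_ppm / noise_ppm * n_transits ** 0.5

depth = transit_depth_ppm(R_EARTH_KM, R_SUN_KM)
print(f"Earth-size transit depth: {depth:.0f} ppm")  # ~84 ppm
print(f"S/N after 4 transits: {detection_snr(depth, n_transits=4):.1f}")
```

Fainter stars raise the noise, longer orbital periods mean fewer transits within the mission, and smaller planets give shallower transits - hence the many parameters.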

The selected targets are late dwarf stars (F,G,K,M). A "Stellar Classification Program" is gathering new multi-band photometric data (SDSS griz filters + filter for Mg b lines). The "Kepler Project" is producing a catalog of stars in the target field.

Data processing will partly be performed on board (only pixels of interest are extracted). Ground processing will consist of cosmic ray, bias, smear and common-mode noise removal and the analysis of fluxes for threshold events. All light curves will be archived at STScI.

The mission will also produce results relevant to binary stars: follow-up radial velocity observations will be used to get masses and to differentiate between planetary transits and grazing eclipsing binaries. 1000 to 1500 eclipsing binaries with high precision light curves are expected. Non-eclipsing binaries will be identified using astrometry (distance), effective temperature and luminosity. The data will also be useful for other astrophysical purposes (oscillations, etc.).

Community participation and access will be possible through the guest observer program. Scientists may propose to observe targets in the FOV that are not being observed as part of the Kepler planet search. The objects can be brighter or fainter than the nominal dynamic range. The observation duration can be three months or longer.

The launch of Kepler is scheduled for October 2008.

More information is available on the Kepler website.

D. Pourbaix: Binaries in Large-Scale Surveys (S 240 invited talk, Wed, Aug. 23)

This talk gave a thorough overview on ground- and space-based surveys relevant to binary stars.

Starting with unresolved binaries, Wielen (1996) suggested two ways of unveiling unresolved binaries:
  • Variability-induced movers (VIM), for which the position of the photocenter moves back and forth along a line, and
  • Colour-induced displacement binaries (CID), for which one takes images through different filters and sees the displacement of the photocenter. This was illustrated by a plot for the five SDSS filters: the five photocenter positions fall along the line joining the components, ordered by wavelength (as opposed to random displacements for a single star). Thus, one can detect binaries with separations below the resolving power of the instrument (see the sketch after this list).
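
As promised above, here is a minimal sketch of the CID signature (my own notation, not taken from the talk): the measured photocenter is the flux-weighted mean of the component positions, so a filter-dependent flux ratio drags it along the line joining the two stars.

```latex
% Photocenter of an unresolved pair at positions x_1, x_2 with
% filter-dependent fluxes f_1(\lambda), f_2(\lambda):
x_{\mathrm{phot}}(\lambda) =
  \frac{f_1(\lambda)\,x_1 + f_2(\lambda)\,x_2}{f_1(\lambda)+f_2(\lambda)}
% If the components have different colours, f_1/f_2 changes from
% filter to filter, so the five SDSS photocenters line up along the
% segment x_1-x_2, ordered in wavelength; for a single star the
% measured displacements are random instead.
```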

This was followed by a list of eclipsing binary surveys:
  • Robotic Optical Transient Search Experiment
  • Optical Gravitational Lensing Experiment
  • All Sky Automated Survey
  • MACHO
  • Discovery Channel Survey
and others. All surveys are essentially automated.

Then followed a discussion of the usefulness of general surveys with respect to binary stars. From 2MASS, one can get positions and IR magnitudes to confirm binarity, and it serves as a source of candidates.

From SDSS, binaries can be detected in several ways:
  • Spectrum analysis (Balmer lines from hot stars and TiO from a cool companion, or narrow H emission lines superposed on the WD spectrum)
  • Outliers in the color-color diagram
  • Spectroscopic binaries (but there are only 10000 objects with two or more spectra, the maximum number of spectra is 13)
  • CID binaries
Results:
  • Spectrum analysis: 747 detached close binaries (Silvestri et al. 2005)
  • Color outliers: 863 WD+MD pairs (Smolcic et al. 2004)
  • SBs: 675 candidates (19 orbits, Pourbaix et al. 2005)
  • CID binaries: 542 candidates (Pourbaix et al. 2004)

Other useful surveys are
  • High proper motion stars (bulge): A group at CfA and others have obtained more than 300 spectroscopic orbits.
  • Open clusters: Mermilliod & Paunzen (2003)
  • By-products of Planet Quest (late-type stars): Nidever et al.

Potentially useful surveys are
  • Guide star catalog II (proper motions, BVI magnitudes, positions)
  • Palomar Quest
  • CFHT Legacy Survey
  • UKIRT (looking for WD+BD and subgiant+BD binaries)
  • Radial Velocity Experiment (RAVE)
  • Pan-STARRS

Space missions:
  • IUE has observed about 150 spectroscopic orbits (Stickland et al.).
  • FUSE took over in 1999.
  • Hipparcos (double and multiple star annexes, DMSA), and the so-called "Hipparcos Poor Astronomer Sky Survey" (what can be done with existing Hipparcos data?). The percentage of binaries among variable stars with a large brightness variation increases with V-I. This is a VIM effect: there is a correlation between the position of the photocenter and the brightness. HIP 88848 served as an example of "changing a mess into science" (Fekel et al. 2005, third body). Torres et al. (2006) discuss the eclipsing binary V1061 Cyg. A poster by Halbwachs & Pourbaix was also mentioned. Makarov et al. discuss the identification of binaries by comparing Tycho-2 proper motions with Hipparcos ones.
  • Gaia will enable the detection of binaries and multiple stars by astrometry, spectroscopy and photometry (variability, outliers). There will be enough data to investigate the binary frequency over the HR diagram, across stellar populations, etc.

Conclusions:
  • Do not put all your eggs in the same pocket (do not survey only eclipsing binaries)
  • Someone's garbage can be science for others (e.g. stars in the SDSS)
  • Public archives are gold mines for immediate scientific results (enough data exist to give one paper for each of us) and training sets for future science
  • Do not push garbage recycling too much, beyond the specs (cf. the Hipparcos-planets controversy)

End of Convection posts

The previous post was the last one for the "Convection in Astrophysics" symposium. Note that some of the posters are available on the conference website as pdf files. A few more posts for the "Binary stars" symposium will follow.

Convection in Astrophysics, Session E, Invited Talk 1 (Wed, Aug. 23)

P.R. Wood: The Relation of Convection to Pulsation and Mass Loss in Red Giants

Concerning pulsations in red giants, the good news is that there have been one or two attempts at global modelling of convection in red giants. One is by Woodward et al., where the star pulsates and shows a dipolar structure. The other is Freytag et al. (1999).

The bad news is that studies of pulsation in red giants use convection theories that belong in the past. They use spherically symmetric models and mixing length theory.

This introduction was followed by a discussion of the relation of convection and pulsational stability. In the red giant envelope, almost 100% of the energy flux is transported by convection. The driving of pulsation depends on the energy transport into and out of mass zones, hence it depends on convection. It follows that one has to know how convection varies throughout the pulsational cycle. The work integral shows that most work is done in the convective region.

The linear pulsation models by Xiong et al. (1998) show that turbulent pressure and turbulent viscosity introduce damping.

Pulsation in the presence of turbulent convection: Turbulent eddies transport momentum in a medium with a pulsation velocity gradient. This gives rise to turbulent viscosity. Wood mentioned that this can be seen as the macroscopic analogy of molecular viscosity, but in the discussion after the talk, Stein noted that turbulent viscosity is very different from molecular viscosity.

Nonlinear pulsation models show that work dissipated by turbulent viscosity is the major damping factor. How good are these models? Not very good. In general, pulsation models (linear and nonlinear) with current convective treatments are too unstable. Nonlinear models give large luminosity spikes that are not observed - the convection is too efficient (Olivier & Wood 2005).

On the observational side, microlensing surveys have given enough data that one can begin to do asteroseismology on red giants (MACHO, OGLE). Sample MACHO light curves were shown. They are often semiregular, multimode, and show a large amplitude range.

Next, the period-luminosity diagram (Wood et al. 1999) was discussed. Can one see different modes of oscillation? The periods and period ratios fit only approximately, which indicates that the envelope structure is inadequate. What is the cause of closely-spaced periods? Are they due to stochastic excitation by convection, structural changes resulting from convective irregularities, or nonradial modes?

The closely-spaced periods were also discussed using O-C diagrams for red giants (O ... date of observed maximum, C ... predicted date). A straight line in these diagrams means that the period is constant. It was shown that red giant periods vary abruptly by a few percent.
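
For readers not used to O-C diagrams, the standard bookkeeping is as follows (my addition): if maxima are predicted with epoch T_0 and period P, while the true values differ by Delta T_0 and Delta P, then after E cycles

```latex
(O-C)(E) \;=\; \Delta T_0 + E\,\Delta P + \tfrac{1}{2}\,P\,\dot{P}\,E^{2} ,
```

so a constant (but slightly wrong) period gives a straight line of slope Delta P, a smoothly changing period gives a parabola, and the abrupt few-percent period changes mentioned above appear as kinks between straight segments.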

Another way to look at closely-spaced periods is via so-called Petersen diagrams, which show period ratios vs. log(period). In these diagrams, the models do not fit the observed ratios well.

Period-luminosity relations by Lebzelter and Wood (2005) were also discussed. The importance of mass loss was stressed. But - what causes the mass loss?

For an ideal study of pulsations in red giants, one needs to follow the pulsation for 50 to 100 years.

As a final curiosity, red giants with long secondary periods were mentioned. The light amplitudes are up to 0.8 mag, the velocity amplitudes only a few km/s. Are they associated with the chromosphere?

R.F. Stein: Applications of Convection Simulations to Oscillation Excitation and Local Helioseismology (Review, Wed, Aug. 23)

This was the review talk of session E in the Convection Symposium: "Oscillations, mass loss, and convection".

One application of numerical simulations is to understand the excitation process for p-mode oscillations. The excitation equation contains an integral, in which the terms for pressure fluctuations and mode compression are multiplied (don't take mode compression out of the integral!). Excitation decreases at low and high frequencies, which results in typical oscillation periods, e.g. 5 minutes for the Sun.
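
Schematically (my paraphrase of the standard formulation, not a copy of the slide), the excitation rate involves the squared modulus of a depth integral in which the nonadiabatic pressure fluctuation delta P and the mode compression appear together:

```latex
\frac{dE_{\omega}}{dt} \;\propto\; \omega^{2}
  \left|\int \delta P^{*}(r,\omega)\,
  \frac{\partial \xi_{r}}{\partial r}\,dr\right|^{2}
```

Because delta P and the compression are correlated in depth, evaluating the compression at a single layer and pulling it out of the integral changes the answer - hence the warning.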

In the power spectrum for the turbulence, spatial and temporal factors cannot be separated. Therefore, a generalized Lorentzian is fit (where the width and power depend on wavenumber). This is another improvement needed for analytical work.
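
One common parametrization of this kind would be (my notation; the exact form used was not noted down):

```latex
P(k,\omega) \;\propto\;
  \frac{P_{0}(k)}{1 + \left[\omega/\omega_{0}(k)\right]^{p}} ,
```

i.e. a Lorentzian-like profile in frequency whose width omega_0 and amplitude P_0 both depend on the wavenumber k, so that the spatial and temporal factors do not separate.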

In a discussion of turbulent pressure (Pt) and gas pressure (Pg), an analysis of local heating and cooling showed that excitation is due to entropy fluctuations. Pt and non-adiabatic Pg work comparably near the surface, and Pt extends deeper. p-mode driving is primarily by turbulent pressure. There is some cancellation between Pt and non-adiabatic Pg.

Next, some work by Günther Houdek was shown, and his analytical model was compared to simulations by Stein & Nordlund (2001). Excitation rates for p modes of the Sun, Procyon, and alf Cen A were shown - models and simulations agree.

Another application is helioseismology, i.e. to test and refine local helioseismic models. One needs a large size and a long time sequence (so far, the simulation is 48 Mm wide by 20 Mm deep; the intention is to double the size next month). As a bonus, the simulations lead to an understanding of supergranulation. The simulations run for 48 hours (1 turnover time) and include f-plane rotation (surface shear layer), but no magnetic field yet. The resolution is 100 km horizontal and 12-70 km vertical.

After a presentation of the numerical method, some results were shown. The mean atmosphere is highly stratified. A slice through simulations for the velocity in the vertical plane shows that downflows are being swept aside by the diverging upflows. Thus, the downflows become larger and make it to deeper layers. Some of the downflows are not swept aside and disperse. Displaying the vertical velocity on horizontal planes shows the continuous change of scales from granules to supergranules. Upflows at the surface come from a small area at the bottom. Downflows at the surface converge to supergranule boundaries.

The horizontal and vertical velocities from simulations were then compared to MDI observations. At larger scales, the oscillatory component dominates, at smaller scales, the convective component dominates. Simulations were also compared to MDI observations in a k-omega diagram, a time-distance diagram, and in a diagram of f-mode travel times vs. simulated flow fields (divergence and horizontal). North-going and south-going travel time differences from MDI were compared to simulated velocities.

Note that the simulated data are available from the MDI data base: full data sets (~200 GB per hour of solar time) and slices of Vxyz and T at selected depths (~2 GB per hour of solar time).

After a movie of temperature and vertical velocity by Viggo Hansteen, showing non-linear wave propagation in 2D, some answers to the question "Why are linear calculations useful?" were given. They complement non-linear calculations, they are much faster than non-linear calculations, one can explore the parameter space, and one can isolate physical effects.

At the end of his talk, Stein mentioned that he is looking for a solar physics post-doc.

Convection in Astrophysics, Session D, Oral Contributions (Wed, Aug. 23)

H. Shibahashi: The DB gap of white dwarfs and semiconvection

This talk was given by Mike Thompson on behalf of Shibahashi.

First, a general overview of the classification and properties of white dwarfs (WDs) was given. The classification of WDs reflects their surface composition, but not their temperature. The spectra of DA WDs show only hydrogen lines (their atmospheres consist of pure H); these make up about 80% of all WDs. DB WDs show only He I lines (pure He). There are also DO (showing He II lines), DC, and other classes. DAs are found at all temperatures from hottest to coolest, DOs at Teff > 45000 K, DBs at Teff < 30000 K. No He-rich WDs are found between 45000 and 30000 K; this constitutes the DB gap.

A plot of the number of DA WDs vs. Teff from the McCook & Sion (1987) catalog was shown. At about 11000 K, one can see a step in the ratio of DA to non-DA WDs.

Why does the DB gap exist? The simple picture of parallel sequences of H-rich and He-rich objects doesn't work. There is a theory of spectral evolution (by Fontaine and Wesemael), which proposes a common origin for WDs (planetary nebula nuclei, PNN). At 30000 K, He ionization creates convection, and He is mixed to the surface. In the spectral evolution model, a wide range of H layer masses is expected (10^-4 to 10^-13 solar masses).

In observations, there has been much progress recently, mainly by large surveys.

The new working hypothesis is that all WDs have some H. Only about 10^-15 (solar masses?) is needed to produce an optically thick H layer at the surface. For Teff > 45000 K, the He II/III zone creates turbulence which mixes H with He and leads to He stars. For gap stars, the He II/III zone is too deep to mix up H, and gravitational settling leads to the He envelope. At 30000 K, He I/II creates turbulence which mixes H with He and leads to He stars.

One can make a prediction of semiconvection based on this scenario. At 30000 K, the He ionization zone turns into a convectively stable layer, which is nonetheless superadiabatic. This is a plane-parallel, gravitationally stratified layer of fluid in hydrostatic and radiative equilibrium, with a steep chemical gradient. The equations for this situation were shown, employing the Boussinesq approximation. The physical cause of the overstability is that radiative heat exchange brings about an asymmetry in the oscillatory motion, and this leads to overshooting.

There are some open issues.

In summary, there are two groups of WDs, and a DB gap. Convective mixing and/or chemical separation might be responsible for the gap. A new type of WD variables is predicted near the red edge of the DB gap.



M. Spite: Extra-mixing in Extremely Metal-Poor red giants

This talk presented some results of the "First Stars" project. The aim of this project is an analysis of the chemical composition of galactic matter at early times. This works only if there is no mixing. 18 dwarf and 33 giant stars (not C-rich) with [Fe/H] <= -3.0 were selected, and high resolution spectra (R ~ 45000) were obtained.

As an example, spectra of the Mg b lines and the NH band were shown. An LTE abundance analysis with OSMARCS models was performed.

For the discussion of Mg in turnoff stars (dwarfs and giants), a plot of [Mg/Fe] vs. [Fe/H] was shown. One could see that the abundance ratio is constant, with a scatter of about 0.1 dex. This is more or less the same for all elements. Exceptions are C and N in giant stars - [C/Fe] shows an extremely large scatter, and N is even worse.

What is the reason for the large scatter for C and N in giants? In a primordial scenario, it would mean that there was a real scatter in the ISM of the early Galaxy. In the in situ scenario, the original C and N abundances in the atmospheres of the giant stars have been altered.

Indicators of mixing in giants are:
  1. An anticorrelation of C and N: In a plot of [N/Fe] vs. [C/Fe], we see two groups of stars: mixed stars with [N/Fe] of about 1 and correspondingly low [C/Fe], and unmixed stars with higher [C/Fe] and lower [N/Fe].
  2. A very low abundance of Li in mixed stars is expected. The Li abundance decreases as carbon 13 increases.
  3. The phenomenon must appear at a specific location in the HR diagram.

Next, comparison to the results of Gratton et al. (2000) was made. The mean metallicity of the Gratton et al. stars is -1.5 dex, whereas the mean metallicity of the "first stars" is -3.1 dex. Mixing appears at a higher luminosity for the "first stars", but in both cases at the location of the bump.

A discussion of the abundances of Na and Al in these stars showed that some mixed stars are Na or Al rich (none of the unmixed stars). This could be due to deep mixing. Maybe some mixed stars are AGB stars (no effect is seen in oxygen, maybe the effect is too small).

As soon as an extremely metal poor star reaches the luminosity of the bump, its atmosphere is mixed with the H burning layer and the abundances of the light elements are altered.



P.P. Eggleton: Two Instances of Convection and Mixing in Red Giant Interiors

The actual title of the talk was "Formation and destruction of 3He in low-mass stars - Big Bang nucleosynthesis rescued".

The discovery of a very important mixing process taking place on the red giant branch was presented. The mechanism is a Rayleigh-Taylor instability, driven by an unusual nuclear reaction of 3He. The 3He is produced in the interior on the main sequence, then mixed into the convection zone.

The reaction equation is: 3He + 3He -> 4He + p + p

As an example, the evolution of a 0.8 solar-mass Population II star was shown in the HR diagram. The element distribution in the star at the end of the main sequence is as follows: there is a small exhausted core, and a lot of 3He has been produced and mixed into the surface layers. There is a maximum in molecular weight at the bottom of the H burning shell at 3 million years after the turnoff (?). This leads to a little 3He burning shell.

For the simulations, the 3D hydrodynamics code "Djehuti" was used, which is described in Dearborn, Lattanzio and Eggleton (2006, ApJ 639, 405), a paper on the He flash. It implements explicit hydrodynamics, is run on 351 processors, and the timestep is Courant limited.

The Rayleigh-Taylor instability does not remove the molecular weight gradient that drives it. It is constantly replenished. Mixing advects fresh 3He in at the same rate as it advects products out.

The results were presented as a movie of the global stellar surface.

Saturday, August 26, 2006

D. Arnett: Progress in 3D simulations of Stellar Convection (Review, Wed, Aug. 23)

This was one of the review talks of session D in the Convection Symposium: "Stellar evolution, nucleosynthesis, and convective mixing".

3D simulations with modest resources were presented, where oxygen burning allows thermal relaxation for quasi-adiabatic convection. The computational domain is a subsection of the star. Multi-fluid, realistic physics allows astronomical connections to be made. A careful treatment of boundary conditions and initial models allows long times to be simulated. Finally, extensive graphical and theoretical analysis was used to extract physics. Much of the work was done by Casey A. Meakin (PhD student).

A comparison to 2D simulations shows that 2D simulations overestimate velocities (angular momentum constraint) and incorrectly give inefficient turbulent mixing. Movies of the core showed that in 2D one gets much higher velocities and a less homogeneous core.

After a discussion of 2D burning of C, Ne, O (flames develop because of entrainment of nuclear fuel), the buoyancy frequency, and density fluctuations (which occur mostly at interfaces), a comparison of oxygen in 2D and 3D was presented. In 3D, the oxygen abundance is much smoother and better behaved, and there are fewer fluctuations.

Next, it was pointed out that the solar convection zone can be seen in a plot of superadiabaticity vs. radius as a tiny spike at the surface, and that "Stein and Nordlund country" includes only 3% of the Sun.

Waves are generated at convective interfaces. This can be seen in simulations of turbulent, compressible convection: Velocities do not go to zero at the boundaries (as when using MLT). Those are not convection, but waves (g and p modes).

The convective core grows by entrainment. In a graphic of abundance gradients (e.g. oxygen) as a function of radius and time, one could see the convective core growing with time. Patrick Young is another collaborator and has incorporated a first version of the entrainment in the TYCHO code. Previous successful results for wide, double-lined eclipsing variables (obtained with a simpler model) are reproduced.

Implications for the standard solar model:
  • It has a problem - it does helioseismology too well.
  • It is static - adding even small dynamic effects may spoil it.
  • Dynamic effects on opacity diminish the helioseismologic discrepancies for the new abundances and increase them for the old composition.
  • The increased diffusion and the He surface abundance are at odds - rotational stirring may be required.
  • John Bahcall would have loved this new challenge.

Convection in Astrophysics, Session C, Oral Contributions (Wed, Aug. 23)

H.-G. Ludwig: Prospects of Using Simulations to Study the Photospheres of Brown Dwarfs

This talk presented ideas and problems in simulating brown dwarf photospheres. A key aspect is the formation and transport of dust grains. Up to now, all 3D simulations were extrapolations from simulations of hotter stars (M dwarfs), which are dust free. An extension to hot Jupiters (EGPs) is also foreseen; hydrodynamically speaking, they are quite similar to brown dwarfs.

Simulating convection is relevant for the energy transport, the thermal structure of the convective layers and mixing of material. Maybe it will also be important for acoustic activity (mechanical heating) and local dynamo action (magnetic activity), we don't know yet.

In terms of radiation, RHD models will not add substantial information. Modelling radiation is of course important for energy transport and the thermal structure of the radiative layers, while radiation pressure is not important.

Another important ingredient of the simulations will be the formation of liquid and solid condensates ("dust"). This is an opacity source, for which knowing the albedo, the atmospheric chemical composition and the spatial distribution will be necessary.

Further modelling aspects include rotation (advective transport of dust) and external irradiation by the host star for EGPs ("day/night"), causing a global circulation pattern.

Up to now, toy models without dust have been calculated to derive time scales. The results were presented in a graph showing time scales vs. geometrical height. Convection time scales are 10^2 to 10^3 s, the condensation time scale is about 10^2 s for 100 micron grains. This can be used to derive a grain size limit. The dynamic ranges of the simulation are 10^4 to 10^5 in time and 10^2 to 10^3 per dimension in space. The ratio of radius to convection cell size is about 10^4 (10^3 for EGPs). This shows that the simulations will be unable to encompass all spatial ranges (from global to convection) and one will have to separate global and local models.

References for global EGP models are Cho et al. and Burkert et al. (2005).

As an example for local models, a Teff = 1800 K RHD test model without dust by Ludwig, Allard and Hauschildt was shown, where the convective overshooting turns out to be about 0.35 pressure scale heights. The large range in velocities puts high demands on numerics. Rotation and thermal forcing cause an interplay between global and local models.

In conclusion, we are in a position to obtain realistic models of brown dwarfs including radiation and dust formation. But there will be no unified models in the near future.

The challenges for numerics are the substantial scale ranges (dust grain vs. convective velocities), developing an efficient procedure to treat scattering in the time-dependent multi-D case, the interaction between local and global transport processes, the dust cloud distribution and the local transport of momentum (turbulent velocity).



G. Wuchterl: Convection during the formation of gaseous giants and stars

Classical pre-main-sequence evolution starts somewhere in the middle of nowhere in the HR diagram (e.g. at the so-called "birthline"). The only other way is to calculate the full protostellar collapse, which involves challenging physics. To demonstrate this, eight equations for self-gravitating, convective, radiating fluids were shown (see Wuchterl and Tscharnuter 2003, A&A 398, 1081).

Wuchterl and Feuchtinger (1998) treated the supercritical protostellar accretion shock with convection. This requires very high resolution (of order 10^8) and one has to go to very high Courant numbers (10^12).

Improving and testing convection involves a physically plausible time dependent theory. A modified Kuhfuss theory is employed.

The calculations start with an equilibrium Bonnor-Ebert sphere, but the stars do not arrive on the Hayashi line (they are hotter and more luminous after one Myr).

In the HRD, star formation calculations are best displayed with isopleths (lines of constant mass), since the mass is not constant during evolution (accretion!).

Results of a solar mass collapse with convection show that after 1 million years, the cloud memory is lost. The differences to the static picture are that deuterium is burnt (so the birthline seems not to be a useful concept) and the core is radiative.

Two cases of extreme planet formation were presented:
  1. GQ Lupi b (ESO VLT NACO observations from June 2004). This is a nearby young star with a faint companion, with a projected separation of 0.7 arcsec (100 AU).
  2. HD 149026 b - a transiting Saturn mass planet with a 70 Earth masses core.
Models give predictions which are in accordance with the observations.

Friday, August 25, 2006

Good bye Prague

The previous entry was the last one for this week. Unfortunately I did not get past the second day, but at least that one is complete. I have notes for many more talks and will add them later to this blog.

Today, I already attended the very nice concluding talk for the Convection Symposium given by J-P Zahn and will now go to the summary talks of the Binary Stars Symposium by C. Scarfe and V. Trimble.

H. Hensberge: Modern Techniques for the Analysis of Spectroscopic Binaries (invited talk, Tue, Aug. 22)

This talk within Symposium 240 was about how to extract information from spectroscopic binary (SB) spectra by reconstruction and analysis of the component spectra.

At the beginning, the importance of the line broadening function was discussed. For reconstruction, it is assumed that the observed spectrum is composed of a sum of weighted intrinsic component spectra. The intrinsic component spectra are time-independent (the shape of the spectral lines does not depend on orbital phase), but the weights may be time dependent (light and line strength variability). The reconstruction technique is based on velocity differences between the components and their time-variations. In order to apply this technique successfully, one should aim at observations homogeneously distributed in velocity and not concentrated at extreme line separation.
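
In symbols, the assumed model reads (a standard formulation in my notation, not copied from the slides): the spectrum observed at orbital phase j is a weighted sum of Doppler-shifted, time-independent component spectra,

```latex
F_{j}(\lambda) \;=\; \sum_{k} w_{jk}\,
  s_{k}\!\left(\lambda\left[1-\frac{v_{jk}}{c}\right]\right) ,
```

where the s_k are the intrinsic component spectra, the v_jk are the component radial velocities at phase j, and the weights w_jk are constant unless light or line-strength variability is included. Disentangling solves for the s_k (and, if desired, for the orbital elements behind the v_jk) simultaneously from all observed phases.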

Historically, SB spectra have been analysed by the following techniques:
  • Cool giant + hotter, less evolved star (very different stars): subtract template for cool giant (Griffin & Griffin 1986)
  • Tomographic separation (Bagnuolo & Gies 1991)
  • Disentangling in velocity space (Simon & Sturm 1994)
  • Disentangling in Fourier space (Hadrava 1995)
  • Iteratively correlate - shift - co-add (Marchenko 1998, Gonzalez & Levato 2006)
The application domain of the reconstruction ranges from being a detection tool to detailed analysis of the component spectra. It can be applied over a large range of S/N, from UV to IR, for single lines to large wavelength ranges, and for spectral types from O to G.

As an example, the system RV Crateris was presented: a sharp-lined F8 star plus a close binary (F8 + late G) (details in the ESO conference proceedings by Hensberge et al.). The disentangling technique revealed 3 components.

The uses of the component spectra in astrophysics range from determining fundamental stellar parameters, testing evolutionary models, and the spectroscopic detection of eclipses to the spectroscopic determination of light ratios. For example, the chemical composition of DG Leo Ab is presented in Fremat, Lampens & Hensberge (2005, MNRAS 356, 545).

One can work in Fourier or velocity space. The options for velocity space are to mask blemishes or unwanted components in the input spectra, weight spectral bins and do an error analysis of the reconstructed components. The options for Fourier space are to analyse the origin and shape of spurious components in the reconstructed component spectra and to weight the Fourier components.

A problem with the technique is that the solutions may not be unique. As a way out one can use external information (photometry, models).

Prospects are:
  • Retrieving astrophysical information by modelling the asymmetrical broadening function
  • Extending the application domain of the spectral disentangling techniques in order to include better physics

Posters

I will not comment on any particular one of the hundreds of posters presented during this week (the majority of which are presented at the Binary Stars Symposium). I only took a brief walk through the poster rooms of the Convection and Binary Stars Symposia and took some of the few A4 copies that were left. In general, one could say that the posters got less attention than they deserved, at least in the Convection Symposium, since there was not a single session dedicated to posters and the poster rooms were located far away from the coffee tables. But pdf files of the posters will be made available on the conference web page. In the Binary Stars Symposium there was one session with poster highlights each day, allowing the poster authors to make short oral presentations. Many of the posters (those not presented orally) were about one particular binary system.

I noticed that for my own poster I had supplied too few A4 copies; they were all gone after a short time. You can download an A4 pdf file of my poster, presented at both the Convection and Binary Stars Symposia, here.

J.D. Landstreet: Observing Atmospheric Convection in Stars (Review, Tue, Aug. 22)

This review talk opened Session B of the Convection Symposium: "Observational Probes of Convection". It gave an overview of the classical ways to detect convection in observations.

Convection reaches the photosphere in most stars of Teff < 10000 K, perhaps also in hotter stars. Convection cells are directly visible in the Sun as granulation. In stars, convection can be detected indirectly as velocity fields (microturbulence, macroturbulence, bisector curvature, etc.).

Microturbulence is excess line broadening over thermal broadening, required to fit weak and strong lines. The microturbulence parameter characterizes a velocity field. For example, for Sirius, fitting weak and strong lines requires different abundances; add a Gaussian velocity field of 2.2 km/s and all lines are fit with one abundance. Microturbulence is required for most stars with Teff < 10000 K and corresponds to convective instability, at least in cooler stars. It is detectable even in broad-lined stars. Since it is only one number, one can characterize the variation of the amplitude of velocity fields across the HR diagram, but no further information can be derived.
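
Concretely, the microturbulence parameter xi enters the line absorption profile as an extra Gaussian velocity added in quadrature to the thermal speed. A minimal sketch (the temperature and the 2.2 km/s value are illustrative, Sirius-like assumptions):

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
AMU = 1.66053907e-27  # atomic mass unit, kg

def doppler_width_kms(temp_K, mass_amu, xi_kms):
    """Gaussian Doppler velocity width in km/s: thermal motion and
    microturbulence added in quadrature (standard 1D recipe)."""
    v_thermal = math.sqrt(2 * K_B * temp_K / (mass_amu * AMU)) / 1e3
    return math.sqrt(v_thermal ** 2 + xi_kms ** 2)

# Fe (56 amu) at ~10000 K, without and with the 2.2 km/s quoted above:
print(doppler_width_kms(10000, 56, 0.0))  # ~1.7 km/s (thermal only)
print(doppler_width_kms(10000, 56, 2.2))  # ~2.8 km/s (with xi)
```

Weak lines are insensitive to this broadening, but saturated lines are desaturated by it and grow in equivalent width, which is why a single number can reconcile weak and strong lines.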

Spectral line profiles show asymmetries due to asymmetric flows. The distortion should depend on where in the granule the line is formed. Different areal coverage of rising and falling plumes cause asymmetries and shifts in the line profiles.

Macroturbulence is required to model line profiles of most main sequence stars. Line profiles of giants and supergiants are more "pointed", with broad shallow wings. Again, one parameter (zeta) represents a characteristic velocity. Zeta varies systematically across the cool part of the HR diagram (F0 to K5). For hotter stars, rotation masks macroturbulent velocity fields. Macroturbulence drops to zero above A0V. Among hotter stars (Teff > 10000 K), the situation is confusing (see Lyubimkov et al. 2004 and Przybilla et al. 2006).

Bisector curvature (line profile asymmetry) is another way to detect convection. Gray and Nagel (1989) showed that bisectors are reversed for cool (K) stars vs. hotter (F) stars, with the reversal taking place at about G0. A stars have reversed bisectors, late B stars have no curvature at all.

The use of 3D models is limited - if a model disagrees with observation, testing changes is time consuming. On the other hand, MLT and other convection models can be used for testing.

Conclusions:
  • Stellar atmosphere velocity fields are clearly detectable in the spectrum.
  • The behaviour over the HR diagram is varied; the largest velocities are found in supergiants.
  • Modelling is making progress at connecting convection theory with observations.

Convection in Astrophysics, Session B, Oral Contributions (Tue, Aug. 22)

H. Kjeldsen: What Can We Learn About Convection From Asteroseismology?

This talk presented work done together with Tim Bedding and focussed on solar type stars. First, schematic movies of non-radial oscillation modes were shown and the classification of modes by degrees l and radial order n reviewed. The construction of power spectra and echelle diagrams was described. From the echelle diagram, the density and age of a star can be inferred.
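
Here is a minimal sketch of the echelle construction (entirely my own illustration; the frequencies are generated from the asymptotic relation, not taken from data):

```python
import numpy as np

# Asymptotic high-order p modes: nu(n, l) ~ dnu * (n + l/2 + eps),
# where the large separation dnu scales with the square root of the
# mean stellar density. Roughly solar values are assumed here.
dnu, eps = 135.0, 1.45  # micro-Hz, illustrative

nu = np.array([dnu * (n + l / 2 + eps)
               for n in range(15, 25) for l in (0, 1, 2)])

# Echelle diagram: plot each frequency against (frequency mod dnu);
# modes of equal degree l then stack into near-vertical ridges.
for f in nu[:6]:
    print(f"nu = {f:7.1f}  nu mod dnu = {f % dnu:6.1f}")
```

The large separation dnu measures the mean density, and departures of the ridges from straight vertical lines encode the interior structure, which is how the age can be constrained.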

For the solar-like stars alpha Cen A and B, data have been obtained with UCLES at the AAT and UVES at the VLT. The precision is 50-70 cm/s with UVES for alf Cen A, i.e. almost as high as with a solar satellite. The echelle diagrams show l = 2, 0, 3, 1 modes, with a much smaller oscillation amplitude for alf Cen B. alf Cen B has a much higher density than alf Cen A and the Sun. alf Cen A models do not fit well. alf Cen B models fit better, but small deviations remain.

beta Hydri is a G2 subgiant (an "old Sun"). For this star, CORALIE, UCLES and HARPS data have been obtained (one can "see the star oscillating on screen"). The echelle diagram shows normal l = 2, 0 modes and a crazy l = 1 mode, which could actually be mixed modes (due to so-called avoided crossings). Models by Di Mauro et al. (2003) seem to show indications of avoided crossings. Mixed modes are extremely sensitive to convection in the cores of these stars.

What we can learn about convection from asteroseismology: Mixed modes tell us about core convection (beta Hydri), the structure in the echelle diagram gives information about the outer convection zone, and the surface can be studied with p-mode lifetimes and the "noise" background.



G. Cauzzi: Solar High Resolution Spectral Observations Compared with Numerical Simulations

In this talk, spatially resolved spectral observations were presented, which can be used to examine the models presented before. A movie of 80 arcsec of the solar surface at 7200Å during 1 hr with a time step of 20 s was shown. The diffraction limit is 0.24 arcsec = 160 km.

The observations are imaging spectroscopy (rapidly tunable narrow band filters, mostly IR, with a high transparency of 15-20%) using short exposure times (20-50 ms) and a large 2D field of view (60 arcsec squared). Narrow passbands (R > 200000) and sequential spectral sampling (10-20 points per line) are used. The instrument is called IBIS and the system works with adaptive optics. As an example, quiet Sun data from June 2004 were shown - the Fe I line at 709 nm at 16 spectral positions with R = 250000.

Simulations and LTE spectral synthesis for 6 snapshots were provided by M. Asplund, with a horizontal extent of 6x6 Mm (120 km step), matching the observations, and 1 Mm vertical extent (+800 to -200 km, step 10 km). The synthetic data were smeared with the telescope and atmospheric PSF and the instrumental resolution.

The spatial power spectra agree between observations and simulations. The line vs. continuum intensities at +-60 mÅ agree. The intensity distribution with wavelength and the velocity distribution were found to fit as well.

Reverse granulation is seen in the simulation and the observations. When plotting equivalent widths vs. continuum intensity, the simulation lies a little bit higher. Average profiles are fit by the simulations when using the higher Fe abundance (7.67).

Conclusions:
  • Excellent agreement of high resolution observations with simulations
  • Validates both realism of simulations and reliability of instrument
  • A useful tool in addition to abundance analysis

Convection in Astrophysics, Session B, Invited Talks (Tue, Aug. 22)

A.G. Kosovichev: Helioseismic inferences on subsurface solar convection

This talk was given at quite high speed, so it was difficult to take notes. Here is an attempt at a summary.

The idea of helioseismology is to probe the solar interior with waves. Global helioseismology estimates the frequencies of normal modes from oscillation power spectra. Time-distance helioseismology measures travel times of acoustic or surface gravity waves.

The depth of the solar convection zone can be measured with helioseismology (about 0.29 solar radii). It is more shallow in polar regions.

Differential rotation produces a "tachocline" - a rotational shear layer mostly below the convection zone (see Kosovichev 1996, ApJ 469, L61 - an analysis of BBSO data).

There are differences between the standard solar model and seismic models, which result in a difference in the solar radius of about 300 km. This could be caused by convective overshoot at the top of the convection zone.

New local helioseismology provides maps of the solar surface (synoptic maps of subsurface flows). Supergranulation can be observed by time-distance observations, vertical flows are difficult to measure. The observations show that the supergranulation pattern moves faster than the plasma.

Magnetoconvection in sunspots as well as solar cycle variations because of meridional circulations were also discussed.

Conclusions:
  • Local and global helioseismology provide important constraints for convection in the Sun.
  • Large scale numerical simulations are needed to interpret the data.
  • Helioseismology can be used to verify simulations.



M. Asplund: Convection and the solar elemental abundances - does the Sun have a sub-solar metallicity?

Solar system abundances can be measured in meteorites (very high accuracy, but depleted in volatile elements) or the solar atmosphere (modelling dependent, very little depletion). The solar atmosphere is dynamic and three dimensional (3D) due to convection.

1D solar atmosphere models make various simplifying assumptions, but have the advantage that radiative transfer can be treated in detail. 3D solar atmosphere models are more realistic but have simplified radiative transfer. They are essentially parameter free.

The temperature structure in 3D is very different from 1D - it is very steep in upflows and there are significant inhomogeneities. The mean 3D structure is similar to 1D MARCS models but cooler than the semi-empirical Holweger-Müller model.

3D line profiles differ from 1D profiles. The spectrum formation is highly non-local and non-linear and strongly biased towards upflows. Profiles of an observed solar Fe line were shown and compared with 1D and averaged 3D line profiles. The 3D profile agrees without the need for micro- and macroturbulence. Line asymmetries and shifts are very well reproduced.

Solar C, N, O abundances have been derived using a 3D solar atmosphere model, non-LTE line formation when necessary, and atomic and molecular lines with improved data. Details are in Asplund et al. (2000-2006).

Oxygen diagnostics: In 1D, atomic and molecular lines give discordant results (log O = 8.6-8.9). In 3D, there is excellent agreement. The [O I] 630 nm line is blended with Ni, which gives a correction of -0.13 dex (not noticed in 1D), actually larger than the difference 3D minus 1D (-0.08 dex). The O I 777 nm feature has been calculated with full 3D non-LTE line formation, and the main difference is non-LTE minus LTE (-0.2 dex). This is most significant at the limb. OH vibration-rotation lines in the infrared have been calculated with 3D LTE line formation. Molecular lines are extremely sensitive to temperature.

Carbon diagnostics: Again, in 1D there are discordant results (8.4-8.7), while in 3D there is excellent agreement. CO vibration-rotation lines as well as weak CO lines give an abundance of 8.39+-0.05. There are still problems with the strongest CO lines. C and O isotopic ratios can be derived, and they agree with terrestrial values.

There are also preliminary 3D results for all elements from Li to Ni.

The implications are a significantly lower solar metal mass fraction Z - from 0.0194 (Anders & Grevesse 1989) to 0.0122 (Asplund et al. 2005). This alters the cosmic yardstick, and makes the Sun normal compared with its surroundings (e.g. OB stars). The problem is that solar interior models with the new abundances are in conflict with helioseismology (there is no solution yet).

Conclusions:
  • 3D + non-LTE + new atomic/molecular data give significantly revised solar CNO abundances
  • The new abundances solve a lot of problems but are terrible for solar modelling
  • Coming soon: New 3D models with improved radiative transfer treatment

Convection in Astrophysics, Session A, Oral Contributions (Tue, Aug. 22)

S. Wedemeyer-Böhm: Dynamic Models of the Sun from the Convection Zone to the Chromosphere

This talk presented recent results from work with the CO5BOLD code (the code presented in the talk by Steffen). Classical studies of the solar chromosphere encounter the problem that UV and CO diagnostics give different results for the temperature as a function of depth.

The CO5BOLD code was used (including recent upgrades such as time-dependent chemistry) to simulate a box extending from -1400 to +1400 km vertically and 5000 km horizontally (where the photosphere is located at 0 km).

The resulting thermal structure shows that shockwave action dominates the upper layers. A fast evolving pattern of hot shock fronts is produced, with hot and cool temperatures next to each other. Thus, a temperature rise in the chromosphere can be "faked" by appropriate weighting, meaning e.g. that some of the diagnostics (spectral lines) might have a preference to form in hot regions.

The magnetic field was also studied, and the magnetic field strength pattern was found to evolve slowly in the lower layers (-1200 km, convection zone), fast in the upper layers (+1200 km, chromosphere), and with intermediate speed in the photosphere.

Further, the hydrogen ionisation was studied dynamically, and a different behaviour found depending on the equilibrium assumptions: In LTE, there are large gradients between high and low ionisation degrees, whereas with a non-equilibrium treatment there is much less variation of ionisation.

Finally, the results of a CO simulation were shown. In the upper layers (> 0 km), the relative abundance of CO is quite high, implying that a large fraction of carbon is bound in CO in the chromosphere.

Future plans include more realistic models of the chromosphere, with time-dependent ionisation, non-LTE radiative transfer and larger models including a magnetic network, as well as detailed comparisons with observations, e.g. from ALMA.



F. Rincon: Anisotropy, Inhomogeneity And Inertial Range Scalings In Turbulent Convection

This talk was about inertial-range scaling laws in convection and the spectrum of turbulence in the solar photosphere.

It began with a reminder that in the Sun, turbulence is observed from above, which is a different perspective than in the laboratory.

Challenges of calculating turbulence in the Sun are that there is a boundary, which implies inhomogeneity, gravity is present as a force but also causes anisotropy, and there are plumes (/cells/eddies). So, the question was raised: what are the spectral scaling laws that we can observe?

Isotropic theories of turbulent convection have been around since 1941, when Kolmogorov derived an energy distribution E(k) proportional to k^(-5/3). In 1959, Bolgiano and Obukhov presented a more complex theory, including forcing, anisotropy and inhomogeneity. The Kolmogorov and the more complicated equations were presented in the talk, but it is of course impossible to reproduce them here.

One point is that they involve an important length scale, the so-called Bolgiano length (LB), which is independent of the Nusselt, Rayleigh and Prandtl numbers. Unfortunately I do not recall how it is defined and what its meaning is, only that it defines a so-called "injection range": 1000 km < LB < 10000 km, which is confirmed by spectral budgets in polytropic convection.
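
For the record, the definition found in the turbulence literature (added by me afterwards, so treat with care) is

```latex
L_{B} \;=\; \frac{\epsilon^{5/4}}{\epsilon_{\theta}^{3/4}\,(\alpha g)^{3/2}} ,
```

where epsilon and epsilon_theta are the dissipation rates of kinetic energy and of temperature fluctuations, alpha is the thermal expansion coefficient and g the gravity. Buoyancy dominates at scales above L_B, where the Bolgiano-Obukhov scaling E(k) proportional to k^(-11/5) is expected, while the Kolmogorov k^(-5/3) range is recovered below it.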

The main conclusions were that photospheric turbulence is anisotropic and inhomogeneous, and that the anisotropy and forcing occur at an observable scale. Applications of this theory are presented in Yousef, Rincon and Schekochihin (2006). For further reading see Rincon (2006, J. Fluid Mech. 563, 43) and Rincon et al. (2005, A&A 430, L57).

Thursday, August 24, 2006

Prague pub tips


It turned out that the notes I am taking during the talks are not coherent enough and too detailed to be published directly. It takes a lot of editing and that takes time which is hard to find. That's the reason why post dates lag behind talk dates. I will try to post a few more summaries before the end of the conference and some at a later point.

In the meantime, here are two recommendations for pubs in Prague where I have been and where you can go for lunch or dinner:

At U Stare Posty (= Old Post, Opletalova 17), you get good food at a reasonable price with a refreshing glass of Staropramen. You can sit outside in a nice backyard.

A more expensive tourist place is U Fleku (Kremencova 11) where they serve their own dark beer (13 degrees) and play accordion music.

Convection in Astrophysics, Session A, Invited Talks (Tue, Aug. 22)

M. Steffen: Radiative hydrodynamics (RHD) models of stellar convection

Steffen presented models of stellar surface convection which implement 3D hydrodynamics, thermodynamics (equation of state) and 3D radiative transfer (including realistic opacities). The code used to calculate the numerical solution is CO5BOLD, described in detail in its on-line User Manual. Some important points are that no diffusion or Eddington approximations are made in the radiative transfer, and that the opacities are based on ATLAS or MARCS ODFs using the multi-group approach (5 bins). The calculations are done in strict LTE.

A box-in-a-star setup was used (a Cartesian simulation box with periodic side boundaries), which covers a tiny fraction of the solar convection zone. A model is characterized by Teff, log g, and abundances, with no additional free parameters, and is intended to be used for direct comparison with real stars.

An example for such a comparison was presented, using observations from the Swedish 1-meter Solar Telescope - a 15"x15" field at a wavelength of 4364Å compared with a simulation using 400x400x165 cells (see Steffen 2004). Another example was a favorable comparison of simulations to Dutch Open Telescope observations (Leenaarts & Wedemeyer 2005 A&A 431, 687).

A presentation of the flow topology of deeper layers in the numerical simulation (not accessible to observation) showed that the granulation pattern is a very shallow surface phenomenon.

Next came a strictly differential comparison of 3D RHD simulations with 1D Mixing Length Theory calculations for the Sun: The vertical velocity in 1D is too small throughout the simulated region, and the temperature structure cannot be matched in 1D with a single mixing length parameter.

The next part was on surface convection in A type stars, which possess two distinct convection zones (H/He I + He II). The simulations show that the buffer zone between the convection zones is fully mixed and the upper boundary of the photosphere is dynamic (see Freytag & Steffen 2004). The surface convection cells of A type stars are larger than in the Sun. The simulations can be used to calculate synthetic spectra of A type stars, which are presented in a poster by Oleg Kochukhov (No. 1 in Terrace I).

A list of applications was followed by a discussion of the solar oxygen abundance, which has a history of downward revisions. It was redetermined with a 3D CO5BOLD model atmosphere (completely independent from Asplund et al.) using 9 OI spectral lines. A fit to Neckel and Kurucz fluxes gives a value of 8.70+-0.05 (this is work in progress). The 3D effects are not more than -0.04 dex.

Conclusions: There are quantitative and qualitative differences between 1D and 3D atmospheres. The photospheric temperature is reduced in metal poor stars (metallicity of -2 dex, not mentioned in the summary above). The numerical simulations can be used to calibrate MLT for stellar evolution calculations. CO5BOLD 3D atmospheres can be used for detailed studies of spectral line formation in the presence of photospheric temperature inhomogeneities.




J. Trujillo Bueno: Radiative transfer modeling of the Hanle effect in convective atmospheres

This talk was about the Zeeman effect versus the Hanle effect.
Scattering experiments in the absence and in the presence of a magnetic field show that the Hanle effect reduces the line scattering polarization amplitude. This can be used to measure weak magnetic fields, whereas the Zeeman effect is blind to small-scale entangled magnetic fields, whose opposite-polarity signals cancel within a resolution element - effectively, the Zeeman effect detects only about 1% of a resolution element.

For the Sun, voids in the circular polarization pattern are detected via the Zeeman effect. Are the voids field-free?

As an example, modelling of the Sr I 4607 line profile and a fit to observations (Stenflo et al.) was presented. The Sr I line is a triplet, where the magnetic sublevels are unequally populated due to optical pumping by directional light. This is important if there is anisotropic illumination, which is the case at the stellar surface (limb darkening). To model this anisotropy accurately, it is essential to use 3D models. To fit the Sr I line, a mean magnetic field strength on the order of 100 Gauss is required to account for the Hanle effect (no distinction between upflows and downflows).

Another example is the molecular lines of C2 (Swan system R1, R2, R3). Here, the Hanle effect requires a mean magnetic field strength on the order of 10 Gauss. Where does the discrepancy come from? The solution (presented in Trujillo Bueno 2003, 2004) is an anisotropy between downflows and upflows. The observed scattering polarization in the (weak) molecular lines comes mainly from upflowing regions, whereas the scattering polarization in the strong Sr I line comes mainly from downflowing regions.

Tuesday, August 22, 2006

F. Cattaneo: Challenges to the theory of solar convection (Review, Mon, Aug. 21)

The second talk of Symposium 239 was a historical introduction to numerical simulation of convection: over three decades, simulations have proceeded from 1D to 3D. Here, one distinguishes between global (whole star) and local (small box) simulations (note that this is a different definition than in Canuto's talk). The punchline is that we have become very good at local ones and have much to do for global ones.

But first came a detailed account of the evolution of computing power: from the IBM 370 mainframe (Megaflops) on to machines dedicated to number crunching (CDC 7600, Cray 1 in the 70s and early 80s), which were vector machines instead of scalar machines, to the Cray 2, etc. Machines became more and more compact, but at the end of the eighties the limit of compactness was encountered (problems with cooling, 265 MB memory). A change of principle became necessary, and message passing was invented (the new idea of parallel programming), leading to cluster machines (early nineties). This architecture has basically prevailed until today (now with 100s of Gigaflops and 10000s of processors) and seems to be the way of the future. Algorithms have evolved in parallel as well.

The talk continued with convection simulations - moving from Boussinesq to compressible. The effects of strong stratification (departures from Boussinesq) on stability and flow structure were covered: the center of action moves down, whereas upper and lower layer velocities are comparable.

The next topic was "buoyancy breaking". The role of pressure fluctuations is to enhance downflows and retard upflows, and downflows and upflows have different filling factors (see Hurlburt, Toomre & Massaguer 84; Massaguer & Zahn 80).

Several possible explanations to the unanswered question "Why does MLT work?" were proposed.

A discussion of the interaction between convection and other dynamical ingredients included interfacial motions between the radiative interior and the convection zone, concluding that "Whether we understand overshooting and penetration is an open question, but we can certainly model it." (Hurlburt et al., Roxburgh & Simmons, Malagoli & Cattaneo, Brummell et al.). Effects of rotation and magnetic fields were covered as well.

The last few minutes were about global simulations.
Early observations included surface differential rotation and the activity cycle of the Sun. Simulations (e.g. Gilman, Glatzmaier, Miller) were Boussinesq and could reproduce the equatorial acceleration. Later, we had helioseismology and simulations had higher resolution and included dynamo action.

Conclusions:
  • Local models are in a good state
  • Global models are not in a good state - cannot reproduce features of the Sun
  • What do we need?
    • More resolution? How much more?
    • Better physical understanding?
Another well-structured talk which gave a good overview of the topic.

V.M. Canuto: Theoretical Modelling of Convection (Review, Mon, Aug. 21)

Canuto started the symposium (239, Convection in Astrophysics) by giving a brief history of the theoretical modelling of convection. It starts in 1890 with Reynolds introducing the Reynolds stresses and continues with Boussinesq and the "down-gradient" approximation with diffusivity, which is the beginning of the Mixing Length Saga.

In 1929, Friedmann (the same one known from the Friedmann equations) wrote that the Navier-Stokes equations (NSE) should also yield an equation for the correlations of the fluctuations, not only a dynamical equation for the mean components, which led to the Reynolds stress model (RSM). But he was ignored - nobody continued in that direction.

Only in 1940 did Chou publish the first dynamical equation for the momentum Reynolds stresses. He treated only shear flows (no buoyancy), and the engineering community has used the RSM as a working tool ever since. There was a nice story of how Chou was led to publish this equation, which I cannot reproduce here. The engineering community was followed by the geophysical community (early seventies), which used the RSM for global-warming calculations.

Eventually, we get to stellar convection - its fortunes and misfortunes. Fortune: most of the convection zone is unstably stratified - the temperature gradient is essentially equal to the adiabatic one. Misfortune: the layers below the convection zone are stable but are affected by overshooting. The advantage of the convection zone is that it contains large eddies with long lifetimes which carry most of the energy. One can use a mixing length invented by Prandtl (an engineer, not an astrophysicist!) to describe one large eddy, or the Canuto & Mazzitelli model with many eddies. The disadvantage is that convection is non-local, but all models use the local approximation (which is bad because it is false). The overshooting zone is stable, contains small eddies and is therefore local, but difficult to model because of short lifetimes, vorticity, etc.

A very good paper was mentioned (even though it contains no equations at all), by J.D. Woods (1969, Radio Science vol. 4).

Now followed a detailed account of the Peclet number debate with lots of equations, as well as a discussion of the modifications that should be included in the equations for mixing and transport: both the shear and the vorticity tensor should appear, and buoyancy and gravity waves should be included in the Reynolds stress tensor. Check out MNRAS 328, 829 (2001), which contains the complete algebraic expressions.
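For reference, the standard definition at stake in that debate (not specific to Canuto's formulation): the Peclet number compares advective to radiative-diffusive heat transport,

\[ \mathrm{Pe} = \frac{u\,\ell}{\chi_\mathrm{rad}} \]

with u and \ell a typical convective velocity and length scale and \chi_rad the radiative diffusivity. Pe >> 1 holds in the bulk of the convection zone and Pe of order unity or below in the overshooting layers, which is roughly why the two regions call for different treatments.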

Further topics: "salt fingers" (in oceans; they correspond to molecular weight gradients in stars) and semi-convection, and their influence on overshooting: salt fingers cause larger overshooting, semi-convection causes smaller overshooting. Shear destroys salt fingers (in models and experiments), showing that instabilities can act against each other.

After a few more equations, the non-local, so-called plume model of Morton, Taylor & Turner (1956) was mentioned but found to be unsuitable for astrophysics (because it assumes the downflowing plumes cover only a small area).

Some final words on turbulence:
It's not a source of anything! It is a very efficient distribution mechanism at zero cost (like superconductivity). Without an energy source it dies out. We need a formalism resilient enough to accommodate different processes without changing the rules of the game every time. The only formalism that can do that is the one derived directly from the NSE - the Reynolds stress model.

All this made for an exciting and at times entertaining first 40 minutes of the Symposium.

New kid on the blog

My name is Ulrike Heiter and I'm a senior postdoc at Uppsala Astronomical Observatory, working in the Stellar Atmospheres Group. I will continue this blog by summarizing some talks from the second week of the general assembly.
We will move on to something completely different - mainly "Convection in Astrophysics" (IAU Symposium 239) and probably also "Binary stars as critical tools and tests in contemporary astrophysics" (IAU Symposium 240).
Just as for the previous bloggers, please accept in advance my apologies for errors or omissions. Enjoy reading and don't hesitate to write comments!

Monday, August 21, 2006

J. Casares: Observational evidence for stellar-mass black holes

This is the opening review talk of Symposium 238 on Black Holes: from stars to galaxies across the range of masses. This will be the last talk I listen to before I have to make my way to the airport, but Ulrike Heiter, a colleague from Uppsala who works on stellar atmospheres, said that she might write a few things from S239 about Convection, which starts this afternoon and will go on for the rest of this week.

Now to the talk by Casares: X-ray binaries are believed to be a stellar-mass black hole orbiting a normal star. From the radial velocity shift one can determine the orbital period of the system, and with additional information on the star (its mass) and the system (its inclination), the mass of the black hole can be calculated. The first such measurement was made about 30 years ago.
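The workhorse behind such a measurement is the binary mass function, f(M) = P K^3 / (2 pi G), which needs only the orbital period P and the radial-velocity semi-amplitude K of the donor star and is a strict lower limit on the companion mass. A minimal sketch (my own illustration, not from the talk; the example numbers are roughly those of V404 Cyg):

import numpy as np

G = 6.674e-11    # gravitational constant [m^3 kg^-1 s^-2]
MSUN = 1.989e30  # solar mass [kg]

def mass_function(P_days, K_kms):
    """f(M) = P K^3 / (2 pi G) in solar masses: a strict lower
    limit on the mass of the unseen companion."""
    P = P_days * 86400.0
    K = K_kms * 1e3
    return P * K**3 / (2.0 * np.pi * G) / MSUN

def compact_object_mass(P_days, K_kms, M_donor, incl_deg):
    """Solve f(M) = M^3 sin(i)^3 / (M + M_donor)^2 for M [solar masses]."""
    f = mass_function(P_days, K_kms)
    s3 = np.sin(np.radians(incl_deg)) ** 3
    # cubic in M: s3*M^3 - f*M^2 - 2*f*M_donor*M - f*M_donor**2 = 0
    roots = np.roots([s3, -f, -2.0 * f * M_donor, -f * M_donor**2])
    real = roots[np.isreal(roots)].real
    return real[real > 0][0]

print(mass_function(6.47, 209.0))  # ~6 Msun, already above any plausible neutron-star mass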

The reason these systems shine in X-rays is that material streams from the "donor star" towards the BH and forms an accretion disk around it that gets hot enough to do so. Depending on the type of the donor star, the accretion disk may even be optically brighter than the star during an active phase.

The basic argument, of course, is this: if you find a massive object in a close orbit with a star, cannot see it, and physics tells you that no known object of that mass could withstand its own gravitational pull, it must be a black hole. However, the number of dynamically confirmed BHs is still quite low (~20).

The lack of pulses and X-ray bursts also indicates that there is no hard surface for infalling material to hit.

So how many are there, and what is the mass spectrum? Extrapolating from the known numbers suggests that there should be 1000 dormant X-ray transients (i.e. binaries) in our galaxy (this fits with binary models), but from stellar evolution there should be 10^8 BHs in the Milky Way, of which only the tip of the iceberg is seen as XRTs.

The 15 reliable mass estimates range from 4 to 15 solar masses, with more objects at the low-mass end. The most massive ones seem to lie above the values predicted from stellar evolution (Fryer & Kalogera 1999), but we are talking small-number statistics here (2 objects).

G. Tancredi: Activities of the Observatorio Astronomico Los Molinos, Uruguay

What can be done with a small telescope and a CCD camera? Quite a lot. The speaker lists several projects they carry out at their facility, including confirmation of near-earth objects, comet identification and photometry, and asteroid photometry and astrometry. Their day-to-day follow-up observations are valuable contributions to the scientific community, and tackling regions of the sky that are rarely studied is a productive niche.

O. Alvarez: Planetario Habana: a cultural centre for science and technology

The funding for this planetarium came from abroad (maybe from the IAU, I did not catch that), and they are using it to build up a centre for the teaching of science and technology in central Havana. It is integrated with the museums of the city and will promote astronomical knowledge, including cosmology, to the public. Architecturally, the big sphere inside the building that will hold the planetarium represents the Sun, and there will be models of the other planets to the same scale.

Opening is planned for the end of 2007.

An important comment was made: interact with the teachers, provide help for them, and offer a special program that is different from the popular show. Many planetaria seem to have trouble keeping a steady audience that is used to visually impressive films and shows.

P. Rosenzweig: Encounters with Science at ULA, Venezuela: An Incentive for Learning

This is about a program to establish science at all levels of education, in order to counteract the lack of interest in science and the deterioration in science learning. They provide well-prepared personnel who can aid faculty members who want to improve things, and they organise events ("Encounters with Science") for children at the school of science, where an extra effort is made to fight the impression that science is hard and that scientists are heartless, boring people.

These events have many stands with experiments (with much voluntary work from students) and are very popular, with several thousand participants and intensive media coverage. This initiative from the westernmost part of Venezuela has spread over many parts of the country and will soon be held for the seventh time.

J. Fierro: Astronomy for Teachers in Mexico

This talk is about basic education, and with a wonderful metaphor (an ape mother teaching the use of tools) she points out the basic structure of learning, which builds on children's natural interest and practical experiments.

The speaker was approached by pre-school teachers with 650 questions from the children, and books were written about how to answer them. These books are very helpful for, and popular among, teachers. Several other books are presented, and she throws a copy of each into the audience. :-)

In middle school, where pupils think more about sex than science, the curriculum has less astronomy and more social problems, and there are books by the speaker in which different issues are addressed in a popular but scientific way.

Finally, she stresses the importance of teachers and of finding good ways to teach, because education is the most important way out of underdevelopment.

H. Levato: Formal Education in Astronomy in Latin America

The speaker starts with an overview of the countries and places where astronomy can be studied at both undergraduate and graduate level. The amount of activity and the number of students scale with the size of the country. 90% of the 500 PhD students in astronomy are in Argentina, Brazil, Chile and Mexico.

There is an intermediate group of countries where there is serious effort in astronomy, but it would take more resources to consolidate their astronomy programs. The largest number of countries, however, have serious deficiencies in that respect. The speaker also found a correlation between a country's astronomical effort and its reply time to emails. :-)

Although there are many astronomical facilities in Latin America, it is people who write the papers, and manpower is often the limiting factor.

A comment pointed out that Venezuela should probably belong to the first group - something the speaker had already suggested, though with a question mark.

J. Ishitsuka: A new astronomical facility for Peru: transforming a 32m-antenna into a radio telescope

There are some big antennas around that are no longer used, because communication has moved to other means. Making telescopes out of them requires expertise that is not necessarily available locally. For this project in Peru, they collaborated with Japanese astronomers.

The transformation of this satellite communication antenna is meant to start radio astronomy in Peru, train radio astronomers by building up expertise, and of course promote international collaborations. The antenna is good enough to go up to 2.2 GHz, and the site is high, remote and has good conditions. Its location on the globe also makes it interesting for Very Long Baseline Interferometry. They have a working receiver and are well underway.

S. Haque: The Caribbean view from the ground up

Initially, the drop-down list in the registration form for this meeting did not contain Trinidad (the speaker's home) - this was corrected. In a country of one million inhabitants and two astronomers, they are approached by all sides of society, including religious ones, for information about calendars and the sky. There is an effort in online teaching, and there are popular events like "star parties"; classical seminars, however, are widely ignored.

Astronomy is in the primary school curriculum, voluntary student work is very important, and at university level they have succeeded in sending students to international winter schools and universities. Research has mainly been theoretical, but now also includes other areas, like astrobiology.

They have a 46cm telescope, mainly used for monitoring quasar variability.

R. Kochhar: Promoting astronomy in developing countries: a historical perspective

Is astronomy a "Western" science? Astronomy has been going on all over the world throughout human history. The speaker tries to draw attention to insensitivities, for example in the history sections of textbooks. The "cultural perspective" should be taken more into account.

J. Hearnshaw: A survey of published astronomical outputs of countries 1976-2005

I could not resist returning to the meeting before I fly back to Sweden tonight. I was tempted by the session about the "Virtual Observatory", but I guess one can find out about that on the web anyway.

Therefore, I am sitting in Special Session 5 on "Astronomy for the Developing World" right now. I only caught the last minutes of J. Hearnshaw's talk, but the summary contained the following:
- There are 1.39 astronomers per million population worldwide.
- There are 9000 members in the IAU.
- The majority of papers are published by IAU members.
- 112 countries have no IAU members, but 3/4 of the world's population live in IAU member countries.
- The GDP of a country correlates with the number of its IAU members.
- It also correlates with the number of papers published.
- Since 2001, there has been a rapid increase in multi-national papers and large collaborations.

Saturday, August 19, 2006

The Planet Issue

Below I commented on the link to this blog from Seed Magazine, where they said that one could find out in this blog whether Pluto is a planet or not. I wrote that I would not write about this issue at all because I find it unimportant. They have now even added a reply, correcting their "mistake". :-)

Maybe my choice of words was provocative, but for some reason most of the links to this blog seem to be about these three lines and how bad they are. I will now list the major criticisms and address them. Some wrote that:
1. I was a snobbish extra-galactic astronomer bashing planetary science.
2. I had forgotten where my money came from and that popularising is important.
3. Even if this topic may be unimportant, it attracts attention and all popular attention to astronomy (or the IAU) is good.
4. I should have written about it.

My replies:
1. This could not be further from the truth. Yes, this subject is quite remote from what I do myself, but finding out how the solar system came about is great science and with the discovery of extra-solar planets, this topic is deservedly becoming more and more popular among astronomers.

2. Of course it is! Popularising is immensely important and I think that every astronomer is aware of that. But isn't it only logical that I try to do that in my own field - to show that what I myself do is interesting and important? Where that is not possible or applicable, I fully agree that planetary science is a popular topic. When I do popular shows at our old refractor, I certainly do not point at faint galaxies, but at the moon and the planets.

3. Now we come to the point where I disagree. Attention for astronomy is good, yes. The question of whether Pluto is a planet, and how to define a planet, has gotten attention, yes. Does that automatically make it good? I think not, and here is why: it is semantics, not science. It creates the (almost totally wrong) impression in the public that what astronomers do is sit in committees and debate what a planet is. Is that what they are willing to give tax money for, instead of for new discoveries? I doubt it. In addition, it takes media attention away from real science (including planetary science, in case you missed point 1), and I think I am far from the only one who feels misrepresented by this issue.

4. Must I write about everything? I think everyone has to make choices, because you cannot cover it all. I wrote about this meeting from my own perspective, and the intention was to give people a glimpse of what is going on here. I chose to ignore the "planet issue" and hope it has become somewhat clear why.

Friday, August 18, 2006

M. Wood-Vasey: The ESSENCE of dark energy

There are different approaches to measuring w: baryon acoustics, galaxy clusters, supernova luminosity distances, and an on-the-spot addition of weak lensing (it was fun watching the speaker change his slide...). For supernova distances the systematics have to be understood and the statistics done right (finding the correct confidence limits on w).
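For context, w is the dark-energy equation-of-state parameter (the standard definition):

\[ p = w\,\rho\,c^2, \qquad \rho_\mathrm{DE} \propto (1+z)^{3(1+w)} \ \text{(for constant } w\text{)} \]

so w = -1 corresponds to a cosmological constant, and supernova luminosity distances constrain w through the expansion history.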

M.W. then made some general remarks that were quite funny, and I also wholeheartedly agree on most of them.
Thoughts for observers: design your experiment, design your analysis, test your analysis, ignore the theorists, get a pet theorist.
Thoughts on theorists: too many theories (quintessence as an example), far too many models (they can fit anything) -> all M.W. wants is a well-motivated theory.

The ESSENCE survey is a 6-year project at CTIO in Chile. Data are released immediately after reduction; two filters are used, and they get SNe at z~0.7. They cross-check the results using the SNLS SALT fitter. They find w=-0.88 (±0.12), still consistent with w=-1 (the error margin is only 68% confidence). ESSENCE uses a similar approach to image subtraction and SN detection as the project I'm working on.

P. Garnavich: What do host galaxies of Ia supernovae tell us

It's important to understand host galaxy properties of Ia SNe to beat down the systematic uncertainties in the SN photometry => better determination of cosmological parameters.

Ia SNe are not good standard candles (they vary by a factor of 10-30). Dust extinction laws have to be used to correct for dust in the hosts; this uses the observed colours of the SNe. Measuring the host galaxy properties helps constrain this and other uncertainties: (i) metallicity; (ii) star formation history; (iii) dust properties.
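The colour-based correction works roughly as follows (the textbook recipe; P.G.'s exact implementation may differ): the reddening is the observed minus the intrinsic colour, and an assumed extinction law turns it into an extinction,

\[ E(B-V) = (B-V)_\mathrm{obs} - (B-V)_0, \qquad A_V = R_V\,E(B-V) \]

with R_V ~ 3.1 for Milky-Way-like dust. Whether host-galaxy dust really has R_V ~ 3.1 is precisely one of the systematics at stake here.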

A strong division of Ia properties among different host morphologies is known and is confirmed with the current sample. Fast-declining Ia's are found preferentially in spiral/SF galaxies (one needs more than morphology; spectra are better than colours). Perhaps due to a brightness-metallicity relation? No clear trend is found for decline rate vs. host metallicity. Fast SNe are found only(!) in hosts with very low SFR (more important than the metallicity). A correlation between the SFR of galaxies and the numbers of bright SNe shows that brighter SNe are more common in galaxies with higher SFR. No clear change of this with redshift is detected (an SNLS result).

There are many more high-SFR galaxies in the field than among SN hosts; P.G. claims this is support for delay times of Ia's (or rather a second delay-time "channel", with Ia SNe that are not related to the ongoing SF in their galaxies).

R. Ellis: New constraints on the comoving SFR in the redshift interval 6

A declining UV luminosity density at z>3 is found for dropout galaxies. This is still a bit controversial, mostly due to the big errors on the initial measurements at these high redshifts, but most people agree that it is real. Another interesting discovery is the luminosity-dependent evolution found in dropout galaxies (cf. talks by R. Bouwens and I. Iwata).

An independent check: the accumulated stellar mass has to be produced, so the derived SFHs must be consistent with the assembled stellar mass. R.E. uses GOODS V-dropouts to estimate the stellar mass density at z~3-4. Part of the sample has spectra; these are used to calibrate the method (?; he went pretty quickly through those slides). They parametrize the SFH and compare with the derived masses. The result is that there is too little SF going on to account for the stellar mass history. The missing stars could be formed in low-luminosity systems that occupy the faint end of the LF (and suffer from incompleteness).

A survey of lensed galaxies around a number of Abell clusters has been able to detect objects at very high z. Six objects have been found at z~10(!). The lensing is redshift-dependent in the sense that objects of a certain redshift fall in a specific region ("isophote") of the cluster => a bias against low-z galaxies. These galaxies might(?) have Lyman-alpha emission, but it is very faint in that case. They might contribute significantly to reionization.

D. Koo: CATS: Center for Adaptive optics Treasury Survey of distant galaxies, SNe and AGN

AO is very expensive, so this survey focuses on already well-studied fields (GOODS, GEMS, EGS).

AO for distant galaxies is valuable because: (i) the PSF is a good match to HST's; (ii) galaxy components have sub-kpc sizes ideal for AO (z~0.5-5); (iii) the optical regime is shifted to the NIR. Why not use AO? One needs AO guide stars, the PSF is uncertain (problematic for SN detections?), and the efficiency is low. Laser guide stars are in use; this increases the area of sky accessible to AO.

Studying merging galaxies (resolving the components) and comparing to stellar synthesis models can give information on merging processes at high redshifts. They can find SNe at high redshift inside galaxies at NIR wavelengths (one found at z=1.24), but they need a nearby PSF star and get precisions of ~0.1 mag (which to me suggests that they have problems with the photometry).

The second week

I just tried to convince a colleague, who will be here during the next week, to continue what I started. I've said it before: if you yourself are an astronomer at this meeting and are interested in blogging here, please drop me an email (thomas.marquart (at) astro.uu.se) and I'll add you to the list. The technical part is easy.

Not quite the end

I will post summaries of some more of the talks I've attended; there have been quite a few really nice talks during this week.

On a sidenote...
[Image: Arp220 optical image]

[Image: Arp220 corrected for dust, slightly smoothed using a Voronoi beer algorithm]

The end?

First of all, thanks to Jens, who also wrote some summaries during the last days and posted them below.

The poster session is over (I got some nice feedback from people), I have checked out, and I will soon meet the guy with whom I'll be staying over the weekend (with HC). This means the conference is over for me, and I certainly enjoyed it.

During the week-end, I'll finally get to see Prague. There is a slight chance that I will attend some talks on Monday before I fly back to Sweden in the evening. If not, I will at least address some of the critics that I got for turning down the "planet issue" below.

It was fun to write this blog and it felt good that it at least got some attention. Thanks to all readers. At times it was more effort than I expected to simultaneously listen, get the major points and rephrase them in my own words. I will have to read over all of it after a while to judge for myself whether I succeeded or not.

R. Abraham: Morphology, the 6th road to downsizing

A very apt title - the topic of downsizing has been discussed in almost all of the talks in the galaxy session so far.

Five roads to downsizing:
- Massive red and dead galaxies
- Mass density evolution
- Mass-segregated SFH
- Abundant post-starburst population at z~1
- Evolution of the mass-metallicity relation

R.A. worries that this might be a "bandwagon" that everyone is jumping on. The galaxy-evolution sweet spot is at z~1-2; the derivative of mass assembly is highest at this time. Playing devil's advocate, he finds that if the z~0 points of mass assembly are correct, 50% of massive galaxies date from between z~1-2. But this might be a problem of large errors in the derivations.

They use ACS observations of galaxy morphologies at z~1-2 to investigate how the mass in stars has changed from z~2 to 0. When doing this, you have to make sure you are going deep enough, that you have a sufficiently large area (it should be larger than the HDF), and that you do not misapply the asymmetry vs. concentration diagram (S/N or completeness problems perhaps; not sure I got that - it could also be a question of which filters are used?).

How to measure morphology? R.A. thinks that concentration is not the best way to do this; rather, use a more general statistic => the Gini statistic/coefficient. This coefficient seems to be more robust than the concentration parameter. He finds that at z~1 about 70% of the stellar mass should exist in early-type galaxies and that there is strong evolution in this fraction over z~1-2. Another conclusion is that the mass density evolution of early types + asymmetric early types is similar to the evolution of post-starbursts at z~3.
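Since the Gini coefficient may be unfamiliar in this context: it measures how unequally a galaxy's flux is distributed over its pixels, without assuming that the brightest pixels sit at the centre (which is what concentration implicitly does). A minimal sketch following the usual definition (R.A.'s exact implementation may differ):

import numpy as np

def gini(pixel_fluxes):
    """Gini coefficient of a galaxy's pixel fluxes:
    0 = flux spread evenly over all pixels, 1 = all flux in one pixel."""
    x = np.sort(np.abs(np.asarray(pixel_fluxes, dtype=float)))
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (x.mean() * n * (n - 1))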

C. Papovich: The star formation and assembly of high redshift galaxies

Observational constraints on massive galaxy formation: (i) when did the stars form? (ii) when did massive galaxies arrive at their current configuration? These are compared to theoretical predictions. Massive galaxies at high redshift are found in the red sequence of colour-magnitude diagrams (COMBO-17).

Luminosity density evolution in the red-sequence galaxies is more or less constant (mild evolution); this shows that passive evolution is not the complete explanation. Looking at the SFR for galaxies of different masses shows that massive galaxies come to dominate the total SF at high redshift (>~2).

Using Spitzer 24-micron observations, it is found that massive galaxies at high z (~1.5-3) are in an IR-active phase of evolution. Looking at the specific SFR, C.P. finds that at z~1.5-3 massive galaxies form stars as fast as or faster than the cosmic average. At z<1 the galaxies have already formed their stars, and the total SF is dominated by lower-mass galaxies. (Cf. talks by many others; this seems to be accepted.) Understanding the role of AGN is also important to explain these observations.

SFRs from X-ray, IR, submm and UV agree to within a factor of 2. This is good, because Spitzer 24 micron probes the rest-frame mid-IR at z~2 => this data can be used to get SFRs for these galaxies. This shows an increasing cosmic SFR out to z~2. These observations might not be in line with hierarchical models (De Lucia et al. 2006).

New blogger entering

My name is Jens Melinder and I'm a PhD student at Stockholm Observatory working on detections of supernovae at high redshifts. During the first week of the general assembly I've attended talks in the Galaxy evolution session (S235), in the Universe at redshifts>6 session (JD06) and the Supernova session (JD09), my blogs will be about my impressions of some of the talks and a short summary of the topic covered in them. Disclaimer: I make no claims to getting things right when writing the notes/summary, in many cases I might be totally wrong...

Friday's star formation

I missed some talks in the morning, but sleep is important, too. I came in time for the last three talks of S237 on "Triggered Star Formation in a Turbulent ISM" and all three were interesting.

C. Norman was talking about his theoretical work on disk simulations, and what struck me a little was that people have enough confidence in the simulations being a good approximation of the real world to do physics with "observations" of the simulated world and derive properties and laws from there. Of course, this is tempting, since one has complete control over the simulation and can get much better "data" than in the real world. There are tons of arguments and tests that reassure and convince people that this really works - and, indeed, why should a program that implements a well-understood physical process fail, except for (also well-understood) limitations like resolution and other approximations? But still, if one could observe it, one would not need simulated data; and since we cannot, there is also a lack of tests of the simulations against the real world.

Using the density distribution function of the ISM (log-normal except at very low densities) and a scaling relation for the critical density where SF sets in, Norman showed an alternative to the Schmidt law (also called the Kennicutt law) that has a shallower slope and flattens out at high gas surface densities. The observed slope then has to be understood through a change in SF efficiency, which offsets the models with respect to each other. Since the observed correlation is very tight, this would mean that the SFE also correlates tightly with gas density.

The next speaker, M. Krumholz, started from simple arguments to understand why SF is so inefficient, in the sense that the SFR would be 50 times higher (both in the Milky Way and in extreme cases like Arp220) if all the molecular clouds present were to collapse and form stars. I cannot reproduce the whole line of argument now, but he also used the density distribution (depending on mass and virial parameter) in the turbulent medium and integrated over the region above the critical value to get SFRs, which fell into the observed regime. This could be tested, if the census of molecular clouds in nearby galaxies stretched to smaller masses, by simply comparing the predicted SFR as derived from the molecular gas content with the observed one.
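Both this argument and Norman's rest on the same little calculation: integrate the mass of a log-normal density PDF above a critical density. A minimal sketch of that step (my notation; both speakers' actual models add more physics):

import numpy as np
from scipy.special import erfc

def mass_fraction_above(x_crit, sigma_s):
    """Fraction of gas mass at overdensity rho/rho_0 > x_crit for a
    log-normal density PDF of logarithmic width sigma_s. The
    volume-weighted mean of s = ln(rho/rho_0) is -sigma_s**2/2, so the
    mass-weighted s-distribution is Gaussian with mean +sigma_s**2/2."""
    s_crit = np.log(x_crit)
    return 0.5 * erfc((s_crit - 0.5 * sigma_s**2) / (np.sqrt(2.0) * sigma_s))

# sigma_s is often tied to the Mach number M via sigma_s^2 = ln(1 + b^2 M^2),
# with b ~ 0.5 (an assumption for illustration, not a number from the talks):
sigma_s = np.sqrt(np.log(1.0 + 0.25 * 10.0**2))
print(mass_fraction_above(1e3, sigma_s))  # a small fraction, i.e. inefficient SF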

The last talk and summary of this symposium was given by B. Elmegreen and he again stressed the importance of turbulence for SF, instead of the older picture of monolithic collapse of a molecular cloud.

In the afternoon, there will be a "poster session", which basically means that everyone who has a poster stands by it to answer questions from the people coming by. The problem is that, although not everyone has a poster, many do, and one has to strike a balance between standing by one's own poster and looking at the others and talking to their authors - which won't work if they are doing the same thing. Surprisingly, it always works out somehow anyway, and I am happy that they scheduled one for S235 as well, for which no poster session was planned initially.

Thursday, August 17, 2006

M. Hudson: "Downsizing" from the fossil record

This is the final talk of Symposium 235 on Galaxy Evolution, but there will be many more things going on tomorrow that are worth writing about.

Downsizing was mentioned frequently during the last days. In the survey presented here, red cluster galaxies without emission lines are used to measure the "fossil record" of galaxies, i.e. the old stars. The thousands of spectra are sorted by velocity dispersion (a measure of the total mass) and stacked to get high-quality average spectra with many spectral features, which can be analysed to get the ages of the stellar population.
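In spirit, the stacking step is simple; a heavily simplified sketch (the real pipeline also normalizes the spectra and matches spectral resolutions before averaging):

import numpy as np

def stack_by_sigma(spectra, sigmas, bin_edges):
    """Average rest-frame spectra in bins of velocity dispersion.
    spectra: (N, Npix) array, already de-redshifted and resampled onto a
    common wavelength grid; sigmas: (N,) velocity dispersions in km/s."""
    stacks = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (sigmas >= lo) & (sigmas < hi)
        stacks.append(spectra[sel].mean(axis=0))
    return np.array(stacks)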

They find that the smaller galaxies have younger ages, i.e. downsizing. The age spread is much larger at low masses than at the high-mass end. There was no morphological selection, but of course it is ellipticals that dominate the sample. The S0-type galaxies are slightly younger than Es with the same sigma, but this trend is weaker than the trend with sigma itself.

Comparisons with the total dynamical mass (also using SAURON data) leave little room for dark matter (25%).

The ages also correlate with environment, i.e. distance from the cluster centre (a 16% change). Again, this is not a strong trend. Metallicity does not show a trend, but alpha-enhancement does. The tilt of the distribution in a colour-magnitude diagram comes half from ages, half from metallicity.

By calculating backwards how the CMD of these galaxies would have looked at some earlier time, they can be compared to observed CMDs at some redshift.

O. Gnedin: The formation of dwarf galaxies and small-scale problems of lambda-CDM

The Lambda-CDM cosmological model works very well in predicting large-scale structure. It does, however, predict many more dwarf galaxies than are found. This is called the "missing satellite problem", and it is not just a few that are missing - there should be ten times more.

The solution may be twofold: first of all, only the more massive satellite halos may be able to retain enough gas to form stars and thereby become visible to us. In addition, subhalos evaporate due to tidal forces once they come close to "their" big galaxy. It has long been known from studies of the Local Group that different types of dwarfs live at different radii from the large galaxies, and there seems to be an evolutionary connection.

A new method of measuring the DM halo of the Milky Way uses hypervelocity stars that move at 500-1000 km/s with respect to us. They have probably been slingshotted by the black hole in the centre of our galaxy. By following and calculating the paths of these stars, the shape of the DM halo can be determined; according to CDM, it should be triaxial. This test will yield first results in a few years, and it is a good test of the predictions of lambda-cold-dark-matter cosmology, which is the widely accepted picture of our universe.

C. Conselice: Galaxy Interactions and Mergers at High Redshifts

When do galaxies merge? The merger fraction evolves slowly up to z=1.2, but at around z=2-3, 50% of all high-mass systems are mergers. The small ones again have only a slightly higher merging rate. So at z=1 most of the high-mass objects were in place.

Using the same methodology to find mergers on numerically simulated data (with C. Mihos), they derive absolute merger rates (per volume) and a sharp drop after z=1 is found for all masses. A typical massive elliptical galaxy (today) will have undergone 3-5 major mergers since z=3.

A significant fraction (maybe the majority) of SF at z<1 is produced by interactions and mergers. I think this last point is still debated and there have been contradictory results, e.g. showing that interactions do not really increase the SFR as much as one would think.

D. Elmegreen: Clumpy Galaxies in the Early Universe

By looking at the Hubble Ultra Deep Field (UDF), one can classify galaxies by how they look. The number of clumpy and irregular looking galaxies increases as one looks at further and further distances. Disk galaxies seem to disappear at a certain redshift and only thick disks are found.

Clumpy galaxies seem to be more frequent at high z, and it is basically the large star-forming regions that are seen there. These clumps should dissolve and could build up normal spirals. Indeed, the "clump clusters" share several properties with them, although they are less massive. The scale height of "clump chains" is found to be 1 kpc, which could be connected to the formation of a thick disk.

A usual problem here is that one looks with a fixed set of filters (optical in this case), but due to the redshift, one probes different rest-frame wavelengths in each galaxy. In this case, admittedly, one only sees the regions that actively form stars, and a much smoother underlying population of older stars would go unnoticed.
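The arithmetic behind this caveat is just the redshifting of the bandpass (the filter in the example is my own illustrative choice):

def rest_wavelength(lambda_obs_nm, z):
    """Rest-frame wavelength probed by an observed bandpass at redshift z."""
    return lambda_obs_nm / (1.0 + z)

# An optical i-band (~775 nm) image of a z = 3 galaxy samples ~194 nm,
# i.e. the rest-frame UV, which is dominated by young star-forming regions.
print(rest_wavelength(775.0, 3.0))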

R. Bouwens: Galaxy Buildup in the First 2 Gyr

UV luminosity functions at z=4, 5, 6 are presented; no evolution is found at the faint end, where the slope is steep (-1.75). However, at the bright end things get brighter with time. This is the opposite of the downsizing I wrote about yesterday.

Converting this to the Madau plot means that it peaks around z=4 and declines towards 5 and 6. This is heavily debated since the dust-correction at high z is fairly uncertain.

Going even further (z-J), they found 4 candidates of z~7-8 galaxies (Bouwens & Illingworth, Nature 2006).

T. Wiklind: Massive and old galaxies at z>5

If galaxies form hierarchically, i.e. big ones form by the merging of small ones, shouldn't big galaxies then appear rather late in the history of the universe? One could think so, but would be mistaken. Tommy reports on their finding from last year of a massive galaxy at z=6.5 that is red in colour and shows no ongoing star formation at all.

Is the presence of such an object that has finished forming all its stars at so early times a threat to the lambda-CDM cosmological model? This first of all depends on how many of these really exist. They look in the K-selected GOODS-south sample and find 18 candidates out of which 5 had to be discarded as being something else.

So they have 13 galaxies at z>5 with over 10^11 solar masses in stars and no ongoing star formation (although 50% are detected at 24 microns). Correcting this for completeness gives a rather high number density, which indeed opposes the lambda-CDM paradigm (too many compared to the DM halos existing at that time), unless the redshift or stellar-mass estimates are flawed.

M. Steinmetz: Cosmic Web - Simulations

The "Cosmic Web" is the structure that arises in cosmological simulations and that is also observed: the universe is clumpy and most of the matter is in huge filametary structures that are made of and connect galaxy clusters.

In simulations, dark matter is much easier to handle than normal matter, since it interacts only through gravity. With normal matter, one needs recipes for star formation, and hydrodynamics has to be taken into account.

Very important: he shows in simulations that a merger may actually look like a disk in kinematical data and he warns the people around Genzel who find "rotating disks" at high redshift. I have to find that movie/paper on the web.

Simulated disks nowadays, however, seem to fit nicely with observed ones when it comes to angular momentum. A comparison of a merger with and without AGN feedback is shown; the feedback helps in the sense that otherwise the simulated galaxies are too centrally concentrated compared to real ones.

To test if angular momentum is induced to disks by tidal torques from the cosmic web, it is possible to check the orientations of disks. Indeed there is a correlation between the large scale structure and the orientation of disk galaxies.

The Milky Way dark matter halo seems to rotate at around 100 km/s, as derived from halo star kinematics. But I might have gotten this part wrong. :-)

L. Portinari: Cosmological formation of disk galaxies and the Tully-Fisher relation

The Tully-Fisher relation (TFR) for disk galaxies relates a galaxy's absolute luminosity to its rotation speed. The angular momentum in simulations, however, is difficult to match to observed values, most probably due to the simplified treatment of baryons and thereby, again, of feedback.
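In its usual power-law form (standard notation, not specific to this talk):

\[ L \propto v_\mathrm{rot}^{\alpha}, \qquad \alpha \approx 3\text{-}4 \]

so the challenge for the simulations is to land on both the observed slope and the zero point while also getting the angular momentum right.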

The question of whether the disk forms from a cooling flow of hot gas (at the virial temperature) or by cold accretion is addressed, and X-ray observations can give important clues here. Birnboim & Dekel 2003 found less than 10% of the expected amount of hot gas, so cold accretion might be favoured: gas does not need to be heated to the virial temperature, and cold gas can accrete along filaments.

They find an offset in the TFR for certain models - it was hard to grasp which objects these were - but one proposed solution is dynamical friction, i.e. the galaxy rotates more slowly than it should for its luminosity.

Portinari et al. 2006 look at the evolution of the TFR from z=1 and find no significant mass evolution, while individual objects grow to almost twice their mass.

A. Shapley: Galaxy Formation in protoclusters at high redshift

Thousands of UV-selected galaxies at z>1.5 have spectroscopic confirmation from Keck; 25% contain AGN. From the clustering length (4 Mpc), the DM halo mass is derived to be roughly 10^11.5-12 solar masses, and these objects are presumably the progenitors of today's ellipticals (following halo evolution in simulations).

The highest-redshift X-ray-detected cluster is at z=1.45. The speaker and collaborators also find protoclusters at z>2 from the UV and measure overdensities (a factor of 7) in a redshift subslice. The galaxies there have twice the stellar mass of the ones outside the cluster. They find that the morphologies do not fall on the Hubble sequence, but I wonder whether they took into account that even normal galaxies look very different at different wavelengths, especially in the rest-frame UV, in which the HST images were taken, if I got that right.

M. Franx: Properties of galaxies at z=2-3

At z=2-3, many galaxies presumably build up the bulk of their stars, so one has to have control over sample properties and selection effects.

The authors and collaborators select galaxies in the rest-frame optical, which in this case means deep NIR imaging with the VLT (MUSYC survey). They place a mass limit at 10^11 solar masses. At this massive end, red galaxies already dominate the population (77%) at that time.

But these red galaxies are not "dead"; they still show significant dusty star formation. In the U-V vs. V-J diagram, a large part of the population lies below the local population.

The clustering correlation length correlates with J-K colour (Quadri et al. 2006), which means that redder galaxies are more clustered.

F. Walter: The first galaxies and AGN

Quasars are not found at very high redshifts. The record holder has been at z=6.4 for quite some time; at this redshift, the universe was about 870 Myr old.

A fun fact about redshifts is that you probe different rest-frame wavelengths in objects when you observe at a fixed wavelength. Because there is a large peak of dust emission in the mid- and far-infrared, this gets shifted into the mm range at high redshift, and so an object that is much further away may not appear fainter than a closer one at all.

Molecular emission has been detected out to redshifts over 6 as well; of course, only the highest concentrations can be detected at these large distances. One can get a handle on the ionisation state of the IGM via the proximity effect, which is presumably due to a large ionised sphere (formation time = 10^7 yr * neutral gas fraction) that lets emission a little blueward of the Lyman limit at the quasar's redshift escape.

S. Silich: Super-massive star clusters: from superwinds to cooling catastrophe to the injected gas reprocessing

How quickly does the gas that is heated by the SSC cool? The larger the cluster, the more radiative cooling (I did not understand why), and eventually there is a regime of "catastrophic cooling" where no equilibrium solution exists anymore.

Studies of M82, which blows a huge bipolar wind perpendicular to the disk (pretty picture), show that .... I missed it because I was fetching the link to the picture. :-)