Concerning the lack of salt industry in pre-European New Zealand and other tales from Polynesia and the region.
By: Eben van Tonder
8 July 2008
Salt was “critical to the development of all complex societies, making its production significant to research on ancient society.” (Flad, et al.; 2005) There is no question about it. The transition from hunter-gatherer to agriculture-based societies, population pressure, and movements away from the sea into the interior are all directly related to the extraction and mining of salt. The reasons lie in the utilitarian value of salt: what is it good for? Secondary reasons exist, such as its value as a form of currency that could easily be weighed, packed, transported and stored for indefinite periods. Flad, et al. point to “its physiological role in human and mammalian biology, its cultural significance, its role as dietary supplements, its function as meat preservative, its key function of facilitating trade either as a form of currency or as a key bartering commodity, thus facilitating population growth, and its importance as markers for ethnicity and class in culinary traditions.” (Flad, et al.; 2005) They correctly make the bold assertion that “no states are known to have developed without stable access to salt.” (Flad, et al.; 2005)
There are other functions of salt that many researchers miss due to a narrow focus on sodium chloride as salt. This focus is not warranted, because it is not how salt occurs in nature. Salt, as the combination of an acid and a base, exists in many forms. Some of the most famous from antiquity are sodium chloride, potassium, calcium or sodium nitrate, ammonium chloride, magnesium sulfate and sodium bicarbonate. It also occurs in combination with a wide variety of minerals and other chemical elements.
Unraveling the different salts and developing the ability to separate them is the story of the development of modern chemistry and modern technology itself. The technical underpinnings for a culture to advance in glassworks and in metals such as iron depended on its understanding of the different salts and of how to separate and refine them. More than that, it is an understanding of salt that ushered in the age of gunpowder and brought with it an enormous benefit to a nation’s military capability. Not only were salt-related advances in technology the key to a nation’s military power, but salt also became the basis of modern agriculture through the various nitrate salts, ammonium, and ammonia. Its value even reaches back to the start of animal husbandry; without it, this development would not have succeeded. It is therefore not an overstatement to say that no culture could ever achieve full independence or mastery over its own future without a thorough understanding of salt. Without it, there would have remained insurmountable obstacles to its ability to manipulate the forces of nature for the common good and for its own independence.
It is for this reason that I was completely taken by surprise when I discovered that salt is absent from Māori culture and customs. Here I first try to understand who the Polynesians who populated New Zealand were, and what we can learn about the knowledge of salt and salt extraction technology in the broader region. What did the Polynesian settlers of New Zealand know about salt, and since when did they know it? I discovered that the reason why chickens did not survive the initial colonisation of New Zealand by Polynesians is the same reason why they did not mine or refine salt, at least from salt water. How true this statement is will become clear soon.
The complete absence of salt sources
When considering salt in New Zealand, the first fact to understand is that there is no rock salt and there are no salt marshes on either of the two islands. There is no record of salt ever being extracted, refined or traded in pre-European New Zealand history. This seems remarkable in light of the sophistication of Māori culture. The fact that they were surrounded by sea means that they had easy access to a salt source right from the start. Let’s look at the general Polynesian culture and at others who influenced the region with their customs and technology.
Who are the Polynesians?
First, we need to define what area we are talking about when we refer to Polynesia. “Polynesia is … the islands found roughly in a triangle formed by Hawaii, Aotearoa-New Zealand and Easter Island (Rapa Nui).” (Matisoo-Smith, L. and Denny, M.;2010)
Now we can start looking at the neighborhood in which Polynesia is located. We begin by looking at human migration globally before we focus in on Polynesia and its neighborhood. Which were the original homelands of the people of Polynesia that would have impacted on their culture and technology?
Out of Africa
Let us remind ourselves of the current thinking of human migration through the ages to put the Polynesian migration into context. Many of my friends will take issue with the model presented below, but it will at least open the discussion.
Current data seems to indicate “a migration of anatomically modern humans out of Africa around 150,000 – 100,000 BP (Years Before Present), moving east towards Asia and north into Europe. Part of this migration reached South-East Asia by 60,000 BP. Populations of these stone-age hunter-gatherers then expanded from Southeast Asia into the Pacific through New Guinea to Australia and the Bismarck Archipelago by about 45,000 BP. Once in Southeast Asia and Australia, the movement of humans into new areas stopped for nearly 30,000 years. A later wave of expansion out into the rest of the Pacific began around 3,500 BP. In this migration, the people went east to Samoa and Tonga and from there north to Hawaii, further east to Easter Island and south to New Zealand. This was the last major human migration event.” (Matisoo-Smith, L. and Denny, M.;2010)
New look at likely migration patterns into Polynesia
Where did the Polynesians come from genetically, and what cultural influences did they have? How dynamic was the interaction between the different Polynesian communities? This will give us an indication of the dynamics of cross-cultural exchange. These are important questions, since answering them will allow us to home in on the right culture at the right time, in an attempt to verify the assumption that salt and its production methods were thoroughly entrenched in Polynesian culture, at least by the time New Zealand was colonised.
Cultural and linguistic analysis identified the Polynesians as having originated from Taiwan around 4,000 years ago. Recent studies, relying on the more reliable genetic code of the current occupants of these lands as well as that of Polynesian rats, dogs, and chickens, contradict this theory.
Two studies are of interest to us. The first is work (see Note 1) conducted by Lisa Matisoo-Smith, Professor of Biological Anthropology at the University of Otago and Principal Investigator in the Allan Wilson Centre. Her research focuses on identifying the origins of Pacific peoples and the plants and animals that traveled with them, in order to better understand the settlement, history, and prehistory of the Pacific and New Zealand. Her research utilises both ancient and modern DNA methods to answer a range of anthropological questions regarding population histories, dispersals, and interactions. I rely on her published lecture notes.
“Her work led her and her coworkers to suggest a new model for Polynesian origins, based on an existing framework for Lapita origins suggested by Roger Green in 1991. The first human settlers of Remote Oceania are associated with the Lapita culture, which first appeared in the Bismarck Archipelago in Near Oceania around 3500 BP. (An archipelago is a chain or cluster of islands formed from volcanic activity).”(Matisoo-Smith, L. and Denny, M.;2010)
“The Lapita culture is named after the distinctive patterned pottery, which was first found at a site called Lapita in New Caledonia. Anthropologists are very interested in who the Lapita people were and what role they played in the settlement of the Pacific.” (Matisoo-Smith, L. and Denny, M.;2010)
“Remnants of Lapita pottery are now found throughout many areas of Remote Oceania, which suggests that the Lapita people were the first to settle this area. The age of the pottery remains found in each area supports the idea that this settlement spread from west to east from Melanesia into Polynesia.” (Matisoo-Smith, L. and Denny, M.;2010)
“Evidence such as this suggests that the Lapita people are the ancestors of modern Pacific peoples, but questions remain about whether there could also have been contributions from other populations from Asia and Micronesia at later times.” (Matisoo-Smith, L. and Denny, M.;2010)
Here are the key ideas of the new model for Polynesian origins developed by Lisa and her colleagues, based on an existing framework for Lapita origins suggested by Roger Green in 1991:
1. The Lapita colonists in West Polynesia and the rest of Remote Oceania look very much like the current indigenous populations of Vanuatu, New Caledonia, and western Fiji.
2. Around 1500 BP a new population arrived in Western Polynesia with new and more typically Asian derived physical characteristics, and mtDNA lineages.
3. These new people also introduced new mtDNA lineages of commensal rats, dogs, and chickens.
4. There were intense and complex interactions with the existing Lapita-descended populations as they spread over West Polynesia.
5. This resulted in the formation of the Ancestral Polynesian culture, who then dispersed east, and north into the rest of Polynesia.
This possible scenario is shown in the figure below. The grey arrows show the initial Lapita expansion through Near Oceania and into Remote Oceania. The dotted arrows show the proposed arrival of a new population (or populations) from Asia into West Polynesia. The black arrows show the settlement of East Polynesia and a back migration into Melanesia.
Secondly, I looked at a 2011 study by Soares, et al. (see Note 2), which proposes an East Indonesian origin for the Polynesian migration. Their research focuses on what they call the ‘‘Polynesian motif’’. The motif comprises a clade of mtDNA lineages that together account for >90% of Polynesian mtDNAs. Soares, et al. state that “for the last 15 years, it has been recognized that the age and distribution of this clade are key to resolving the issue of the peopling of Polynesia.”
They explain that “by analyzing 157 complete mtDNA genomes, they show that the motif itself most likely originated more than 6000 years ago (>6 ka) in the vicinity of the Bismarck Archipelago, [off the northeastern coast of New Guinea] and its immediate ancestor is older than 8000 years (>8 ka) and virtually restricted to Near Oceania (includes New Guinea, the Bismarck Archipelago, Bougainville, and the Solomon Islands). This indicates that Polynesian maternal lineages from Island Southeast Asia (Philippines, Indonesia, and Malaysian Borneo) gained a foothold in Near Oceania much earlier than dispersal from either Taiwan or Indonesia between 3000 and 4000 years ago (3–4 ka) would predict.”
Their work shows that there was a spread back through New Guinea into ISEA, which most likely took place approximately between 4000 and 5000 years ago (~4–5 ka). A more plausible backdrop of the settlement of the Remote Pacific is a model based on the idea of a ‘‘voyaging corridor,’’ facilitating exchange between ISEA and Near Oceania (see map above).
The work further suggests “a convergence of archaeological and genetic evidence, as well as concordance between different lines of genetic evidence.” The authors state that their “results imply an early to mid-Holocene Near Oceanic ancestry for the Polynesian peoples, likely fertilized by small numbers of socially dominant Austronesian-speaking voyagers from ISEA in the Lapita formative period, approximately 3500 years ago (~3.5 ka)”. They claim that their “work can therefore also pave the way for new accounts of the spread of Austronesian languages.”
Now that we have a clearer understanding of the likely migration patterns and regional cultures that influenced the New Zealand islands, we can look at salt production in the immediate sphere of cultural influence on New Zealand. Our first subject is China, and this is not an unwarranted starting place. Note the conclusion summarised in the 2010 lecture notes of Matisoo-Smith, L., and Denny, M. that around 1500 BP a new population arrived in Western Polynesia with new and more typically Asian-derived physical characteristics and mtDNA lineages. A strong link with China exists not only genetically, but also culturally. China, probably the most advanced society of antiquity, would have wielded an unavoidable influence on the region from the very start.
Salt production in China
During the imperial era of China, from the third century B.C.E. to the early twentieth century A.D., the salt and iron monopolies often provided the bulk of state revenue. “Historical accounts even suggest that inland salt sources may have played an important role in the unification of China by Qin in 221 B.C.E.” (Flad, et al.; 2005)
Flad, et al. (2005) demonstrated, using the latest research technology, that “salt production was the most significant activity at Zhongba during the first millennium B.C.E.” (Flad, et al.; 2005) Zhongba is located in Zhong Xian County, Chongqing Municipality, approximately 200 km down-river along the Yangzi from Chongqing City in central China.
They furthermore conclude that “the homogeneity of the ceramic assemblage during Phases I and II suggests that salt production may already have been significant in this area throughout the second millennium B.C. The Zhongba data represent the oldest confirmed example of pottery-based salt production yet found in China. The first millennium B.C. dates alone confirm that salt production was established long before the Qin expansion into Sichuan in 316 B.C.” (Flad, et al.; 2005)
“In southern China, salt from Zhongba was a vital component in the complex process of state formation. For example, the specialized production of surpluses of salt, and possibly salted products, and the trade of these commodities to regions outside the Three Gorges (three adjacent gorges along the middle reaches of the Yangtze River, in the hinterland of the People’s Republic of China) stimulated contacts between the upper and middle reaches of the Yangzi River. As coastal and inland lake-salt sources provided this crucial resource to emerging states in the Central Plains and Eastern China during their formative periods in the late second and early first millennium B.C., so, too, did the salt sources in the Sichuan Basin provide this dietary supplement, preserving agent, and industrial component to the emerging polities in the south. Although the Three Gorges remained a relatively peripheral area into the first millennium B.C., the establishment of trade networks based in large part on the exchange of surplus salt brought some elite practices into the region and stimulated the emergence of social differentiation in the area as elites in nearby polities such as Chu engaged in gift-giving and related practices in attempts to create ever-larger networks of political influence. At the same time, salt from the Three Gorges facilitated the development of more complex economic systems in these same nearby polities by providing a resource that was unavailable elsewhere in the middle reaches of the Yangzi River drainage. Eventually, salt became crucial to the provisioning of armies by expansive states such as Qin and Chu, polities that controlled areas adjacent to the Three Gorges region, and the existing networks of salt exchange became catalysts to the incorporation of this area into a unified Chinese empire.” (Flad, et al.; 2005)
So, if we push the history of large-scale salt production in China back to the second millennium B.C.E., and we recognise the key importance of its trade in the region into the 20th century A.D., a picture emerges whereby salt was very likely traded into Polynesia, probably from some time after the 2nd millennium B.C.E., but certainly in the Christian Era. This is important to us, since it seems that New Zealand was colonised by Polynesians around 700 years ago. It is very unlikely that salt did not, by this time, play a hugely important role in Polynesian culture, and the first colonists of New Zealand would have been familiar with both salt itself and ancient production methods.
Salt production in Fiji
Solar-evaporation salt-works have been located on the Sigatoka Sand Dunes on the island of Viti Levu, Fiji. Here seawater was used “on large flanged clay dishes. This short-lived industry of the seventh century AD disappeared beneath the dunes, but its documented nineteenth- and twentieth-century successors offer it many useful analogies: the salt, now extracted by boiling brine, was supplied to inland communities upriver, where it functioned as a prime commodity for prestige and trade and an agent of social change.” (Salt Production at a Post-Lapita Village, reporting on Burley, D. V.; 2011)
“The solar production site is dated to between 2100-900 years ago (BP), with cultural characteristics thought to have been influenced by contact with Vanuatu and New Caledonia.” (Salt Production at a Post-Lapita Village reporting on Burley, D. V.; 2011)
This is very significant, since it utilises seawater to produce salt. The late date is of great interest: it falls just before the Polynesian colonisation of New Zealand around 1300 A.D.
Williams, T. (1858) reports on salt from inland sources in Fiji. I assume these were very old sites. He also mentions salt regularly as an item of trade, which leads me to speculate that salt had been part of Fijian society for a long time by 1858, when he wrote. Salt was, without doubt, part and parcel of Fijian culture by the time New Zealand was colonised by Polynesians.
Use of salt on Samoa
In Samoa, my attention is drawn, not to salt production but to an ancient reference to salt from one of their legends. In a variety of the legend of Sina and the eel, Sina’s mother went down to the sea to draw salt water for cooking. This is, in my opinion, probably one of the oldest forms of the use of salt and one that I am sure must have been known by all coastal dwellers. From such natural liquid brines, I suspect, salt as a condiment and salt for preservation developed. (Andersen, J. C.; 1928: 251)
Salt in New Guinea
One of the areas in the region with the richest history of salt is undoubtedly New Guinea. A method used there is burning salted plants and collecting the salt grains from the ashes and charcoal. Here, in “Papua (western part of New Guinea, Indonesia), the Western Dani conduct expeditions and live in temporary habitations built near (salt) springs, where they work to produce large and hard salt cakes. After an agreement with the landowners (the Moni), who furnish the necessary food in exchange for shells, fineries, pigs or axes, the men look in the forest for the necessary raw material: young stems of porous edible plants (Elastostema macrophylla Brogn. from the Urticaceae family) and trunks of peculiar trees which produce scant ashes and large charcoal after burning. After cleaning the spring pool and reinforcing the dam to prevent the inflow of fresh water from the nearby river, the plants are soaked for more than a day and a night. While the plants are soaking in salty water, the men go and collect vegetal material (leaves, bark, and rattan) to pack the salt, and clean the flat terrace in front of the houses in order to install the woodpile where the salted plants will be burnt.
The plants are taken out of the pool and put together near the woodpile during the following night, after the night rains. The slow and controlled combustion of the plants lasts for seven hours. The flames are blown out with brine. In the early morning, over long hours, the men carefully sort out from amongst the ashes and charcoal the little salt concentrations formed in the hollows of the plants. Collected in a great wooden plate, these concentrations are piled and riddled through a portage net, and the charcoal is rejected down the terrace.
The salt-and-ash powder is placed on long pandanus leaves in a rectangular frame bounded by thin little boards held vertically with little pegs. Mixed with brine, the paste is compressed and packed down in the mould before the leaves are folded. The salt cakes are then carefully drained and dried for more than a week above the fireplace, until they become a hard and compact “stone salt”, resistant to dampness and long-distance transport.”
The obtained salt is a light-gray product, rich in sodium chloride and having very few impurities. New Guinea had a sophisticated form of salt extraction.
Salt in Vanuatu
Vanuatu is a South Pacific Ocean nation made up of roughly 80 islands. “Archaeological evidence supports the theory that people speaking Austronesian languages first came to the islands about 3,300 years ago”. (Bedford, et al, 2008) “Pottery fragments have been found dating to 1300–1100 BC.” Here, the Sago Palm (Metroxylon) has been used as a source of salt from antiquity.
Jean-Michel Dupuyoo (2007) writes that “Sago palms of the genus Metroxylon are a potential source of salt; or more accurately, vegetable salt. Certain parts of the plant, mainly leaves and petioles, produce ashes rich in salt, which is separated from the ash with water. This saline solution is then used both for seasoning food and the preparation of sauces. Some traditional societies in the center of Espiritu Santo still use these ashes.” “Some other species, such as banana trees (Musa spp.) and tree ferns (Cyathea spp.) are also used in the extraction of vegetable salt.” (Dupuyoo, 2007)
Dupuyoo then makes a startling revelation. He writes, “according to my correspondents, this practice was at one time their only method of obtaining salt, as access to the sea was often forbidden in times of local warfare.” (Dupuyoo, 2007) This correlates with the practice, alluded to in the Samoan legend of Sina, of boiling food in seawater to obtain the salt. It is something I have long suspected to be the first addition of salt to food and the origin of the discovery of its preserving power, since meat was often stored in water in ancient times as one of the earliest forms of preservation.
This raises an interesting observation. From my studies of the diets of ancient peoples of southern Africa, I discovered that a vegetarian diet based on the vegetation of that region will not supply the necessary daily sodium requirement. This, however, applies only to certain plants, nuts, berries, and fruits: some are high in sodium, and where these are endemic it would be possible to maintain an adequate sodium intake without the consumption of any meat or milk.
Healthline reports that “we should aim for less than 1500 mg of sodium per day, and definitely not more than 2300 mg. Keep in mind that salt contains both sodium and chloride. Only 40% of the weight of salt consists of sodium, so you can actually eat 2.5 times more salt than sodium. 1500 mg of sodium amounts to 0.75 teaspoons or 3.75 grams of salt per day, while 2300 mg amounts to one teaspoon or 6 grams of salt per day.”
“According to the U.S. Department of Agriculture, a mammy apple contains the most sodium per serving. One of these large round tropical fruits contains 127 milligrams of sodium.” We would need to consume about twelve of these apples per day to take in sufficient sodium. “Guavas and passion fruit are the only other fruits in the raw form that contain 50 milligrams of sodium or more per serving.” This means we would have to eat at least 30 servings per day to get enough sodium.
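As a rough cross-check of the figures above, the sodium-to-salt conversion and the fruit-serving counts can be computed directly. This is a minimal sketch of the arithmetic using only the numbers quoted in the text (salt is ~40% sodium by weight, i.e. 2.5 g of salt per gram of sodium; 127 mg of sodium per mammy apple; 50 mg per guava or passion fruit serving); the function names are my own.

```python
import math

# Salt is ~40% sodium by weight, so one can eat 2.5 times
# more salt than sodium (per the Healthline figures quoted above).
SALT_PER_SODIUM = 2.5

def sodium_to_salt_grams(sodium_mg: float) -> float:
    """Convert a daily sodium amount (mg) to the equivalent mass of salt (g)."""
    return sodium_mg * SALT_PER_SODIUM / 1000

def servings_needed(target_sodium_mg: float, sodium_per_serving_mg: float) -> int:
    """Whole servings required to reach a sodium target, rounded up."""
    return math.ceil(target_sodium_mg / sodium_per_serving_mg)

print(sodium_to_salt_grams(1500))  # 3.75 g of salt (≈ 0.75 teaspoons)
print(sodium_to_salt_grams(2300))  # 5.75 g of salt (≈ 1 teaspoon)
print(servings_needed(1500, 127))  # 12 mammy apples per day
print(servings_needed(1500, 50))   # 30 guava/passion fruit servings per day
```

Note that rounding up gives twelve mammy apples rather than eleven: 11 × 127 mg = 1,397 mg, just short of the 1,500 mg target.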
One of the leading anthropologists on this region is undoubtedly Joël Bonnemaison, whose work stretched from 1960 until his untimely death in 1997. In his work, he covered the archipelago and the regional groupings and identities of Maewo, Ambae and Pentecost in the north, of central Vanuatu, and especially, in his classic study, of Tanna society. He lists salt and fish as two of the commodities traded between coastal and inland communities before European contact. (Haberkorn, G., 1992, quoting Bonnemaison)
Salt was undoubtedly part of popular culture in Vanuatu presumably long before the August 1774 contact with Europeans.
Salt in Taiwan
Like New Zealand, Taiwan has no rock salt deposits. Yet a stela inscribed by a Yuan Dynasty (1271 – 1368) official, bearing instructions for the construction of salt fields in Wuzhou, Kinmen, was found on Taiwu Mountain in Kinmen. (atc.archives.gov.tw/salt)
There are records of Europeans, Chinese and Japanese coming to Taiwan as early as the second half of the 16th century, either to transfer commodities to third countries or to trade with the Taiwanese aborigines, exchanging agate, cloth, salt, copper, etc. for buckskin (Nakayama, 1959, 24-25). The method of production was presumably based on boiling sea water until only salt is left. This assumption follows from the fact that “in the mid-seventeenth century, Cheng Cheng-kung, or Koxinga as he is commonly known, retreated to Taiwan after the fall of the Ming dynasty (around 1644). Chen Yung-hua, one of his generals, disliked the taste of decocted salt, which is produced by boiling sea water until nothing is left but a salt residue. Instead, he preferred salt produced using the solar evaporation method. In 1665 he had salt pans constructed at today’s Laikou, located in Tainan County in southern Taiwan.” (Taiwan Today, 1991)
“During the Ching dynasty, six more saltworks were developed. When Liu Ming-chuan was governor of Taiwan in the late nineteenth century, he also served as salt supervisor for the province and established a government salt bureau in Taipei, with a branch in Tainan. Despite his bureaucratic innovations, the island was only able to produce 25,000 tons of salt annually, not enough for local consumption, so additional amounts were imported from the mainland.” (Taiwan Today, 1991)
In the 17th century, there are references in the literature to Taiwan bartering goods like sulfur, deer hides, and gold for salt, fabrics, and iron with the outside world. (Huang, Fu–san; 2005) This is consistent with the fact that local salt production was too low to supply the local demand.
Conclusions about the Lack of Salt Production in New Zealand
Implications to the NZ question from lessons learned in Fiji:
Important conclusions can be drawn from an analysis of the salt production site in Fiji. Burley, et al. (2010) drew parallels between population growth and the establishment of the salt processing site there. A large population requires such specialisation, and large-scale salt production is a logical step accompanying large population growth. Unlike in Fiji, it is unlikely that the Māori population ever reached the numbers that would necessitate salt production. Seafood was sufficiently available in New Zealand for the mostly coastal-dwelling Māori. The abundant seafood and other meat would have been such a good source of salt that nothing else was required. Examples from Africa show that people who relied on meat had no need for salt and in reality showed little interest in its production. Salt production, then, seems to be a function of access to meat, relative proximity to the coast, and population size.
Burley et al. (2010) also report on a well-established trade network in Fiji. They write that the “Sigatoka River continued to provide a principal corridor into interior highland communities, and is later well-documented as a route for coastal/interior trade. Historical references by Tonganivalu (1917: 9) and Williams (1858: 94) specifically highlight salt as a component of this exchange, leading Tanner (1996: 234) to claim salt as a resource both prized and essential.” I still have to investigate, but I doubt that such well-established trade corridors ever existed in New Zealand, given the young age of its population.
Implications to the NZ question from lessons learned in Samoa:
The mention of cooking food in seawater in the legend of Sina and the eel in Samoa leads me to suspect that this was practiced by all ocean-dwelling communities, including the Māori. Why would it not have been, if it was known and practiced since antiquity? Its mention in the Samoan legend strengthened my suspicion of the ancient nature of this practice, and of the likelihood that the Māori were well aware of salt and its properties and chose not to mine salt from the ocean, rather than being ignorant of salinity or of the technology for extracting salt.
Implications to the NZ question from lessons learned in Vanuatu:
Like in Samoa, food was boiled in salt water.
Implications to the NZ question from lessons learned in Taiwan
The boiling off of salt crystals would be associated with the formation of inland communities, and seems to have been a progression of the “boil meat in seawater” practice. Since no large inland Māori community ever developed, the need never arose to produce salt that could be traded to such communities.
The preeminence of China in shaping salt extraction technology cannot be doubted. Salt production by boiling seawater must be ancient in Taiwan, in the rest of the region and, indeed, around the world. The references to it in Polynesia and Asia suggest a progression from the boiling of food in seawater. From our look at ancient fermentation and meat storage before fire and cooking became part of food preparation, sea water (any salt water, for that matter) would have been particularly effective for long-term meat storage. As fire became widely used for meal preparation, it would have been natural to boil the meat in the very liquid (sea or salt water) in which it had been stored so successfully for millennia. (How did Ancient Humans Preserve Food?)
It would have added to the food’s taste, though the primary reason was probably preservation. The data from Fiji shows that the development of salt extraction technology is a key function of population size, even for communities located by the sea. Inland communities had the added problem of their distance from seawater.
It would be my guess that migrants from Taiwan spread their technology throughout the lands of Polynesia. The boiling off of water to leave only the salt crystals would be associated with an increased population, even by the sea, and with the formation of inland communities. Every evaluation of salt on the islands we considered supports this. China would undoubtedly have been a key driver in progressing salt extraction technology in the region, with Papua New Guinea playing a large role with its own unique approach. Solar evaporation of seawater, extracting salt through plant material, and burning plants naturally high in salt are a few of the developments from the region, which all presumably have their roots in the practice of simply boiling seawater; in turn, this was probably a progression of the practice of cooking food in seawater; which, in turn, had its roots in storing meat in saline solutions; which had its roots in simply immersing carcasses in bodies of water for storage. At this point, we are clearly at the very early age of the existence of anatomically modern humans.
In a discussion with a curator from the Canterbury Museum about salt production and trade being absent from ancient New Zealand history, he drew my attention to the interesting practice of the Māori of slow-boiling large quantities of shellfish. Had they not done so, it would not have been possible to consume large quantities at a time, and there is evidence that they did, in fact, consume large quantities at a time. This supports the notion that they knew about salt. They probably knew at least some of the techniques for extracting it, but the local population never grew to the point where this was necessary. They definitely knew to remove some of the salt from shellfish before consuming it. (They have a word for salt, which shows that they definitely knew its taste.)
Here the reference to the chickens becomes relevant. The Polynesians who populated New Zealand brought with them the Polynesian rat (hiding in the canoes), the Polynesian dog, and chickens. Chickens did not survive very long, and the reason is simple. Nine species of large, flightless birds known as moa (Dinornithiformes) thrived in New Zealand when humans arrived, some weighing over 200 kg. Why bother with chickens if you have these kinds of ready meals available all around you? The moa went extinct around 600 years ago, coinciding with the arrival of the first humans.
The same logic would have applied to the Māori need to extract and trade salt. I doubt that they did not know at least some of the techniques for extracting salt from seawater, but why do it if there was no reason to? Salt production is absent from pre-European times not due to a lack of knowledge, but probably due to a lack of environmental pressure to engage in it.
I must still investigate the use of salt in Hawaii, Australia, India and the rest of Polynesia to complete this work. The full spectrum of extraction techniques used in China and Japan must be understood and listed, and the likely mechanisms of influence in New Guinea must be analysed. I do, however, believe we have enough here to begin drawing firm conclusions.
Implications for the origin of nitrite/nitrate curing
This study of salt also brings me back to my work on nitrite/nitrate curing, which has been a major focus for me over many years. While people living in desert areas would have discovered that certain salts have the ability to change the colour of meat from brown back to pinkish/reddish, along with increased preserving power and a slightly distinct taste, coastal dwellers would certainly have observed the same: they would have noticed that sea salt or bay salt has the same ability.
Dr. Francois Mellett, a renowned South African food scientist, sent me the following very interesting theory about the earliest discovery of the curing process in a private communication on the matter. He wrote, “I have a theory that curing started even earlier by early seafarers: when a protein is placed in seawater, the surface amino acids are de-aminated to form nitrite for a period of 4 to 6 weeks. Nitrite is then converted to nitrate over the next 4 weeks. Finally, ammonia and ammoniac are formed from nitrate. It is possible that they preserved meat in seawater barrels and that the whole process of curing was discovered accidentally.”
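Purely as an illustration of the timeline Mellett describes, his hypothesized sequence can be sketched as a simple staged lookup. The stage names and week boundaries below are my reading of his quoted theory (nitrite forming over weeks 4 to 6, conversion to nitrate over the next four weeks or so, then breakdown to ammonia); they encode a hypothesis, not measured data.

```python
# Illustrative sketch of Dr. Mellett's hypothesized curing sequence for
# protein immersed in seawater. The boundaries encode his quoted theory,
# not measurements.
def dominant_species(weeks: float) -> str:
    """Return the nitrogen species hypothesized to dominate after `weeks` in seawater."""
    if weeks < 4:
        return "deaminating amino acids"  # surface de-amination under way
    elif weeks < 6:
        return "nitrite"                  # formed over weeks 4-6
    elif weeks < 10:
        return "nitrate"                  # nitrite converted over the next ~4 weeks
    else:
        return "ammonia"                  # final breakdown products

for w in (2, 5, 8, 12):
    print(f"week {w:>2}: {dominant_species(w)}")
```

The point of the sketch is simply that, on this theory, meat left in seawater long enough would pass through a nitrite stage, the species central to curing, without anyone intending it.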
I think he is on the right track. I suspect that people discovered this long before barrels were invented. The use of seawater for meat storage and further preparation was so widespread that it would have been impossible not to notice meat curing taking place. If it is generally true that the earliest humans first settled in coastal locations before migrating inland, this could push the discovery of curing back many thousands of years earlier than we ever imagined, to the time when modern humans started spreading around the globe. When it developed into an art or a trade is another question altogether, but I think we can safely push the time when it was first noticed back to the earliest cognitive and cultured humans, people we would have recognized as thinking “like us” if we could travel back in time and meet them. As for when it was noticed in different regions, we can safely put that at the time those regions were populated. The story of salt and meat curing is truly as old as cognitive and cultured humanity itself.
The journey remains fascinating!
1. Extracts from Matisoo-Smith, L. and Denny, M. (2010) lecture notes.
Likely migration patterns into Polynesia
“When looking at human settlement of the Pacific, anthropologists divide the Pacific into two regions namely Near Oceania, which was settled by humans by 30,000 BP and remote Oceania, which was not settled until around 3000 BP.” (Matisoo-Smith, L. and Denny, M.;2010)
“The first human settlers of Remote Oceania are associated with the Lapita culture, which first appeared in the Bismarck Archipelago in Near Oceania around 3500 BP. (An archipelago is a chain or cluster of islands formed from volcanic activity).”(Matisoo-Smith, L. and Denny, M.;2010)
“The Lapita culture is named after the distinctive patterned pottery, which was first found at a site called Lapita in New Caledonia. Anthropologists are very interested in who the Lapita people were and what role they played in the settlement of the Pacific.” (Matisoo-Smith, L. and Denny, M.;2010)
“Remnants of Lapita pottery are now found throughout many areas of Remote Oceania, which suggests that the Lapita people were the first to settle this area. The age of the pottery remains found in each area supports the idea that this settlement spread from west to east from Melanesia into Polynesia.” (Matisoo-Smith, L. and Denny, M.;2010)
“Evidence such as this suggests that the Lapita people are the ancestors of modern Pacific peoples, but questions remain about whether there could also have been contributions from other populations from Asia and Micronesia at later times.” (Matisoo-Smith, L. and Denny, M.;2010)
The first study of Matisoo-Smith and Denny (2010) “looked at the variation in the mitochondrial DNA (mtDNA) of living populations of Pacific rats from islands around the Pacific. mtDNA is inherited only from the mother, therefore there is no mixing with the father’s DNA or recombination during meiosis. This means that differences in the mtDNA due to mutation can be traced back through the generations. Scientists use the variation in the mtDNA to work out the relationships between different populations.” (Matisoo-Smith, L. and Denny, M.;2010)
“The results of this study suggested that it is highly likely that there were multiple introductions of the Pacific rat to the Pacific Islands. This raised the question, “did these introductions all occur at the same time or at different times?” If they were at different times then this suggests that another group of people migrated into the Pacific sometime after the Lapita people.” (Matisoo-Smith, L. and Denny, M.;2010)
“This question cannot be answered by studying modern mtDNA, as variation in modern mtDNA only shows different origins; it doesn’t show the timing. Ancient DNA, however, could be used to answer this question. Ancient DNA is any DNA extracted from tissues such as bone that are not fresh or preserved for DNA extraction later. When an organism dies, the DNA molecules immediately start to break down, which makes it difficult to extract good quality DNA for analysis. The hot and wet environment found in most of the Pacific makes it just about the worst area for DNA preservation. Despite this, Lisa and other Allan Wilson Centre researchers have been able to obtain DNA from Pacific samples as old as 3000–4000 years.” (Matisoo-Smith, L. and Denny, M.;2010)
“If the age of the remains is known then the likely date of the introduction of new genetic material can be estimated. The team next investigated ancient DNA from the remains of Kiore (Pacific rat) found in different archaeological sites around the Pacific, looking for patterns in the haplotypes in mtDNA. A haplotype is a combination of alleles that are located closely together.” (Matisoo-Smith, L. and Denny, M.;2010)
Lisa found three distinct groups of haplotypes, shown as Groups I, II and III in Figure 7.
“Three clearly different haplotypes (or genetic groups) is an indication that these populations of rats are likely to have quite different ancestral origins. Group III does not fit the expected pattern. It shows no genetic link with the haplotypes found in Near Oceania. This suggests that this haplotype may be the result of a later introduction of the Pacific Rat into Polynesia sometime after the Lapita introduction.” (Matisoo-Smith, L. and Denny, M.;2010)
“To test this hypothesis Lisa and her team carried out similar studies of variation in both modern and ancient mtDNA in pigs and chickens. In both of these animals the results showed there are introductions that are consistent in geographic distribution and time of appearance in the archaeological record with a Lapita introduction. But other mtDNA studies on dogs of the Pacific, plus the rat and chicken data all indicate a second introduction. This suggests a second population migration out of Asia sometime after 2000 BP.” (Matisoo-Smith, L. and Denny, M.;2010)
“These results have led Lisa and her colleagues to suggest a new model for Polynesian origins. It is based on an existing framework for Lapita origins suggested by Roger Green in 1991. Here are the key ideas:
1. The Lapita colonists in West Polynesia and the rest of Remote Oceania look very much like the current indigenous populations of Vanuatu, New Caledonia and western Fiji.
2. Around 1500 BP a new population arrived in Western Polynesia with new and more typically Asian-derived physical characteristics and mtDNA lineages.
3. These new people also introduced new mtDNA lineages of commensal rats, dogs and chickens.
4. There were intense and complex interactions with the existing Lapita-descended populations as they spread over West Polynesia.
5. This resulted in the formation of the Ancestral Polynesian culture, which then dispersed east and north into the rest of Polynesia.” (Matisoo-Smith, L. and Denny, M.;2010)
This possible scenario is shown in the figure below. The grey arrows show the initial Lapita expansion through Near Oceania and into Remote Oceania. The dotted arrows show the proposed arrival of a new population (or populations) from Asia into West Polynesia. The black arrows show the settlement of East Polynesia and a back migration into Melanesia.
2. Extracts from a 2011 study by Soares, et al., proposing an East Indonesian origin for Polynesians and discounting the “Out of Taiwan” model
A 2011 study by Soares, et al., proposes an East Indonesian origin. They describe a “Polynesian motif”: the motif and its descendants comprise a clade of mtDNA lineages that together account for >90% of Polynesian mtDNAs. Soares, et al. state that “for the last 15 years, it has been recognized that the age and distribution of this clade are key to resolving the issue of the peopling of Polynesia.”
They explain that “by analyzing 157 complete mtDNA genomes, they show that the motif itself most likely originated more than 6000 years ago (>6 ka) in the vicinity of the Bismarck Archipelago [off the northeastern coast of New Guinea], and its immediate ancestor is older than 8000 years (>8 ka) and virtually restricted to Near Oceania (which includes New Guinea, the Bismarck Archipelago, Bougainville, and the Solomon Islands). This indicates that Polynesian maternal lineages from Island Southeast Asia (Philippines, Indonesia, and Malaysian Borneo) gained a foothold in Near Oceania much earlier than dispersal from either Taiwan or Indonesia between 3000 and 4000 years ago (3–4 ka) would predict.”
Their work shows that there was a spread back through New Guinea into ISEA, which most likely took place approximately between 4000 and 5000 years ago (~4–5 ka). A more plausible backdrop of the settlement of the Remote Pacific is a model based on the idea of a ‘‘voyaging corridor,’’ facilitating exchange between ISEA and Near Oceania (see map above).
How did the cultural markers and the linguistic similarities between these regions and Taiwan develop? Soares, et al. suggest that there is evidence of further small-scale bidirectional movements across this region, when Austronesian-speaking voyagers integrated with coastal-dwelling groups in the Bismarcks, perhaps stimulating the rise and spread of the Lapita culture and the dispersal of the Oceanic languages. “Other lineages from Southeast Asia are also found at low frequencies in Near Oceania, and still others are candidates for dispersal from Taiwan into eastern Indonesia via the Philippines, but they did not reach Oceania. Some of these may have also been involved in the transmission of Austronesian culture and languages, although they evidently had no demic role in the founding of Polynesia.”
Thus, although the results of the Soares, et al. study “rule out any substantial maternal ancestry in Taiwan for Polynesians, they do not preclude an Austronesian linguistic dispersal from Taiwan to Oceania between 3000-4000 years ago (3–4 ka), mediated by social networks rather than directly by people of Taiwanese ancestry but perhaps involving small numbers of migrants at various times.”
“The mtDNA patterns point to the possibility of a staged series of dispersals of small numbers of Austronesian speakers, each followed by a period of extensive acculturation and language shift. Overall, though, the mtDNA evidence highlights a deeper and more complex history of two-way maritime interaction between ISEA and Near Oceania than is evident from most previous accounts. Archaeological and linguistic evidence for maritime interaction between ISEA and Near Oceania during the early and mid-Holocene is strengthening, however, and it has been suggested that contacts might have been facilitated by sea-level rises and improvements in conditions on the north coast of New Guinea. Early to mid-Holocene social networks between New Guinea and the Bismarck Archipelago are marked by the spread of stone mortars and pestles, obsidian, and stemmed obsidian tools from approximately 8000 years ago (~8 ka) until before or alongside the advent of Lapita pottery in the Bismarcks at around 3500 years ago (~3.5 ka). The absence of early Lapita pottery on New Guinea suggests major disruptions to preexisting exchange networks within Near Oceania before or at approximately 3500 years ago (~3.5 ka), with increasing social isolation of some areas and increasing interaction between others.”
“There is also emerging evidence from both archaeology and archaeobotany for the spread of domesticates during the mid-Holocene, before the presumed advent of Austronesian dominance from approximately 4000 years ago (~4 ka). Molecular analyses suggest that bananas, sago, greater yam, and sugarcane all underwent early domestication in the New Guinea region. These cultivars and associated cultivation practices diffused westward into ISEA, where the plants and linguistic terms for them were adopted by Proto-Malayo-Polynesian speakers upon their arrival approximately 4000 years ago (~4 ka). The vegetative cultivation of these plants evidently occurred within ISEA before any Taiwanese influences became significant.”
The work suggests “a convergence of archaeological and genetic evidence, as well as concordance between different lines of genetic evidence.” The authors state that their “results imply an early to mid-Holocene Near Oceanic ancestry for the Polynesian peoples, likely fertilized by small numbers of socially dominant Austronesian-speaking voyagers from ISEA in the Lapita formative period, approximately 3500 years ago (~3.5 ka)”. They claim that their “work can therefore also pave the way for new accounts of the spread of Austronesian languages.”
Andersen, J. C. 1928. Myths and Legends of the Polynesians. Dover Publications.
“Background Note: Vanuatu”. US Department of State. Archived from the original on 13 May 2008.
Bedford, Stuart; Spriggs, Matthew (2008). “Northern Vanuatu as a Pacific Crossroads: The Archaeology of Discovery, Interaction, and the Emergence of the “Ethnographic Present””.Asian Perspectives. UP Hawaii. 47 (1): 95–120. JSTOR 42928734
Flad, R., Zhu, J., Wang, C., Chen, P., von Falkenhausen, L., Sun, Z., & Li, S. (2005). Archaeological and chemical evidence for early salt production in China. Proceedings of the National Academy of Sciences of the United States of America, 102(35), 12618–12622. http://doi.org/10.1073/pnas.0502985102
Dupuyoo, Jean-Michel. 2007. Notes on the Uses of Metroxylon in Vanuatu. Jardin d’Oiseaux Tropicaux Conservatoire Biologique Tropical, La Londe-les-Maures, France. PALMS 51(1): 31–38.
Huang, Fu–san (2005), A Brief History of Taiwan: A Sparrow Transformed into a Phoenix, Taipei: Government Information Office.
Soares, P., Rito, T., Trejaut, J., Mormina, M., Hill, C.,Tinkler-Hundal, E., Braid, M., Clarke, D. J., Loo, J-H., Thomson, N., Denham, T., Donohue, M., Macaulay, V., Lin, M., Oppenheimer, S., Richards, M. B.; 2011. Ancient Voyaging and Polynesian Origins, AJHG, Volume 88, Issue 2, p239 – 247, 11 February 2011.
Taiwan Today, Publication Date: December 01, 1991; The Last Salt Farmers; https://taiwantoday.tw/news.php?unit=12,29,33,45&post=22441
Williams, T. 1858. Fiji and the Fijians. London: Alexander Heyland.
Searching for Salt in New Zealand
By Eben van Tonder
7 July 2018
The southern coast of Africa – a unique place where human ghosts as old as 80 000 years walk the beaches. Minette and I got engaged here, celebrating those most ancient inhabitants on top of Table Mountain.
We chose a land where human ghosts only appeared around 1000 years ago to get married: the South Island of New Zealand. Until the arrival of the Polynesian colonists who became the Māori people, the land didn’t know the footsteps of humans.
Even after the first colonists arrived on the South Island of New Zealand, they only moved through the Cheviot Hills and along its beaches occasionally as nomadic hunters, 730 years ago. Their main seat of occupation was the Kaikoura Coast; the Cheviot coast, including Manuka Bay where we got married, was less preferred for hunting and fishing. This makes the area one of the oldest permanently uninhabited places on earth, a fitting place to celebrate a union which we never saw as a celebration of humanity, but rather of nature. (Wilson; 1993)
When another group of colonists arrived recently in the form of Europeans, they thought the land completely uninhabited. Allen Giles wrote of his early years on Mount Parnassus in 1890 that the virgin South Island produced a feeling of “frightful loneliness.” He described it as “a brand new land… untouched by the ghosts of men and their traditions. There appeared never to have been men. All was clean, pure and emotionless; unsullied by man’s occupation.” (Wilson; 1993)
Hints of what the Cheviot area looked like before the fires of the Polynesians replaced forests with grassland and scrub have been discovered at Treasure Downs. The discovery happened in 1986, when a farmer found moa bones on his farm in the hills east of Cheviot township. The moa were giant flightless birds, endemic to New Zealand, hunted into extinction by the Māori; by 1440 the extinction was complete. (Perry; 2014) An official archaeological dig revealed that there was once a small, deep lake in a natural basin in the limestone hills. The lake had a peaty margin and was fed by underground springs. About 5000 years ago the dominant species had been matai (a black pine endemic to New Zealand), pokaka (a native New Zealand forest tree), manuka, flax and fern. Well-preserved moa bones were also found in the former lake. (Wilson; 1993)
The area became a perfect reference point for my search for the ancient history of salt in New Zealand. It unlocks the history of its people and turns out to be more fascinating than I could have imagined: not in their use of salt, but in its complete absence!
The Hurunui River Mouth – A Food Gathering Station
Close to Manuka Bay is the Hurunui river mouth. Duff identified it as the location of a Māori food-gathering station. Among the artifacts found at the river mouth were a number of adze-heads. They were made from baked argillite originating from the Nelson area, and their shape identified them as being from the moa-hunter period, six to eight centuries ago. In 1946, a farmer ploughed up a forty-eight kg block of obsidian on his farm at the river mouth. The block was used to make flake tools, even though most of the flake tools discovered at the river mouth were of flint rather than obsidian.
Manuka Beach – a stopover location
On Manuka Beach, Māori ovens and artifacts have been found. (Wilson; 1993) These ovens occur throughout the region, and Nick Harris reports that there are Māori ovens on his farm in the area. These earth ovens were called hāngī or umu. Hāngī sizes varied depending on what was cooked: joints from moa and seals required large ovens, whereas fish or kūmara (sweet potato) could be cooked in smaller ovens. (Teara.govt.nz) These earth ovens were basically a pit dug in the ground. Stones were heated in the pit with a large fire, baskets of food were placed on top of the stones, and everything was covered with earth for several hours before uncovering. Exact cooking times and pit design varied depending on what was to be cooked, and the method remains in use to this day. The origin of the technology is Polynesian. (Ministry for Primary Industries, May 2013 and Genuine Maori Cuisine, 2012)
Ange Montgomery pointed out that there are karaka trees planted in the Cheviot area. The tree is native to the North Island, and its seeds were planted by the Māori at stopover places as a food source, another clear indication of locals using the area during migrations and other movements. Apart from its fruit, this fascinating tree was used as a bait tree: it attracted other animals to feast on its fruit, and these in turn were caught for food.
“Karaka kernel is highly toxic. Under the orange skin of the fruit is an edible pulp. The danger lurks in the kernel or stone of the fruit which contains the toxic alkaloid karakin.
The pulpy flesh can be eaten and to this day people harvest the berries and enjoy them. Some even use the flesh to make an alcoholic karaka drink.
The Maori used to use the poisonous kernels as well. They used a special method to prepare the kernels which include soaking, boiling and soaking again as well as cooking in a hangi for 24 hours.” (stuff.co.nz). Ange points out how amazing it is that people were able to work this kind of thing out. The power of observation and careful analysis of the natural world by ancients never ceases to amaze me in a rushed world where we have largely lost this ability!
Where is the salt? (New Zealand Meat Preservation)
It is fascinating that no salt industry existed on either of the two islands before the Europeans arrived. In light of the point just made about the karaka tree, this is not a trifling fact to escape our notice. No rock salt exists anywhere in New Zealand, but there are salt marshes, and the technology to extract salt from them by burning the plants that grow there is known from New Guinea, elsewhere in the region. Why this was not done in New Zealand is a question to be answered.
In New Zealand, food was preserved, among other methods, using fat. There is a story related to Lake Grassmere, or Kapara-te-hau as the Māori call it, of the great chief Te Rauparaha coming from the north “to take ducks to preserve in fat for winter food.” (theprow.org.nz)
The Māori preserved meat through smoking, sun drying, potting in fat, and chilling by dropping containers of meat into water. Sweet potatoes were stored in underground pits, but whether these pits were used for meat is something I do not know. Muttonbirds were placed in inflated kelp and preserved in their own fat. Folded bark from the tōtara tree was used as containers for meat preserved in fat. (Canterbury Museum)
Coastal Māori would have taken in added salt from seawater when they ate seafood. Those living inland would have had no added salt in their diet; their source of sodium would have been the sodium in the meat itself. This means their diet was somewhat similar to that of the San and Khoikhoi of Southern Africa, who also did not use salt, although there is evidence that they were occasionally exposed to salt traders from the north.
On the menu for the New Zealand adventure:
1. Build an earth pit and cook food.
2. Preserve meat using fat.
3. Eat from the karaka tree and ferment its fruit into an alcoholic drink. Always a worthwhile endeavor! 🙂
4. Understand why salt was never extracted.
The fact that salt was not part of the Māori diet is fascinating! How well these food technologies were entrenched by the early Polynesian colonists remains to be determined. How central was cooking to the Māori diet?
The quest of a lifetime!!
Canterbury Museum exhibition.
Perry, George L.W.; Wheeler, Andrew B.; Wood, Jamie R.; Wilmshurst, Janet M. (2014-12-01). “A high-precision chronology for the rapid extinction of New Zealand moa (Aves, Dinornithiformes)”. Quaternary Science Reviews. 105: 126–135. Bibcode:2014QSRv..105..126P. doi:10.1016/j.quascirev.2014.09.025. Retrieved 2014-12-22.
Ministry for Primary Industries (May 2013). He whakatairanga i nga ahuatanga mahi mo te tunu hangi – Food Safety practices in preparing and cooking a hangi (PDF). Wellington: New Zealand Government. ISBN 978-0-478-41430-1. Retrieved 6 October 2014.
“The New Zealand Maori Hangi: Foods, Preparations and Methods Used”. Genuine Maori Cuisine. Epuro Hands International Limited. 2005. Retrieved October 2, 2012.
An Introduction to the Total Work on Salt, Saltpeter and Sal Ammoniac – Salt before the Agriculture Revolution
by: Eben van Tonder
As a meat curing professional, my trade revolves around salt, spices, wood smoking and drying, and their application to meat in order to create exceptional dishes. Despite not being a trained historian or archaeologist, I am intrigued by the question of how humans came to use salt to preserve food and as a condiment.
The story of salt is one of the most fascinating tales of food science. Having studied sodium chloride and the various nitrate and ammonia salts for many years, I intend to put it all together in more or less chronological fashion, setting out for myself An Introduction to the Total Work on Salt, Saltpeter and Sal Ammoniac. Here, I begin the work by looking at Salt before the Agriculture Revolution.
I am fortunate to have many friends from around the world who are well-known and respected scientists in the various disciplines I touch on in these articles, and I invite them to share their insights and make corrections in the many places where this will be necessary.
THE EARLIEST SALTING OF MEAT
The first experience humans had of cooked meat probably came when they scavenged meat burned by wildfires, before we discovered how to make fire. Such meat would have been salted by the ashes, and the charcoal would have added an interesting flavour. By eating these burned carcasses, humans' first experience of roasted and cooked meat would have included salting, which takes the earliest inclusion of salt in meat, even though not deliberate, to a time before language even developed. There is evidence that cooking or roasting food did not become the universal way humans consumed meat even long after the use of fire was mastered (see How did Ancient Humans Preserve Food?), nor did adding extra salt to meat become a universal practice until very recently.
It is estimated that in the “5–7 million-year period since the evolutionary emergence of hominins (bipedal primates within the taxonomic tribe hominini; note that the newer term hominin supplants the previous term, hominid) ≥20 species may have existed. Similar to historically studied hunter-gatherers, there would have been no single universal diet consumed by all extinct hominin species. Rather, diets would have varied by geographic locale, climate, and specific ecologic niche.” (Cordain, L., et al.; 2005)
How likely was salt to have played a large part in their diets? What conditions could have sparked the use of salt as a preservative and as a condiment? Let’s see how far we can unravel the mystery.
SALT IS REQUIRED FOR SURVIVAL
Looking into the prehistoric past to unravel the mystery behind our use of salt, we first consider our biological need for it. Meat, blood, and milk contain far more salt than most plants. Nomads who subsisted on their flocks and herds, or hunters who regularly ate meat, did not need additional sources of salt. Agriculturalists, or nomads who for any number of reasons did not eat meat, required supplementary sources of salt.
Lack of sodium is life-threatening. “Sodium is critical for determining membrane potentials in excitable cells and participates in various metabolic reactions in the body. An adequate intake of sodium is required for optimal growth. Rats maintained on low sodium diets exhibit decreased bone and muscle weights, and required a daily intake of 300 μEq Na+ for normal growth of fat, bone, and muscle tissues.” In a study conducted by Bursey and Watson, “sodium restriction during gestation in rats increased the number of stillborn pups, led to smaller brain size and amount of protein per unit of wet brain tissue, and decreased total brain RNA.” Severe sodium restriction may negatively affect glucose metabolism and disturb normal blood viscosity. The distribution of intracellular and extracellular fluid volumes is dictated by sodium, and either a deficit or excess of sodium will alter overall fluid balance and distribution. Under normal circumstances, deviations from optimal body fluid homeostasis are corrected primarily by the kidneys, and proper renal handling of sodium is necessary for normal cardiovascular function. We can say that “survival and normal mammalian development are dependent on adequate sodium intake and retention.” (Morris, M. J., et al., 2008)
“The minimum sodium requirement for humans is arguable, but it is clear that the average daily intake in developed countries far exceeds what is needed for survival. The worldwide average salt intake per individual is approximately 10 g/day, which is greater than the FDA recommended intake by about 4 g, and may exceed what is actually necessary by more than 8 g.” (Morris, M. J., et al., 2008) Inadequate sodium intake can cause hyponatremia, a condition in which the sodium level in the blood is too low, due either to too much water or to too little sodium intake. This condition is characterized by nausea and vomiting, headaches, confusion, loss of energy, fatigue, restlessness, irritability, and muscle weakness, spasms or cramps.
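The quoted figures can be sanity-checked with simple arithmetic. A minimal sketch, assuming table salt (NaCl) is roughly 39% sodium by mass and using only the round numbers quoted above (10 g/day average, about 4 g above the recommendation, possibly more than 8 g above what is necessary):

```python
# Rough sanity check of the salt-intake figures quoted above.
# Assumption: table salt (NaCl) is ~39.3% sodium by mass
# (Na = 22.99 g/mol, Cl = 35.45 g/mol).
NA_FRACTION = 22.99 / (22.99 + 35.45)

def sodium_from_salt(salt_g: float) -> float:
    """Grams of elemental sodium in a given mass of table salt."""
    return salt_g * NA_FRACTION

average_intake_g = 10.0                  # worldwide average quoted above
recommended_g = average_intake_g - 4.0   # the average exceeds the recommendation by ~4 g
necessary_g = average_intake_g - 8.0     # and may exceed what is necessary by >8 g

for label, grams in [("average", average_intake_g),
                     ("recommended", recommended_g),
                     ("necessary (upper bound)", necessary_g)]:
    print(f"{label}: {grams:.0f} g salt/day = {sodium_from_salt(grams):.1f} g sodium/day")
```

On these numbers, 10 g of salt carries roughly 4 g of sodium, while the minimum physiological need would sit below about 1 g of sodium per day, which is what makes the gap between intake and need so striking.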
EXAMPLES OF HYPONATREMIA FROM PRIMITIVE SOCIETY
David Livingstone describes conditions he often saw in the early 1800s on his travels in Africa, where poor people were forced to live on a vegetarian diet alone and, as a result, developed indigestion. His comment came in the context of a reference to the Bakwains, part of the Bechuana people, who allowed rich and poor alike to eat from the meat hunted. He mentions that the doctors knew what the cause of the indigestion was, and that it was related to a lack of salt intake. (Hyde, A., et al.; 1876: 150)
It is fascinating that Livingstone describes being deprived of salt for months on two later occasions in his life and yet having no cravings for it (Hyde, A., et al.; 1876: 150). Interestingly enough, he reported cravings for meat and milk, which he knew contained enough salt to prevent the onset of symptoms associated with a low-salt diet.
DO WE NATURALLY CRAVE SALT IF SODIUM LEVELS ARE TOO LOW?
We have seen that we need salt, but do we know that we need it? Do we feel “sodium deprived” and intuitively seek out salt? We have four or possibly five taste sensors in our mouths. Of the five, one is wholly dedicated to tasting the sodium ion, the charged atom responsible for the love of salt. Herbivorous and omnivorous animals are similarly equipped. Interestingly, a study on rats showed that some of them naturally recognize salt deficiency as the cause of their hyponatremia, while others had to learn this through experience. Studies have shown how long-term changes in the brain resulting from hyponatremia may underlie an increased appetite for salt in animals. There is, in other words, a biological reason for animals to be “directed” to salt. (De Luca Jr, L. A., Menani, J. V., Johnson, A. K. (Editors), 2014: 4) Animals either naturally crave salt when in a sodium-deficient condition or, in some cases, clearly develop the craving through experience.
If we naturally craved salt, it would explain our love for it and its dominance in our diets: people would have naturally sought out salt deposits to amend their diets. The fact, however, is that the salt appetite of humans does not fit the biological model. There are great similarities between humans and other animals in how we handle sodium, but also very important differences. The sodium ion is essential for both humans and animals, and both have special sensors dedicated to its detection. Humans and animals share the same physiological systems that regulate it in the body, both ingest far too much of it, and both show that a lack of sodium immediately following birth enhances the love of it. But unlike animals, people do not enjoy pure salt. Humans do not like it in water, while some rats have been shown to prefer it. Importantly, humans do not respond to sodium deficiency by craving salt, and it never becomes a learned craving after an incident of hyponatremia. (De Luca Jr, L. A., Menani, J. V., Johnson, A. K. (Editors), 2014: 5)
Animals that have been deprived of salt increase their salt intake robustly. Studies in rats showed that once deprived of it, they permanently increase their consumption of it, but not so humans. The dedicated sodium receptors in humans do not direct us to salt when there is a deficiency in our bodies; there are records of humans dying from hyponatremia with salt all around them. Many studies in humans have tried to prove the opposite, but in every case the results are inconclusive at best. The evidence is clear that, unlike animals, humans seek sodium to satisfy our palate, not to save our lives. (De Luca Jr, L. A., Menani, J. V., Johnson, A. K. (Editors), 2014: 5)
In humans, there is no satisfactory current explanation for the prominence of our sodium taste receptors or “for the powerful influence it exerts on our predilection for salt as the prime condiment and food additive that gives taste and tang to our food and is of no nutritional necessity.” (De Luca Jr, L. A., Menani, J. V., Johnson, A. K. (Editors), 2014: 5) The question comes up: why not? There must have been a time in our prehistory when we did not need this trait, or when having it would have been a disadvantage.
Is it that our diets in prehistoric times were varied enough, and contained enough meat, that we did not need to “crave” salt? What did human nutrition look like during the Stone Age and, in particular, 100 000 years ago, when we have clear evidence of cognitive, cultured humans in southern Africa? We can push the date even further back. Ben Panko, writing for smithsonian.com, reported on June 8, 2017, on the work of Jean-Jacques Hublin, an anthropologist at the Max Planck Institute for Evolutionary Anthropology, who studied a fossil from a cave in central Morocco. The analysis of the bones revealed that humans had lived there roughly 300,000 years ago, 100 000 years earlier than we previously thought. Hublin “suggests that, by 300,000 years ago, modern humans had already spread across Africa.” (Smithsonianmag)
What did those early humans look like and what changed over the 300 000 years?
“Using advanced imaging technology to 3D scan and measure the recovered skulls, the researchers were able to create full facial reconstructions, showing a striking similarity to the appearance of humans today. “Their face is the face of people you could meet in the street now,” Hublin told the Financial Times. “Wearing a hat they would be indistinguishable to us.” The hat would be necessary because the major noticeable difference between these Homo sapiens and us is a differently shaped head, caused by a brain that was as large as ours, but longer and less round. Rounder brains are a major feature of modern humans, though scientists still can’t say exactly how it changed the way we think. “The story of our species in the last 300,000 years is mostly the evolution of our brain,” Hublin says.” (Smithsonianmag)
This is very important because nothing of significance could have changed in our biology over the last 10 000 years, since animals were domesticated and we began practicing agriculture; over the last 300 000 years, however, a great deal changed in relation to our brains. Evolutionary adaptations could not have taken place since the invention of agriculture to compensate for the modern food we eat, but since the emergence of Homo sapiens, much has changed in our make-up.
WHAT WAS THE STONE AGE DIET?
There are several ways we can look back into our ancient past and try to unravel what our food looked like.
John D. Speth (2017) makes a compelling case that the earliest meat humans ate was putrid and fermented. Another method of identifying what we ate in the Stone Age is suggested by Cordain et al. (2005) with their evolutionary discordance theory. According to them, contemporary chronic diseases and health issues are partially, if not largely, due to an evolutionary “clashing” with new patterns introduced in our modern world after the agricultural revolution approximately 10 000 years ago, when agriculture and animal husbandry were developed.
Cordain et al. explain it as follows: “contemporary humans are genetically adapted to the environment of their ancestors—that is, to the environment that their ancestors survived in and that consequently conditioned their genetic makeup. There is growing awareness that the profound environmental changes (eg, in diet and other lifestyle conditions) that began with the introduction of agriculture and animal husbandry ≈10 000 y ago occurred too recently on an evolutionary time-scale for the human genome to adapt. In conjunction with this discordance between our ancient, genetically determined biology and the nutritional, cultural, and activity patterns in contemporary Western populations, many of the so-called diseases of civilization have emerged.”
An example in Cordain et al.’s case is that refined sugar consumption has increased since 500 BC, and high-fructose corn syrup consumption since the 1970s, which may have caused discordance. “Lacking evidence directly associated with hominin diets, it is left unknown how simple sugars may have actually shaped evolution of hominins.
The data, however, on ape diets suggests a fruitarian ancestry governed by plants. Although the sugars of these fruits are evidenced to have been accompanied by diverse dietary fiber sources, nutritional variations may have occurred not unlike refined sugars and large amounts of fructose. It is also unclear why fructose, heavily associated with diabetes, should be prevalent in the main foods of a hominin ancestral diet.
Science must ultimately make up perceptions on a factual matter regarding nutrition and medicine where historical and archeological evidence fall short and can only present clues.
Double-blind, randomized cross-over designed trials on each discordance—cereals, refined sugars, refined vegetable oils, alcohol, salt, fatty domestic meats, etc.—and how differing amounts affect health must be researched for proper nutritional determinations.
For example, two interventions over a year’s time could be performed in which one group could be given wild-caught salmon and deer meat and the placebo group would receive farmed salmon and deer meat. Blood lipids and abdominal fat stores can be measured throughout the year.
Because of possible interplay from each discordance that should not be discounted, double-blind randomized cross-over trials should also include versions of whole, supposed Paleolithic diets.
Each study performed, in turn, may also offer revelations into evolutionary past. And perhaps, to make things more interesting, the studies should also be performed on bonobos and chimpanzees.” (David Despain. Evolvinghealthscience)
What we can say for sure from the work of Speth (2017) and Cordain et al. (2005) is that, before the development of agriculture and animal husbandry, hominin dietary choices would necessarily have been limited to minimally processed wild plant and animal foods. If the meat was not fresh, it would have been putrid and fermented, and minimally cooked or warmed. Salt could have played a role in hominin, and more particularly early Homo sapiens, diets if we too once had a natural craving for salt like other animals, which we subsequently lost. If this was the case, however, one would expect to see evidence of salt mining around the time when we know for sure that cognitive, cultured humans existed. Such a time and location is 100 000 years ago in southern Africa. Despite vast natural salt resources in salt pans and salt springs in the region, there is no evidence that any of these were ever mined, or that communities sprang up around the salt resources to exploit them, until well into the 1800s.
Cordain et al. (2005) state it as follows. “It is likely that Paleolithic (the old stone age which began 2.6 million years ago and ended 10 000–12 000 y ago) or Holocene (10 000 y ago to the present) hunter-gatherers living in coastal areas may have dipped food in seawater or used dried seawater salt in a manner similar to nearly all Polynesian societies at the time of European contact. However, the inland living Maori of New Zealand lost the salt habit, and the most recently studied inland hunter-gatherers add no or little salt to their food on a daily basis. Furthermore, there is no evidence that Paleolithic people undertook salt extraction or took interest in inland salt deposits. Collectively, this evidence suggests that the high salt consumption (≈10 g/d) in Western societies has minimal or no evolutionary precedent in hominin species before the Neolithic period.” (Cordain, L. et al; 2005)
BUILDING UP A SUBSTANTIAL BODY OF KNOWLEDGE ABOUT SALT FROM VERY EARLY
Humans would not have naturally gravitated towards additional sodium intake beyond what is found as a constituent of our food. What possible scenarios, then, could have led humans to “discover” its nutritional value, its power as a remedy for hyponatremia, and its action as a preservative? It is easy to imagine how early humans would have seen animals licking salt and mimicked the behaviour even in the absence of a natural craving for sodium. The fact that salt cures hyponatremia, and the link between hyponatremia and an exclusively vegetarian diet, is also something that early humans could have discovered: those living on plants alone would have developed the condition while those who ate meat did not, and the contrast would have been clear. It is equally easy to see how some of the people who relied only on a plant-based diet could have ingested salt, whereupon the symptoms disappeared.
I focus on what happened 100 000 years ago in Africa since it is here, in southern Africa, that we find the oldest evidence of cognitive and cultured human beings. Whether the link with salt was made 100 000 years ago did not depend only on the cognitive ability of people to recognize it. The question is rather: by what date would one encounter people with an exclusively vegetable diet; where in the world was it most likely to find such societies; and how likely would they have been to have access to salt and to have discovered that salt resolves hyponatremia? This relates to the nutritional aspect of salt.
On the other hand are the questions of “when” and “where” the preserving power of salt in meat was most likely discovered, and, by extension, the power of sal ammoniac and saltpeter in meat preservation. What conditions would favour these discoveries, and so help us identify the “where” and “when”? I venture some guesses. There can be little doubt that putrefied meat and fermentation predate salt preservation. (How did Ancient Humans Preserve Food?) The ancients would have noticed the preserving power of simply submerging a carcass in water and storing it there for future consumption. They would have observed this in animals which drowned in bodies of water. There is little doubt that carcasses retrieved from salt water would have been different in terms of taste and preservation.
Interesting components come together in such a scenario which surely would have led to the discovery of salt’s preserving powers by simple observation: salt water (brine), dead animals, the state of decomposition relative to animals retrieved from fresh water, and cognitive, cultured humans who were able to make these links from events separated by time and space. We know that all of these were present in southern Africa from at least as early as 100 000 years ago, which means we can speculate that some knowledge of salts could have started to enter human culture by this time. As individuals started to make these observations, the modes of dissemination, and the speed with which knowledge was relayed from group to group, would have become important. It is interesting that there is no evidence of salt mining in southern Africa until much later.
Similar to animals drowning in water and carcasses deliberately being stored under water as one of the oldest forms of meat preservation, animals which died in desert areas where the wind blew salt-laden sand onto the carcass must equally have been subjected to a different rate of decomposition compared to freshly killed animals untreated by salt or water immersion. It is this which I believe played a crucial role in the discovery of nitrate curing in the Turfan area where, by the early Bronze Age, bodies were subjected to natural mummification, partly as a result of nitrate-rich sand blowing over them. Thinking about it makes it clear that examples of natural salt preservation must have been all around early humans to observe. The question is really which hominin species was able to make the connection cognitively and, in the case of southern Africa, why it was never developed any further. It is safe to say that salt preservation never played any part in the indigenous cultures of southern Africa.
There must have been considerable time between the discovery of the value of salt and its entry into popular culture. One of the big reasons for this was availability. We can identify when this happened by identifying when the mining of salt emerged. Mining would necessarily have been preceded by the discovery of salt’s value, which created a demand and which, in turn, led to its mining.
The next article will feature some stone-age chemistry as we look at the analytical techniques that were required to start separating out different kinds of salt. We also look at the oldest salt mines on earth to start forming a picture of where cultures emerged, based on this unique mineral.
Cordain, L., Eaton, S. B., Sebastian, A., Mann, N., Lindeberg, S., Watkins, B. A., O’Keefe, J. H., Brand-Miller, J.; Origins and evolution of the Western diet: health implications for the 21st century, The American Journal of Clinical Nutrition, Volume 81, Issue 2, 1 February 2005, Pages 341–354, https://doi.org/10.1093/ajcn.81.2.341
De Luca Jr, L. A., Menani, J. V., Johnson, A. K. (Editors). 2014. Neurobiology of Body Fluid Homeostasis: Transduction and Integration. CRC Press.
Engelbrecht, J. A. 1936. The Korana. Maskew Miller, Ltd., Cape Town.
Needham, J. 1980. Science and Civilisation in China, Volume 5. Chemistry and Chemical Technology. Cambridge University Press.
Schlebusch CM, Malmström H, Günther T, Sjödin P, Coutinho A, Edlund H, Munters AR, Vicente M, Steyn M, Soodyall H, Lombard M, Jakobsson M. 2017. Southern African ancient genomes estimate modern human divergence to 350,000 to 260,000 years ago. Science. 2017 Nov 3; 358(6363): 652–655.
Honey – powerful preserving and healing power (Fascinating Insights from Arabia)
Eben van Tonder
17 May 2018
The link between meat preservation technology and embalming is by this time well established and I have written extensively on the subject. The use of honey in meat formulations and meat curing is no exception and we are introduced to the topic of honey in the preservation of human remains by the 16th-century Chinese medical doctor and pharmacologist, Li Shizhen, reporting on a practice from Arabia.
Three aspects emerge. We have seen in our article, How did Ancient Humans Preserve Food? that the ancients were not grossed out by human excrement or urine. Here we see that they were likewise not revolted by the use of human body parts. The second is the use of honey in the preservation of these body parts. This becomes another example where the same preserving technology is in use to preserve meat and human bodies alike or, as in the particular case, body parts.
The third interesting aspect is the level of technological development in the use of honey. In the minds of the ancients, it would seem, honey had a preeminent place among the different preservation technologies in that they recognized that it had not only great value for the dead but also healing power for the living. This is a tremendous realization in a day when sodium chloride salt, nitrate, nitrite, sulfate, and ammonium are being highlighted for their detrimental effects on the lives of the living if not used with extreme care. In Li Shizhen’s account, there is an amazing mix of the healing and preservation power of honey, blended with ancient beliefs and legends about the essence of life and the human body. The account may, in its composition as handed down by Li, be mostly fictional, but there are undoubtedly enough points of contact with reality to be of huge interest. The mere fact that it was romanticised to the level of Li’s account points to fascinating underlying truths that can be seen through the fog of the legend. Let’s first look at the man who brings us this fascinating story.
“Li Shizhen was a highly influential figure in Chinese medicine and the author of the revered text Bencao Gangmu (Great Compendium of Herbs). The Bencao Gangmu is one of the most frequently mentioned books in the Chinese herbal tradition, rivaled only by the Shanghan Lun. Li Shizhen’s image (see Figures 1 and 2) is to be found at every traditional medical college in China and in any illustrated book about the history of Chinese medicine. Li Shizhen was the subject of a 1956 Chinese movie about his life and accomplishments. The modern kung-fu actor Jet Li described Li Shizhen as the person he most looks up to. There is a Li Shizhen award given to doctors and researchers who make valuable contributions to traditional Chinese Medicine. He is further given recognition in the labeling of herb products and there is even a Li Shizhen brand of herbs. One can say that in the pantheon of the greatest scholars of traditional China, Li Shizhen is the last towering figure to be recognized and, by virtue of that position, the main scholar who has been worthy of emulation ever since.” (LI SHIZHEN by Dharmananda, S.) Li Shizhen will later become a key figure in understanding the Chinese view of salt, but more of that later.
The concoction described by Li was created by steeping a human cadaver in honey after the dying person in a way “gave himself” to the process, while still living. It is mentioned only in Chinese medical sources, most significantly by Li Shizhen in his Bencao Gangmu.
“Relying on a second-hand account, Li reports a story that some elderly men in Arabia, nearing the end of their lives, would submit themselves to a process of mummification in honey to create a healing confection.”
“This process differed from a simple body donation because of the aspect of self-sacrifice; the mellification process would ideally start before death. The donor would stop eating any food other than honey, going as far as to bathe in the substance. Shortly, his feces (and even his sweat, according to legend) would consist of honey. When this diet finally proved fatal, the donor’s body would be placed in a stone coffin filled with honey.
After a century or so, the contents would have turned into a sort of confection reputedly capable of healing broken limbs and other ailments. This confection would then be sold in street markets as a hard to find item with a hefty price.
The first record of mellified man is found in Li Shizhen’s 1596 classic Chinese pharmacopeia Bencao Gangmu (section 52, “Man as medicine”) under the entry for munaiyi (木乃伊 “mummy”). Li quotes the c. 1366 Chuogeng lu (輟耕錄 “Talks while the Plough is Resting”) by the Yuan dynasty scholar Tao Zongyi (陶宗儀) or Tao Jiucheng (陶九成).
According to [Tao Jiucheng] in his [Chuogenglu], in the lands of the Arabs there are men 70 or 80 years old who are willing to give their bodies to save others. Such a one takes no more food or drink, only bathing and eating a little honey, till after a month his excreta are nothing but honey; then death ensues. His compatriots place the body to macerate in a stone coffin full of honey, with an inscription giving the year and month of burial. After a hundred years the seals are removed and the confection so formed used for the treatment of wounds and fractures of the body and limbs—only a small amount taken internally is needed for cure. Although it is scarce in those parts the common people call it “mellified man” [miren 蜜人], or, in their foreign speech, “mu-nai-i”. Thus Mr. [Tao], but I myself do not know whether the tale is true or not. In any case I append it for the consideration of the learned.
According to the historians of Chinese science Joseph Needham and Lu Gwei-djen, this content was Arabic, but the story got mixed up with a Burmese custom of preserving the bodies of abbots and high monks in honey, so that “the Western notion of a drug made from perdurable human flesh was combined with the characteristic Buddhist motif of self-sacrifice for others”. In her book Stiff: The Curious Lives of Human Cadavers, writer Mary Roach observes that Li Shizhen “is careful to point out that he does not know for certain whether the mellified man story is true.” (Mellified Man)
“Li calls the concoction miren (蜜人), translated as “honey person” or “mellified man”. Miziren (蜜漬人 “honey-saturated person”) is a modern synonym. The place it comes from is tianfangguo (天方國 “divine square [Kaaba] countries”), an old name for Arabia or the Middle East. The Chinese munaiyi (木乃伊), along with “mummy” loanwords in many languages, derives through Arabic mūmīya (mummy) from Persian mūm “wax”.
Mellification is a mostly obsolete term for the production of honey, or the process of honeying something, from the Latin mellificāre (“to make honey”), or mel (“honey”).” (Mellified Man)
Physical properties of honey
“Honey has been used in funerary practices in many different cultures. Burmese priests have the custom of preserving their chief abbots in coffins full of honey. Its reputation both for medicinal uses and durability is long established.” There are stories from the Arabian peninsula of deceased children of wealthy families being preserved in sealed jars of honey. The preserving properties of honey are well established, and it has been recognized and used for its medicinal value for millennia. I discuss it in detail in my article, Honey in cured meat formulations.
“Antibacterial properties of honey are the result of the low water activity causing osmosis, hydrogen peroxide effect, and high acidity. The combination of high acidity, hygroscopic, and antibacterial effects have led to honey’s reputation as a plausible way to mummify a human cadaver.” (Mellified Man)
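The mechanism in the quote can be made concrete as a two-hurdle check: a microbe can grow only if both the water activity and the pH of the medium meet its minimum requirements. The numeric values below are commonly cited approximations (honey's water activity around 0.6 and pH around 3.9; typical spoilage bacteria needing a water activity above roughly 0.91 and a pH above roughly 4.5), assumed here for illustration rather than drawn from the article's sources:

```python
# Toy model of the two antibacterial hurdles mentioned in the quote.
# A microbe grows only if BOTH the water activity (a_w) and the pH of
# the medium are at or above its minimum requirements. All values are
# common approximations used purely for illustration.

from dataclasses import dataclass

@dataclass
class Medium:
    name: str
    water_activity: float  # a_w, between 0 and 1
    ph: float

def supports_bacterial_growth(m: Medium,
                              min_aw: float = 0.91,
                              min_ph: float = 4.5) -> bool:
    """True if typical spoilage bacteria could grow in this medium."""
    return m.water_activity >= min_aw and m.ph >= min_ph

honey = Medium("honey", water_activity=0.60, ph=3.9)
fresh_meat = Medium("fresh meat", water_activity=0.99, ph=5.7)

# Honey fails both hurdles (a_w and pH too low); fresh meat clears both.
```

On these assumed values, honey blocks bacterial growth twice over, which, combined with its hygroscopic nature and hydrogen peroxide effect, is what made it a plausible mummification medium.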
Alexander the Great allegedly ordered that his body be embalmed with honey; upon his death in Babylon in 323 BC, he was supposedly placed in a golden coffin filled with the purest white honey and taken back to Macedonia. (Aufderheide, A. C.; 2003: 45)
Another famous figure similarly preserved with honey is King Edward I of England, who died in 1307 and “was found to have hands and a face that was remarkably well preserved due to having been coated with a layer of wax and honey.” (Brent Swancer)
Similar medicine practices
“Both European and Chinese pharmacopeias employed medicines of human origin such as urine therapy, or even other medicinal uses for breast milk. In her book, Roach says the medicinal use of mummies, and the sale of fake ones, is “well documented” in chemistry books of 16th to 18th centuries in Europe, “but nowhere outside Arabia were the corpses volunteers”.” (Mellified Man)
“Mummies were a common ingredient in the Middle Ages until at least the eighteenth century, and not only as medicine but as fertilizers and even as paint. The use of corpses and body parts as medicine goes far back—in the Roman Empire the blood of dead gladiators was used as treatment for epilepsy.” (Mellified Man)
“In his book, Bernard Read suggests a connection between the European medieval practices and those of the Middle East and China:
The underlying theories which sustained the use of human remedies, find a great deal in common between the Arabs as represented by Avicenna, and China through the [Bencao]. Body humors, vital air, the circulations, and numerous things are more clearly understood if an extended study be made of Avicenna or the Europeans who based their writings on Arabic medicine. The various uses given in many cases common throughout the civilized world, [Nicholas] Lemery also recommended woman’s milk for inflamed eyes, feces were applied to sores, and the human skull, brain, blood, nails and “all the parts of man”, were used in sixteenth-century Europe.”” (Mellified Man)
If one studies every single reference to the use of honey to preserve corpses on its own, one could find problems with, and alternative interpretations of, most of the references. What is clear, though, is that honey was used in embalming and the preservation of corpses, and from Arabia there is enough evidence that it was practiced in some form. It is then easy to conclude that a body somewhere was preserved in honey and that the coffin was sealed and re-opened a long time afterward. Whether it was exactly 100 years later is not the issue; numbers of years were often used in antiquity to indicate long time spans without the need to take them literally. The 100 years could easily refer to simply a long time.
In line with the widespread use of mummies in remedies, it is easy to see how such a mummy could have been opened and the contents removed and reworked into a remedy that was sold, consistent with what was done with other mummies. Link with this the fact that honey is a known cure. To this day there are people who swear that a teaspoon of honey staves off colds and flu and relieves other ailments, and it is easy to see that the same was believed in antiquity, given the inherent healing qualities of honey. Who else but 60- or 70-year-old men would have been prime consumers of such a remedy? It could just as easily have been such a practice that became linked to the honey-preserved mummies. The set of events I describe may just as well be the basis of Li’s story.
The story of old men commencing the honey-soaked embalming process while still alive, sacrificing themselves to become medicine a hundred years later, is romantic, disturbing and appealing all at the same time. Whichever way you take it, the story highlights the value of honey as a preservative and a medicine, which makes it ideal for cured meat preparations.
Aufderheide, A. C. 2003. The Scientific Study of Mummies. Cambridge University Press.
How did Ancient Humans Preserve Food?
By Eben van Tonder
3 April 2018
I have been studying the origins of curing technology for many years. I realised that, in order to properly understand the technology and its origins, I would gain the best insights by starting at the very beginning and asking: what came before salt preservation of meat (whether with sodium chloride; sodium, potassium or calcium nitrate; or ammonium chloride)? Humans must have stored food from very early in our development. How did we do it? What techniques did we use even before fire was widely used for cooking or roasting?
Many years ago, animals were slaughtered on the beach in Cape Town at the bottom of Adderley Street. This was nothing new for the time, since abattoirs were often erected close to or next to bodies of water to “carry away” the blood and offal. What was interesting about the Cape Town picture of that time was the meeting of the Dutch, with their insistence on very straight lines, and the lives of the indigenous people living at the Cape with, well, no insistence on straight lines whatsoever. An eyewitness recalled the picture. Around the slaughtering area, some indigenous people would sit, waiting for the animals’ evisceration, the removal of the viscera or internal organs. They knew that the Dutch, and for that matter no Europeans, would eat these. After the viscera fell to the ground, the butcher would pick them up and throw them to the natives, who ate every part of the animal.
I never imagined that this amazing word picture from the last half of the 1600s at the Cape of Good Hope would in later years not only illustrate to me the difference between European and African perceptions of food, but also form an amazing link back to the early Paleolithic period of our ancestors who roamed the earth hundreds of thousands of years ago.
In his article, Speth argues for the deliberate use of “fermented (often literally rotted or putrefied) meat, fish, fat, and stomach contents” (Speth, J. D.; 2017) in the Paleolithic record, in particular among the Neanderthals and Upper Paleolithic peoples, roughly covering the period between 50,000 and 10,000 years ago, and in particular people living in arctic and subarctic conditions. The importance of this study to our investigation into food preservation technology in pre-history is self-evident. Besides this, from a microbiological standpoint, we are interested in his assessment of the impact, if any, that “the repeated consumption of millions upon millions of bacteria and their complex metabolites, together with the normal postmortem biochemical products of endogenous muscle and lipid decomposition, might have had on the carbon and nitrogen stable isotope values of Neanderthals and Upper Paleolithic peoples.” (Speth, J. D.; 2017)
This article is important to me in that it fills important gaps in a better understanding of meat preservation technology. There is evidence that a detailed understanding of sodium chloride, sodium, potassium and calcium nitrate salts, and ammonium chloride (sal ammoniac) existed at the time of the Mesopotamian civilization, as early as 3000 BCE. I have long wondered how people stored meat and other produce before the effect of salt on meat was discovered, let alone nitrates and ammonia. I have wondered about the relationship between Neanderthals and salt: like our human ancestors, Neanderthals must surely have followed game to salt pans and salt springs, must have been aware that animals consume salt from time to time, and must have become aware of its benefits from very early on.
Besides this, when salt became expensive and a much sought-after commodity, and when salt served as a currency, I am sure that the demand and supply of salt, and therefore its value, took on a life of its own that did not always follow strict utilitarian value, as is the case today with gold or, to a lesser extent, platinum. Even more, there would surely have been extensive experimentation with salt in every aspect of its application in the preparation and preservation of different foods by salt producers in order to broaden its utilitarian value, just as was the case when soya was first proposed as an alternative source of protein and its inclusion in meat recipes was the result of extensive experimentation during the 1970s and 80s. All this means that salt’s use in food is not as intuitive as one may assume.
Storage of food, including meat, must have been practiced since the dawn of humanity, and I was certain of nothing except that humans did not start food preservation with salt. This article offers the first credible predecessor of salt preservation of food, a technique that only became universally practiced deep into the 1900s. It was, for most of human history, not a widely practiced preservation technique. Even Europe, China and the Middle East only began using salt to preserve and cure meat relatively recently, and the rest of the world really only embraced it in very recent times as part of Western culture.
Fermentation as a “low tech” alternative to cooking
The intriguing predecessor of salt preservation of produce proposed by Speth is the deliberate fermentation/putrefaction of meat for the purpose of storage, so that it can be consumed later.
He begins by listing the benefits of fermentation, identifying it as a low-tech alternative to cooking. “First and foremost”, Speth says, “fermentation (including more advanced states of putrefaction) of meat and fish accomplishes outside of the body much of what would normally happen to these foods in their unfermented state inside the body after one has ingested them.” If one looks carefully at the processes of fermentation and putrefaction, one realizes that they “produce many of the same benefits that cooking does, but without the need for fire or fuel.”
The Benefits of LAB Fermentation
Speth points out that lactic acid bacteria (LAB), a key participant in fermentation, produce “a wide range of enzymes, toxins, and other metabolites that inhibit invasion by unwanted pathogens.” One such pathogen is the bogeyman of food science, Clostridium botulinum. This ability of LAB “to block the proliferation of pathogens provides arctic and subarctic foragers with an extremely effective ‘low-tech’ way of preserving and storing meat and fat for months — even through the warmer months of the year — in environments where the weather often was too damp and rainy to dry these foods effectively, and especially in environmental contexts where fuel shortages may have precluded the routine use of fire to speed up the drying process. The food was often simply placed for weeks or months in pits in the ground, or under piles of rocks, or within specially made seal-skin ‘pokes’, or submerged in bogs, rivers, or shallow ponds.” (Speth, J. D.. 2017)
He notes that this technique has been documented around the world. “For example, 17th-century Dutch colonists observed Khoisan hunter-gatherers (‘strand looper’ Bushmen) along the Namibian and South African coast scavenging meat and blubber from stranded whales and storing it in pits along the shore for later use (Budack 1977; Raven-Hart 1971; see also Cawthorn 1997 for similar practices among Maori of New Zealand and Darwin 1860: 213–214 for comparable treatment of beached whales in the high-latitude environments of Tierra del Fuego). A Native American war party taking a group of captives from farms in western Pennsylvania to the Niagara frontier area in western New York in 1780 apparently did not hesitate to eat a putrid and maggot-infested deer (or elk) they killed en route (Walton 1790: 103–104). And as Frank Marlowe (2004b: 84) notes, even the well-known Hadza in Tanzania were not averse to utilizing putrid meat: ‘…the Hadza often eat very rotten, week-old meat they scavenge from carnivores.’” (Speth, J. D.. 2017)
I am, of course, well familiar with this preserving effect of LABs from salami, but I have never before considered their full benefit.
Speth states that “the preservative effects of LAB fermentation also are invaluable in preventing fats from becoming rancid. For arctic and subarctic peoples subsisting on diets that were composed almost entirely of animal foods, the large quantities of fatty meat and fish that such a diet demands can be very difficult to dry quickly enough and thoroughly enough to prevent the lipids, most especially the long-chain polyunsaturated fatty acids (LCPUFA’s), from turning rancid and spoiling (see the discussion in Romanoff 1992). Such spoilage can actually pose a health risk by giving rise to a number of undesirable and potentially toxic substances in the meat or fish. The most important of these are a class of compounds known as hydroperoxides, unstable oxidation products that can undergo further breakdown, forming a variety of carbonyl group compounds such as aldehydes and ketones (Kubow 1990; St. Angelo 1992). These same processes can also lead to destruction of important vitamins in the fish or meat, particularly vitamin C, but also vitamins A and E (Flick et al. 1992: 184).”
“Fermentation provides an effective means of inhibiting the ‘autoxidation’ of the lipids that leads to rancidity. When decomposition first begins, the microflora in the carcass at the time of death is predominantly composed of aerobic taxa. These bacteria deplete the available oxygen, rapidly transforming the aerobic environment in the carcass (and meat) toward one favoring fermentative anaerobic taxa (Finley et al. 2015: 628; Forbes and Carter 2016: 19; Hyde et al. 2013: 7). For the very same reason, fermentation may be one of the most effective ways to preserve and store the lipid-rich brains of both fish and mammals, because the two principal LCPUFA’s in brain, docosahexaenoic acid (DHA) and arachidonic acid (AA), are very unstable and quickly turn rancid, even when refrigerated.” (Speth, J. D.. 2017)
“Finally, LAB fermentation creates important B-complex vitamins, most notably vitamin B12, riboflavin, and folate; and, by reducing or obviating the need for cooking, whether by roasting or by boiling, and by retarding the autoxidation of lipids, fermentation favors preservation of these and other vitamins that might otherwise be diminished or lost (de Moreno de LeBlanc et al. 2015).” (Speth, J. D.. 2017)
The search for Vitamin C, or the antiscorbutic factor as it was then called, was frantic during the 1800s and early 1900s. The picture I was presented with was that a meat diet equals a Vitamin C deficient diet. Speth focuses on Vitamin C and presents very important facts. “Early Euroamerican explorers in the Arctic were plagued by scurvy stemming from shortages of vitamin C, while their unacculturated Native hosts were not (Fediuk et al. 2002; Stefansson 1935). The problem often arose, not from what the Westerners were eating — they were often relying on the same animals as their hosts were — but from differences in what parts of the animals they ate and how they prepared the food. The outsiders typically preferred muscle meat over entrails and organs, and they generally wanted it cooked (preferably roasted or grilled), not raw or frozen, and cooked thoroughly, especially if the meat was ‘tainted’ (as it often was on long outings in the ‘bush’). Coastal Inuit and inland Natives, on the other hand, ate virtually every part of the animal, including all of the internal organs, blood, testicles, foetus, amniotic fluid, intestines, chyme, brain, and eyes; and, they typically ate these either raw (sometimes still warm straight from the animal, sometimes partly frozen), or fermented (in fact, often thoroughly putrefied).” (Speth, J. D.. 2017)
“As it turns out, muscle is virtually devoid of vitamin C, regardless of how it is prepared, whereas many of the organs and body fluids that most Western visitors found disgusting and assiduously avoided (except when their expeditions were teetering on the brink of starvation) are precisely the portions of the animal that have the highest vitamin C levels (especially the brain, liver, spleen, and testicles, but also the thymus, pancreas, eyes (retina), adrenal glands, pituitary gland, and to a lesser extent the kidneys, heart, and lungs; see Clemens and Tóth 2016; Fediuk et al. 2002: 227, Table 1; Harrison and May 2009; Hediger 2002: 445; Jayathilakan et al. 2012: 281; Kizlaitis et al. 1962; NUTTAB 2010 Electronic Database; O’Dea 1991: 236; Pearson and Gillett 1999: 42). Curiously and perhaps counterintuitively, the stomach contents, at least of caribou, turn out to be a rather poor source of this important micronutrient (Draper 1978: 310; Fediuk 2000: 54).” (Speth, J. D.. 2017)
“In any case, regardless of whether one ate the full array of body parts and fluids separately while still raw, or consumed them mixed together into a thoroughly putrefied mass, there was enough of this precious vitamin to remain healthy. Moreover, as already demonstrated early in the 20th century, one can live indefinitely on a diet containing no fruits or vegetables and no fish or other aquatic foods and still obtain sufficient vitamin C to stave off any symptoms of vitamin C deficiency or scurvy, so long as the animal foods that are eaten are either raw (i.e., fresh, frozen, or putrefied) or only very lightly cooked (Bender 1979; Kizlaitis et al. 1964). Karen Harry and Liam Frink (2009: 334) provide a clear idea of what ‘lightly cooked’ typically meant in high-arctic contexts where fuel was scarce: ‘Modern-day informants report that traditional cooking techniques, still preferred by many today, involve only briefly immersing chunks of meat into hot water and removing them as soon as they have been warmed through or only lightly parboiled….’” (Speth, J. D.. 2017)
“In a classic year-long study, two seasoned arctic explorers, Vilhjalmur Stefansson and Karsten Andersen, lived for an entire year in New York City under close medical observation on a diet that consisted solely of (lightly cooked) beef, lamb, veal, pork, and chicken. The parts they ate included muscle, liver, kidney, brain, bone marrow, bacon, and fat. McClellan and DuBois (1930: 661–662), who supervised the medical component of the experiment, found no evidence of vitamin C deficiency. Stefansson (1935: 183), based both on the outcome of that study and on his years of first-hand experience in the Arctic, concluded that ‘…the human body needs only such a tiny bit of Vitamin C that if you have some fresh meat in your diet every day, and don’t overcook it, there will be enough C from that source alone to prevent scurvy.’ Similar observations were made some years earlier by William Thomas, a medical doctor who examined traditional Greenland Inuit still living on a diet consisting almost entirely of raw meat. Of particular interest are his comparisons between the Greenlanders and more acculturated Inuit living in Labrador: ‘This diet furnishes him [the Greenland Eskimo] with vitamins adequate for protection against scurvy and rickets, while the Labrador Eskimo, whose meat is cooked and whose diet includes many prepared, dried and canned articles, is very subject to both these maladies’ (Thomas 1927: 1560; see also Urquhart 1935: 195).” (Speth, J. D.. 2017)
“In fact, awareness of the antiscorbutic benefits of traditional northern diets is evident long before the discovery of vitamin C. One of the most explicit and insightful of these was by an 18th-century American M.D., John Aiken. Being unaware of micronutrients, he concluded that scurvy was not the inevitable outcome of an all-meat diet, as many of his contemporaries staunchly maintained, not even if the meat was putrid, but of the heavy use of salt as a preservative:
“In a manuscript French account of the islands lying between Kamtschatka and America…I find it mentioned, that ‘the Russians, in their hunting voyages to these islands, (an expedition generally lasting three years) in order to save expense and room in purchasing and stowing vegetable provision, compose half their crews of natives of Kamtschatka, because these people are able to preserve themselves from the scurvy with animal food only, by abstaining from the use of salt.’” (Aiken 1789: 346)
“…it seems to be a fact, that several of the northern nations, whose diet is extremely putrid, (as before hinted with respect to the people of Kamtschatka) are able to preserve themselves from the scurvy; therefore, putrid aliments alone will not necessarily induce it.” (Aiken 1789: 347)
“While it was known already by the mid-18th-century that fresh citrus fruits could both prevent and cure scurvy (Lind 1753), seafaring crews soon discovered that the concentrate made from the juice and used on long sea voyages lost much of its effectiveness as it was being prepared (Kodicek and Young 1969: 46). Thus, more stable alternatives to citrus were clearly needed. Not long after Lind’s discoveries, an interesting alternative was put forth by Charles de Mertans (1778). He found that fermented cabbage (i.e., sauerkraut) possessed an essence or substance that effectively staved off scurvy and, because it was fermented, it could be safely stored for months aboard ships without losing its antiscorbutic potency, thereby finally providing sailors with welcome relief from one of the true scourges of life at sea. De Mertans’s insights played a role in Captain James Cook’s decision to rely heavily on sauerkraut (‘sour-krout’) as an antiscorbutic during his long voyages of exploration in the Pacific, because, in Cook’s own words, it had ‘…the good quality not to loose any part of its Efficacy by Keeping, we used the last of it in September last after having been above two years on board & it was then as good as at the first’ (quoted in Kodicek and Young [1969: 48]; see also Holzapfel et al. [2003: 348, 350, 353] for contemporary information on sauerkraut’s vitamin C content; and Lamb for an interesting overview of the discovery of scurvy’s cause and ultimate cure).” (Speth, J. D.. 2017)
“Stretching things a bit, one might say that northern foragers, when they first began deliberately fermenting meat and fish, had come upon an animal equivalent to Captain Cook’s ‘sour-krout’—an easily prepared, long-lasting antiscorbutic that made life possible in a cold and barren world that was largely devoid of fresh fruits and vegetables. Indulging now in a bit of speculation, if Neanderthal tastes were broadly similar to what we observe cross-culturally among northern foragers of the ethnographic present, such that they had no qualms about consuming the organs, entrails, and body fluids of the animals they procured, and if fermentation (and putrefaction) were normal parts of their culinary repertoire, then, despite their apparent very limited use of marine mammals and fish, it seems rather doubtful that they would have suffered to any significant degree from vitamin C deficiency or scurvy (contra Guil-Guerrero 2017).” (Speth, J. D.. 2017)
FERMENTED VERSUS RAW MEAT AND FISH
He mentions one other point before turning to a closer look at fermentation and putrefaction: “Northern peoples eat quite a bit of their meat and fish raw, sometimes frozen (though usually deliberately partially thawed to achieve a crystalline but not flaccid texture), sometimes dried (occasionally also smoked), sometimes still warm from a freshly killed carcass. In short, raw and fermented/putrefied foods are not components of mutually exclusive dietary systems. Quite the contrary, both are widely used across the width and breadth of the circumpolar regions. And they share a number of valuable features in common. Being uncooked, neither requires much if any use of scarce fuel. And being uncooked, both preserve their precious vitamin C content. And, of course, raw meat and fish can be either dried or frozen and stored much like their fermented counterparts, although it would seem from ethnohistoric and ethnographic descriptions that the steps needed to prepare the raw food for storage, especially by drying, required a greater investment of time and labor, and the outcome was subject to greater risk of spoilage from inclement weather.” (Speth, J. D.. 2017)
“Where raw and fermented foods are most likely to part company, however, is at the point of consumption, digestion, and assimilation—the costs to the forager in time and calories of ingesting (chewing) and metabolizing them (i.e., the diet-induced thermogenesis of the foods). Fermentation, especially when allowed to reach the more advanced states of decomposition or putrefaction, is in many ways akin to cooking the food. Like cooking, fermentation greatly softens the meat and breaks down the component proteins and fats before the food is ingested. Thus, for the high-protein diets so characteristic of northern environments, fermentation and more advanced putrefaction may offer foragers a significant energetic benefit, while at the same time requiring little or no use of fire and fuel.” (Speth, J. D.. 2017)
“Inuit sled dogs provide additional insights into the potential energetic payoffs of consuming food in a partly pre-digested or putrefied state. During the winter, the season when sled dogs endure their heaviest work loads, their average daily caloric needs are estimated to be on the order of 5,000–6,000 kcal (Gerth et al. 2010; Olesen 2014: 233; Orr 1966), a staggering amount when one considers that these energy needs, and the food that has to be procured to fulfill them, are as great as, and at times even greater than, the needs of their human owners (see, for example, Spencer 1959: 142). As succinctly put by Frank Vallee (1962: 39): ‘The average [Inuit] family which had kept one or two dogs before the trapping era would likely have kept from three to six during that era. From the point of view of food consumption, this would be roughly equivalent to adding a few human members to each family in the region.’ To help further contextualize these figures, non-working Alaskan Huskies (average weight 33.4kg or about 74lb), while exposed to arctic winter temperatures with minimal shelter, consumed on average about 2,600 kcal per day (Durrer and Hannon 1962). At the other extreme, Alaskan sled dogs bred for racing, such as those which participate in the famous Iditarod, consume some 10,000 to 11,000 kcal per day (Hinchcliff et al. 1997; Kronfeld 1973; Loftus et al. 2014).” (Speth, J. D.. 2017)
“Thus, under traditional conditions, and most particularly in winter, sled dogs were a double-edged sword for their owners, on the one hand playing an essential role in the foragers’ mobile hunting way of life, but on the other hand requiring a huge and near constant investment of time, effort, and resources just to keep them fed (Anderson et al. 1998: 18; Balikci 1989: 115; 2002: 258; Ekblaw 1928: 3; Hall 1978: 210–211; Krupnik 1993: 54; Leechman 1954: 17; Pike 1892: 55, 136; Prichard and Gathorne-Hardy 1911: 32; Rasmussen 1930: 15, 36; Savishinsky 1975: 474; Smith 1991: 128; Spencer 1959: 142; Tyrrell 1897: 25; Vallee 1962: 38–39; Whitney 1910: 90). As Don Dumond (1980: 41–42) put it: ‘Burdened with their growing array of specialized and necessary tools, the Eskimos were driven to invent first the umiak and then the dogsled, having to make payment for the latter ever after in extensive hunting and fishing directed only at finding dogfood.’ James Urquhart (1935: 195), already in the 1930s, observed what he interpreted as the metabolic boost that hard-working sled dogs might gain by consuming fish that were in a fermented or ‘high’ state rather than fresh.” (Speth, J. D.. 2017)
“Though far from a controlled experiment, he noticed that dogs feeding on a diet of fresh fish lost weight, while those eating fermented fish held their weight or actually gained a little. Other arctic visitors noted what may be an expression of the same phenomenon—sled dogs sometimes refused to eat meat, fish, or fat when it was fresh, clearly preferring the fermented or rotted varieties (e.g., Rae 1850: 126; see also Leechman’s 1950: 132–133 amusing description of the food preferences of Inuit dogs). Urquhart came to the conclusion that the behavior he observed in sled dogs stemmed from the fact that the ‘high’ fish were already partly predigested by the fermentation process, and hence could be ingested and assimilated by the dogs with less effort and at less caloric cost than was the case when the fish were fresh (see Stefansson’s 1920 alternative and, in my view, rather far-fetched way of explaining what appears to be the same behavior in his sled dogs). It is worth noting in this regard that dogs worldwide have no problem consuming carrion (e.g., Gipson and Sealander 1976; Kamler et al. 2003). In fact, free-ranging domestic dogs in rural areas of Zimbabwe have been so successful at locating and scavenging carrion that they are actually outcompeting vultures and, at least in some instances, even hyenas. Vultures seem to be facing similar intense competition from free-ranging and feral domestic dogs in other parts of Africa as well (Butler and du Toit 2002; see also Vanak et al. 2014). ” (Speth, J. D.. 2017)
“Thus, it seems clear that eating fermented or putrefied meat and fish offered important benefits to northern foragers. But so too did eating these same foods raw, whether fresh or frozen. And of course at times northern foragers chose to cook some of their food, although the cooking was almost invariably minimal to an extreme. What we are missing is an understanding of the factors that ultimately determined the relative importance—both seasonally and annually—of fermented versus raw versus cooked meat and fish in these foodways. This is an interesting issue that would be well worth exploring further in the ethnohistoric and ethnographic realm, as it might provide us with a comparative framework that would be helpful for modeling the role played by these alternative means of food preparation in analogous environments in the Paleolithic.” (Speth, J. D.. 2017)
INSIGHTS FROM FOOD SCIENCES AND FORENSICS
Speth begins “by clarifying some confusion that may already have arisen concerning the meaning of terms like ‘fermented,’ ‘spoiled,’ ‘rotted,’ ‘putrid,’ and ‘rancid.’ In popular usage, if someone describes meat as ‘fermented,’ most of us would assume that it is safe to eat, though not necessarily something we would find personally to our liking. However, if a meat or fish product is described as ‘spoiled,’ we would assume it is something we should not eat, and it may, in fact, be unsafe. And there would probably not be any hesitation about what to do with meat or fish that is characterized as ‘rotten’ or ‘putrid.’ Foods in that state of decomposition would be destined for the trash can or compost pile at the earliest opportunity. But this is precisely where things become confusing, the point where cultural values and practices become inextricably mixed together with genuine issues of health and safety. And this confusion is not just in the way these terms are used in day-to-day parlance but spills over into the scientific literature as well. Thus, when a murder victim dies and the body starts to decompose, a forensic scientist would be likely to refer to what was happening as the onset of ‘putrefaction.’” (Speth, J. D.. 2017)
“In contrast, a food scientist dealing with pork sausages at exactly the same stage of decomposition would refer to the process as ‘fermentation’ (compare, for example, Lana and Zolla 2016, and Pittner et al. 2016: 422). And, counterintuitive as it may seem, even meat that is so putrid that it literally reeks and is filled with maggots may nonetheless be perfectly safe to eat, the smell notwithstanding. In other words, just because meat is putrid does not mean it contains unsafe levels of pathogens—‘putrid’ and ‘pathogenic’ are not synonymous. And as will become eminently clear in what follows, many, perhaps most, northern foragers, at least until quite recently, actually considered foods in just that sort of thoroughly putrefied state—maggots and all—as rewarding and delicious. And they appear to have been utterly unfazed by odors that would almost certainly trigger an instant gag reflex among most Westerners.” (Speth, J. D.. 2017) This attitude towards what we consider offensive odors is something I have come across many times in my life, and in reading first-hand accounts of people who arrived at the Cape of Good Hope. They often made mention of the offensive smell of the native people, and clearly the indigenous peoples were not fazed by their own smell. I have therefore long been aware that our judgement of various smells is a learned behaviour, but I never extended this logic to putrid food.
Speth clarifies one additional point before moving on. He uses “the term ‘rancid’ to refer specifically to the degradation of lipids in meat or fish in the presence of oxygen, an ‘autoxidation’ process quite distinct from what happens to lipids that are fermented or putrefied. The ethnohistoric and ethnographic literature frequently conflates these two processes and as a result can be quite confusing, if not downright misleading. Thus, northern foragers were perfectly content to eat meat, fat, and oils that were fermented or putrefied because they knew they were safe and nutritionally beneficial, but they would go to considerable lengths to prevent the fats in meat and especially fish and marine mammals from going rancid, because these could be unsafe to eat or even toxic. In other words, it is important to keep in mind the important distinction between meat and fish that have been fermented, even rotted and full of maggots, from meat and fish in which the fats have become oxidized and rancid.” (Speth, J. D.. 2017)
He turns to the technical literature on fermented meat and notes that he was “not particularly surprised to find that the vast majority of this literature is commercially oriented, with very little attention to homemade products, but he was shocked by its overwhelming focus on sausages. Moreover, the vast majority of these studies assume from the get-go that meat fermentation requires salt, usually lots of it, and often substantial quantities of nitrites (to maintain an appealing reddish coloration). In addition, the focus of these works is almost invariably on products of the northern hemisphere, especially European ones, and to a lesser extent those of the United States and Canada.” (Speth, J. D.. 2017)
“Studies of the fermented meats of East and Southeast Asia, at least those published in languages that he can either read directly or translate online (and be rewarded with results that are not complete gibberish), are rare by comparison. Two entire continents—Africa and South America—for all practical purposes do not exist in much of this literature. And as far as fermentation’s past is concerned, the vast majority of authors simply assert what appears to be little more than inherited wisdom, namely that fermentation saw its beginnings a few millennia ago in the Near East and Egypt with the development of beer, wine, and bread. The possibility that fermented foods—both meat- and plant-based—might have been independently developed, and at an equally early date, in the New World is seldom considered, and the idea that fermentation might have its roots, not among farming peoples of the Neolithic or Bronze Age, but among earlier hunter-gatherers, borders on heresy.” (Speth, J. D.. 2017)
“Needless to say, the authors of the vast majority of these studies appear to be utterly unaware that there are peoples like Inuit and Siberian reindeer herder-hunters who still make extensive use of fermented and putrefied meat and fish, and who have probably done so for centuries, if not millennia; and that there is archaeological evidence that not only takes early fermentation out of the Near East, but pushes its origins back at least 9,200 years to the early Mesolithic (e.g., Boethius 2016).” (Speth, J. D.. 2017)
“The way ‘spoilage’ is defined in much of the commercial fermentation literature is also relevant because it is frequently based more on Western cultural values than on actual threats to health. For example, fermented meat products such as sausages typically are evaluated by panels of experts who deem a product ‘spoiled’ if its taste, odor, texture, or color are ‘off’ by comparison to some mutually agreed upon standard. It is not surprising, then, that Euroamerican explorers, missionaries, traders, trappers, colonial officials, military personnel, and ethnographers—raised in essentially the same Western cultural milieu as these expert panel members—would bring to the Arctic similar cultural values and biases. No wonder they were ‘less than enthusiastic’ about meat and fish that had been deliberately rotted in the ground or in a river or pond for weeks or months, often to the point that they reeked and were crawling with maggots (Diptera fly larvae, probably Hypoderma [Oedemagena] tarandi, Pape 2001; Schrader et al. 2016; Wood 1987; see Guthrie 2005: 6 for an interesting comment about their value as a fat source in the modern Arctic and their likely similar role in the Upper Paleolithic).” (Speth, J. D.. 2017)
“No wonder these visitors were disgusted when their hosts proceeded to eat the maggots right along with the rotten meat, and did so with obvious gusto! To unaccustomed Westerners in their midst, the stench of foods like ‘stinkhead’ (deliberately rotted fish heads) was so overpowering that even the hardiest visitors had to exit the dwelling to vomit, only to be met with laughter when they re-entered their host’s abode (Stefansson 1914: 160).
Aided nowadays by TV, movies, expanding market economies, and intrusive government policies, Western attitudes about food have been rapidly winning out, supplanting the traditional foods and foodways of the north. Younger generations are now embarrassed by ‘stinkhead’ and ‘rotted flipper,’ foods once relished by their elders; and, in deference to the newly acquired ‘Westernized’ tastes of their grandchildren, the elders are abandoning many of their traditional foodways. An unfortunate consequence of the speed with which this transition has taken place is that we are left with little in the way of ‘hard’ data concerning the traditional ways of fermenting and rotting meat and fish (for examples of wonderful exceptions to this generalization, see Frink and Giordano 2015; Jones 2006): How widespread was the practice of fermentation in the past? How similar or different were such practices among coastal Inuit and interior boreal forest groups? How much of traditional Native diet was comprised of fermented foods?
Were foods fermented primarily during certain seasons of the year, or was fermentation employed more or less consistently over the entire year? How were these foods actually fermented or rotted? For example, how big and deep were the pits? How often were seal pokes used rather than pits? When pits were used, what species of plants were selected to line them and what specific properties made them suitable for the purpose? How were the foods placed in the pits? Randomly? Layered? What was the setting of the pits in terms of local topography, hydrology, soils, shade? How much air was allowed to circulate within these pits? How often were foods stored in bogs, ponds, and rivers? In what depth of water? How were the foods anchored so they did not begin to float as fermentation gases began to form? Did they need protection from terrestrial carnivores? From carnivorous fish?” (Speth, J. D.. 2017)
“It is clear that the fermentation process is not a simple matter, but one that required a great deal of expertise and practical experience in order to end up with a product that was safe to eat (Frink 2009). This is clearly shown by the significant rise since the 1970s and 1980s in the incidence of botulism—a potentially deadly illness caused by the toxin of the bacterium Clostridium botulinum. The marked upswing in botulism occurred hand-in-hand with the introduction of plastic bags, glass bottles, and other supposedly more ‘hygienic’ methods of fermenting meat and fish (Chiou et al. 2002; Fagan et al. 2011; Shaffer et al. 1990).” (Speth, J. D.. 2017)
“Cases of botulism remain very rare when these same foods are fermented in below-ground pits or in lakes and ponds using traditional methods (Fagan et al. 2011: 585). The rapid disappearance of fermented and rotted meat and fish from contemporary Native cuisine, coupled with the prevailing Western view that such foods are ‘disgusting’ and presumably unhealthy, if not downright toxic, has created the (misleading) impression that ‘stinkhead,’ rotted meat, and fermented reindeer and ptarmigan stomach contents were at best marginal resources, and more likely served as very minor fall-back or starvation foods that were resorted to when ‘preferred’ foods were in short supply. I suspect this view has spilled over into the archaeological conscience as well, and is part of the reason why Paleolithic archaeologists have paid so little attention to the issue, despite the fact that the environments that were home to Neanderthals and Upper Paleolithic peoples in Pleistocene Eurasia were broadly similar to those inhabited by northern hunter-gatherers of the ‘ethnographic present.’ Even some of the key resources were similar (e.g., reindeer, red deer or North American elk, bison, salmon).” (Speth, J. D.. 2017)
“The difficulty in ‘seeing’ fermented and rotted foods in the archaeological record certainly has not helped matters, but if no one ‘asks the question,’ we will never find out just how important these foods might have been to the lives and lifeways of Paleolithic foragers. Before turning to the ethnohistoric and ethnographic evidence of fermentation and putrefaction among northern latitude peoples, one final point is worth noting here. Food scientists are not the only ones who study fermentation. As already hinted at, that is also what a lot of forensic specialists do. They just use different vocabularies and have quite different ultimate goals (Campobasso et al. 2001; Crippen et al. 2016; Forbes and Carter 2016; Wallace 2016). What the former call ‘fermentation,’ the latter call ‘putrefaction’ or ‘decomposition,’ and, of course, the former are concerned largely about culturally determined standards of taste, texture, color, and smell, while for obvious reasons the latter are concerned, not about palatability or food safety, but about estimating how long a body has been dead—the so-called ‘postmortem interval’ or PMI. But regardless of whether a homicide victim was buried in a shallow grave or dumped in a lake, the decomposition of the body follows more or less the same trajectory that one sees in the rotted meat and stinkhead relied upon by circumpolar hunters and gatherers—i.e., breakdown of proteins, carbohydrates, and lipids through a combination of endogenous and exogenous processes, shifts in pH and available oxygen, growth of lactic acid bacteria, infestation of corpses by Diptera larvae (maggots), and so forth. So, while it may at first seem counterintuitive, archaeologists who seek to develop the middle range theory and methods needed to investigate past reliance on fermented and putrefied meat might benefit greatly by enlisting the help, not just of nutritionists and food scientists, but also of forensic specialists who study the decomposition of corpses. 
In a surprising number of ways, both are studying the same thing.” (Speth 2017)
The close relationship between meat preservation and the morbid art of embalming is further evidence that these matters must be considered in close association, and that throughout history the one field of science has influenced the other, since meat preservation in meat science and the embalming of human corpses both seek to halt the process of putrefaction and decomposition.
ETHNOHISTORIC AND ETHNOGRAPHIC EVIDENCE
Speth gives us a “tour of the ethnohistoric and ethnographic literature to gain some idea of how meat, fish, and fat were fermented or deliberately rotted. Many of these descriptions focus on marine mammals and fish, but there are also quite a few mentions of caribou and reindeer, some hunted by coastal groups, others taken by inland peoples.” (Speth 2017)
“In terms of the Paleolithic, marine mammals and fish were clearly far more important to Upper Paleolithic peoples than to Neanderthals, but both hominins made heavy use of reindeer (and of course other ungulates as well). But perhaps the major message of this section is not which specific animals were utilized, but the process of fermentation or putrefaction itself, the many ways it was accomplished, the times of year when such foods were prepared and when they were consumed, and the state of the final products that were considered desirable as food. Though perhaps a bit lengthy, he asserts that this section is best presented using direct quotes, since the original words of the authors convey much better than he possibly could what was involved in the fermentation process and, of particular importance to the present discussion, the relish and gusto with which these Native populations perceived what to most Western observers were viewed as utterly disgusting, noxious, and quite likely toxic foods. These quotes should make it eminently clear that rotted foods were not merely emergency resources when all else failed; these were routinely used and highly desired foods, readily prepared and easily stored over weeks or months, and pre-digested with little and often no need for cooking.” (Speth 2017)
Speth points out “a few important biases in this potpourri of examples. Probably the most significant one is a direct outcome of what he calls his deficiencies as a linguist. By not being able to read any of the Scandinavian languages, and not faring any better with Russian, he believes his sample from northern Eurasia is minimal at best. He says that someone able to read these languages, particularly as they were written prior to the 19th century, would undoubtedly find a wealth of additional material. There are of course lots of language translators, both free versions online and expensive commercial varieties, but his experience with both types has been frustrating. For him, they remain virtually hopeless when it comes to inferring context and, as a result, often generate output that amounts to little more than gibberish, both grammatically and in terms of content. Moreover, such translators are of little or no help in browsing digital libraries full of lengthy 18th-century travel accounts, only in translating brief snippets of text that we already knew about in advance. For him, his sample is heavily biased toward the North American Arctic, and to sources either written in English or subsequently translated into English by someone fluent in the language.” (Speth 2017)
“Another key source of bias arises from the fact that Euroamerican explorers visited the more sedentary coastal peoples, both Inuit and Northwest Coast groups, far more often than they did the mobile hunting bands living deep within the interior of Alaska and Canada. Moreover, in searching the vast literature available online, the Inuit are by far the easiest to work with, as spellings have morphed only slightly over the last several centuries (e.g., Esquimeau, Esquimaux, Eskimos, Inuit). Names given to Northwest Coast groups are more complex and more varied, and those given to the countless bands of Athabaskan and Algonkian speakers spread across the interior of Alaska and Canada are a veritable nightmare, a bewildering array of referents and orthographies that make it exceedingly difficult to conduct a thorough search of the foodways of any one group.” (Speth 2017)
Speth believes that “the cumulative result of these many biases is a preponderance of examples derived from Inuit groups and a heavy focus on marine mammals and salmon. Interior groups are less well represented, with concomitantly fewer examples of the treatment of terrestrial resources like moose (Alces), caribou, and freshwater fish. In order to provide some semblance of balance in what follows, he decided to order the examples chronologically. This accomplished two things.
“First, it interdigitated examples from coast and interior, so that the reader does not come away with the mistaken view that preparing and eating putrefied meat and fish was largely a coastal phenomenon involving marine resources. And second, by ordering the examples from earliest to latest, one can see that putrefied meat and fish have been an integral part of northern cuisine from the time of first contact to the present, and remain highly valued symbols of the ‘old’ ways to this very day.” (Speth 2017)
“What knowledge they haue of God, or what Idol they adore, wee haue no perfect intelligence. I thincke them rather Anthropophagi, or deuourers of mans fleshe, then otherwise: for that there is no flesh or fishe, which they finde dead, (smell it neuer so filthily) but they will eate it, as they finde it, without any other dressing. A loathsome spectacle, either to the beholders, or hearers.” (Settle 1577: not paginated)
“They will, with so great an Appetite and Greediness, feed upon the rotten and stinking Seal Flesh, that it turns the Stomach of any hungry Man, who looks upon them.” (Egede 1745: 135)
“In Spring and Summer they catch a large Quantity of Fish, and digging Holes in the Ground, which they line with the Bark of Birch, they fill them with it, and cover the Holes over with Earth. As soon as they think the Fish is rotten and tender, they take out some of it, pour Water upon it, and boil it with red-hot Pebbles…and feed upon it, as the greatest Delicacy in the World. This Mess stinks so abominably, that the Russians who deal with them, and who are none of the most squeamish, are themselves not able to endure it.” (Mueller 1761: ix)
“[Feb. 13, 1780]—I was compelled by hunger to try some of the frozen venison of the Eskimos, for I had not a morsel of my own provision left. I was surprised to find how agreeable it tasted, although some of it had a rancid smell, having been killed in summer and remained ever since buried under stones. They had taken the skin off but left the intestines and paunch in the carcass. The Eskimos were extremely pleased to see that I could relish their meat, and said that I would suffer no hunger with them now as they had plenty of Mammuck—that is, stinking meat.” (Taylor and Turner 1969: 145)
“The most remarkable dish among them; as well as all the other tribes of Indians in those parts, both Northern and Southern, is blood mixed with the half-digested food which is found in the deer’s [caribou’s] stomach or paunch, and boiled up with a sufficient quantity of water, to make it of the consistence of pease-pottage. Some fat and scraps of tender flesh are also shred small and boiled with it. To render this dish more palatable, they have a method of mixing the blood with the contents of the stomach in the paunch itself, and hanging it up in the heat and smoke of the fire for several days; which puts the whole mass into a state of fermentation….” (Hearne 1795: 316–317)
“The stomach of no other large animal beside the deer [caribou] is eaten by any of the Indians that border on Hudson’s Bay. In Winter, when the deer feed on fine white moss, the contents of the stomach is so much esteemed by them, that I have often seen them sit round a deer where it was killed, and eat it warm out of the paunch.” (Hearne 1795: 317–318)
“About twelve we also observed an Indian walking along the North-East shore, when the small canoes paddled towards him. We accordingly followed, and found three men, three women, and two children, who had been on an hunting expedition. They had some flesh of the reindeer, which they offered to us, but it was so rotten, as well as offensive to the smell, that we excused ourselves from accepting it.” (Mackenzie 1814: 42)
“Our hunters found their canoe and the fowl they had got, secreted in the woods; and soon after, the people themselves, whom they brought to the water side. Out of two hundred geese, we picked thirty-six which were eatable; the rest were putrid, and emitted a horrid stench. They had been killed some time without having been gutted, and in this state of loathsome rottenness, we have every reason to suppose they are eaten by the natives.” (Mackenzie 1814: 86)
“Several of them had bags of blubber, mixed with half-putrid half-frozen flesh; these they offered for sale with great eagerness, and appeared very much surprised that they got no purchasers. Being anxious to examine their contents, I was induced to buy one; on opening it, however, such a shocking stench proceeded from it, that I very cheerfully restored it to the original possessor. I had no sooner returned it to him, than applying the open extremity to his mouth, took a drink from it, licked his lips, and laid it aside very carefully.” (M’Keevor 1819: 33)
“When the Esquimaux visit us from the tent, they generally go to the spot where the carcases of the whales are left to rot after the blubber is taken, and carry away a part, but generally from the fin or the tail; they have been known, however, to take the maggots from the putrid carcase, and to boil them with train oil as a rich repast.” (West 1824: 173)
“The spawn of the salmon, which is a principal article of their provision; they take out, and without any other preparation, throw it into their tubs, where they leave it to stand and ferment, for though they frequently eat it fresh, they esteem it much more when it has acquired a strong taste, and one of the greatest favors they can confer on any person, is to invite him to eat Quakamiss, the name they give this food, though scarcely any thing can be more repugnant to a European palate, than it is in this state; and whenever they took it out of these large receptacles, which they are always careful to fill, such was the stench which it exhaled, on being moved, that it was almost impossible for me to abide it, even after habit, had in a great degree dulled the delicacy of my senses.” (Jewitt 1849: 88)
“…I have frequently known them when a whale has been driven ashore, bring pieces of it home with them in a state of offensiveness insupportable to any thing but a crow, and devour it with high relish, considering it as preferable to that which is fresh.” (Jewitt 1849: 97)
“Their manner of preserving their meat is quite characteristic. When an animal is killed the bowels are extracted, then the fore and hind quarters are cut off, and being placed inside the carcass, are secured by skewers of wood run through the flesh. The whole is then deposited under the nearest cleft of rock, and stones are built round so as to secure it from the depredations of wild animals until the hunters return to the coast; when the meat is in high flavour, and considered fit for the palate of an Esquimaux epicure.” (McLean 1849: 140)
“It is well known that both Esquimaux and Indians are very fond of the contents of the paunch of the rein-deer, particularly in the spring, when the vegetable substances on which the animal feeds are said to be sweeter tasted. I have often seen our hunter, Nibitabo, when he had shot a deer, cut open the stomach, and sup the contents with as much relish as a London alderman would a plate of turtle soup.” (Rae 1850: 150)
“They [Carrier] all prefer their meat putrid, and frequently keep it until it smells so strong as to be disgusting. Parts of the salmon they bury under ground for two or three months to putrefy, and the more it is decayed the greater delicacy they consider it.” (Wilkes 1851: 452)
“De la mi-juin à la mi-juillet, les Tchiglit se livrent à la pêche du hareng, du poisson blanc et de l’inconnu, dans les innombrables chenaux du Mackenzie. Ils conservent le poisson qu’ils ne consomment pas, soit en l’exposant à la fumée d’un petit feu, soit en le mettant en saumure dans des outres pleines d’huile de marsouin qu’ils suspendent à des arbres. Il ne se peut concevoir d’odeur semblable à celle qui s’exhale de ces vaisseaux, lorsque les Esquimaux les ouvrent pour en déguster le contenu. Toutefois, il m’a paru que ces poissons crus et rouges de fermentation doivent être un excellent mets, tant nos Tchiglit les mangent avec voracité.” [From mid-June to mid-July, the Tchiglit devote themselves to fishing for herring, whitefish, and inconnu in the innumerable channels of the Mackenzie. The fish they do not eat they preserve either by exposing it to the smoke of a small fire, or by putting it up in brine in skin bags full of porpoise oil which they hang from trees. No odour can be conceived like the one that escapes from these vessels when the Eskimos open them to taste the contents. Nevertheless, it seemed to me that these raw fish, red with fermentation, must be an excellent dish, so voraciously do our Tchiglit eat them.] (Petitot 1876: 12)
“When these fish are caught, they are put into a seal-skin bag, and it remains tied up till the whole becomes a mass of putrid and fermenting fish, about as repulsive to taste, sight, and smell as can be imagined.” (Kumlien 1879: 20)
“With an axe the rib pieces were soon severed from the back-bone, and then from the inside of these the natives cut strips with their sheath-knives, and handed me a chunky morsel from the loin, as breakfast. I bit into it without any ceremony, while the dogs clamored frantically for a share. So long as it remained frozen the meat did not exhibit the vile extent of its putridity; but directly I had taken it into my mouth it melted like butter, and at the same time gave off such a disgusting odor that I hastily relinquished my hold upon it, and the dogs captured it at a single gulp. The natives first stared in genuine astonishment to see me cast away such good food to the dogs, and then burst forth into hearty laughter at my squeamishness. But I was not to be outdone, much less ridiculed, by a Yakut, and so ordered some more, perhaps a pound of the stuff, cut up into little bits. These I swallowed like so many pills, and then gazed on my Yakut friends in triumph; but not long, for in a little while my stomach heated the decomposed mess, an intolerable gas arose and retched me, and again I abandoned my breakfast,—my loss, however, becoming the dogs’ gain. At this the natives were nearly overcome with mirth; but I astonished them by my persistence, requesting a third dose, albeit the second one had teemed with maggots; and, swallowing the sickening bits as before, my stomach retained them out of pure exhaustion.” (Melville and Phillips 1885: 226–227)
“But the ‘loudest’ feast of these savages consists of a box, just opened, of semi-rotten salmon-roe. Many of the Siwashes have a custom of collecting the ova, putting it into wooden boxes, and then burying it below highwater mark on the earthen flats above. When decomposition has taken place to a great extent, and the mass has a most penetrating and far-reaching ‘funk,’ then it is ready to be eaten and made merry over. The box is usually uncovered without removing it from its buried position; the eager savages all squat around it, and eat the contents with every indication on their hard faces of keen gastronomic delight—faugh!” (Elliott 1887: 56–57)
“Ikwa…returned in a jubilant frame of mind, and announced his discovery of a cached seal. He asked Mr. Peary if he might bring the seal to Redcliffe in the boat, saying it was the finest kind of eating for himself and family. We could not understand why this particular seal should be so much nicer than those he had at Redcliffe; but as he seemed very eager to have it, we gave him the desired permission, and off he started, saying that he would be back very soon. About half an hour later the air became filled with the most horrible stench it has ever been my misfortune to endure, and it grew worse and worse until at last we were forced to make an investigation. Going to the corner of the cliff, we came upon the Eskimo carrying upon his back an immense seal, which had every appearance of having been buried at least two years. Great fat maggots dropped from it at every step that Ikwa made, and the odor was really terrible. Mr. Peary told him that it was out of the question to put that thing in the boat; and, indeed, it was doubtful if we would not be obliged to hang the man himself overboard in order to disinfect and purify him. But this child of nature did not see the point, and was very angry at being obliged to leave his treasure. After he was through pouting, he told us that the more decayed the seal the finer the eating, and he could not understand why we should object. He thought the odor ‘pe-uh-di-och-soah’ (very good).” (Diebitsch-Peary 1894: 59–60)
“On one occasion I objected to some fish which an old man brought into the lodge as not being fresh enough, and made signs to that effect, chiefly with the aid of my nose. The old man went away and brought some more which were far worse. On these being rejected he beckoned me to come with him, and leading me to a swampy spot at the back of his barrabora pointed out what I took to be a newly made grave. I made signs of interrogation and deep sympathy, whereupon he scraped away the loose earth with a fish spear and lifted a board which covered the top of the pit. I fully expected to see the body of a dearly beloved relative, and experienced nearly as great a shock when I found the pit was filled to the brim with a seething mass of rotten salmon. The old fellow’s next signs I fully understood; they were to the effect that if I wanted something really good I must give him more than the usual amount of tobacco leaves, and I began to realise that he had misunderstood my sign language and thought I was objecting to his fish because they were too fresh.” (Pike 1896: 258)
“Some of them [Big Bellies] invited us to their huts to eat, in expectation of receiving a bit of tobacco, but we found it impossible to taste their dried meat; it was so nearly putrid that the pieces would scarcely hold together. This, however, is entirely to their liking; they seldom use meat till it is rotten; they keep it in their huts, unexposed to the air, till it is almost impossible for a stranger to remain indoors on account of the stench arising from putrefaction.” (Coues 1897: 356–357)
“The chief delight of the Indians is in the parts of the fish that are usually discarded by white people. For instance, they line a pit with the big leaves of the skunk cabbage, which grow here to enormous size. Then they fill the space with the heads of the hump-backed salmon, and, lining the top over nicely with more skunk cabbage leaves, they cover it all over with dirt. Now this would be a very commendable way of disposing of the salmon heads if they were satisfied to leave them covered up, but they are not. After about six weeks they give a grand potlatch, which continues as long as this delicate and artistic creation holds out.” (Maris 1897: 6)
“Meat is frequently kept for a considerable length of time and sometimes until it becomes semiputrid. At Point Barrow, in the middle of August, 1881, the people still had the carcasses of deer which had been killed the preceding winter and spring. This meat was kept in small underground pits, which the frozen subsoil rendered cold, but not cold enough to prevent a bluish fungus growth which completely covered the carcasses of the animals and the walls of the storerooms.” (Nelson 1899: 267)
“In the district between the Yukon and the Kuskokwim, the heads of king salmon, taken in summer, are placed in small pits in the ground surrounded by straw and covered with turf. They are kept there during summer and in the autumn have decayed until even the bones have become of the same consistency as the general mass. They are then taken out and kneaded in a wooden tray until they form a pasty compound and are eaten as a favorite dish by some of the people. The odor of this mess is almost unendurable to one not accustomed to it, and is even too strong for the stomachs of many of the Eskimo.” (Nelson 1899: 267)
“Their food is largely salmon, though seal, beluga, and walrus also enter their diet when they can be obtained, and occasionally a deer or moose is taken. Their food is all preferred ‘high’—not high in the sense of the epicure, but rotten; rancid oil is generally cooked with it or used for sauce. The decaying carcass of a whale cast on the beach attracts the natives for many miles, and a grand feast is held over it; rotten salmon heads are a bonne bouche.” (Moser 1902: 178)
“As soon as the salmon come into this lake, they go in search of the rivers and brooks, that fall into it; and these streams they ascend so far as there is water to enable them to swim; and when they can proceed no farther up, they remain there and die. None were ever seen to descend these streams. They are found dead in such numbers, in some places, as to infect the atmosphere, with a terrible stench, for a considerable distance round. But, even when they are in a putrefied state, the Natives frequently gather them up and eat them, apparently, with as great a relish, as if they were fresh.” (Harmon 1903)
“The Carriers cut off the heads of salmon, and throw them into the lake, where they permit them to remain a month, or at least until they become putrefied. They then take them out, and put them into a trough, made of bark, filled with water. Into this trough they put a sufficiency of heated stones, to make the water boil for a time, which will cause the oil to come out of the heads of the salmon, and rise to the top of the water. This they skim off, and put into bottles made of salmon skins; and they eat it with their berries. Its smell however is very disagreeable; and no people would think of eating it excepting the Carriers.” (Harmon 1903: 284)
“Although it would be easy to construct storerooms in the frozen ground, in which meat could be preserved in good condition, the Chukchee are satisfied with their ill-protected cellars, in which the provisions soon begin to become putrid. Therefore both the Reindeer and the Maritime Chukchee live on putrid meat throughout the summer and part of the winter.” (Bogoras 1904: 195)
“When the Reindeer people drive their herds to the summer pastures, ten or fifteen animals are slaughtered for summer provisions. The meat is simply hung in the tent, where it is easily accessible to carrion-flies and other insects. After several days it is placed in a pit dug in the centre of the tent, and covered with sod. Later on, when the pit is opened, the stench is so strong that it is disagreeable to the natives themselves, who avoid staying in the tent until the pit is covered over again.” (Bogoras 1904: 195)
“Among the upper tribes, particularly among the Déné, where the salmon are not taken in such quantities as nearer the mouth of the Fraser, the heads are always carefully preserved for making oil. They are strung on willow rods and deposited in the water on some sandy shore of the lake or stream, where they remain till they have reached an advanced stage of decay. When ‘ripe’ they are gathered up and placed in large trough-like receptacles, and boiled by means of the usual heating-stones. During the boiling the oil rises from them to the surface and is skimmed off into birch-bark buckets, and afterwards stored away in bottles made from the whole skins of the salmon.” (Hill-Tout 1907: 94–95)
“We travelled on till 5.30 that evening, and camped under a conspicuous hill at what is called by the natives ‘The Canoe Portage.’ Here the dogs started off at a run, much to our surprise, making us think that a bear or some deer were in the neighborhood; but this was not the case: they had scented a dead whale, which must have been on the coast for five years at least, and soon proceeded to dig under the snow. The natives took them out of their harness, and proceeded to put up the tents, not knowing what was buried there. While this was being done the dogs made several holes down in the snow, and tried in vain to tear away some portion of blubber or meat from the carcass. The natives soon got to work with their knives, and cut off great quantities of putrid flesh, which, though frozen, stunk with so bad an odour that I found it most repulsive; they then proceeded to eat great quantities of it, and seemed to enjoy it as much as the dogs did. I could not join in this feast, but retired into my tent to enjoy a meal of tea, which greatly refreshed me.” (Harrison 1908: 241–242)
“The Carriers had other delicacies, among which half putrid salmon roe and a most stinking oil extracted from the same fish held a prominent place. The wayfarer through their country cannot help observing the many cavities, evidently of an artificial origin, that dot the immediate vicinity of their streams or the outskirts of their ancient village sites. They are the pits wherein the fish roe was formerly deposited and covered up with earth for a twelvemonth or so. At the expiration of that time, it was considered sufficiently done, and consumed, raw or cooked, generally with preserved berries. In this advanced stage of putrefaction it was deemed the most dainty morsel imaginable, though Harmon is inconsiderate enough to declare that it then fills the air with a terrible stench, and even to a considerable distance. This is certainly true of their nasty fish oil, and I heartily concur in his statement that ‘a person who eats this food, and rubs salmon oil in his hands, can be smelt in warm weather, to a distance of nearly a quarter of a mile’.” (Morice 1909: 597)
“In eating raw fish today, only slightly high, I could barely smell it in the tent, Anderson (Kotzebue) had to go out and throw up what he had eaten, at which the other five (some Kogmollik, some Nunatama) who were eating in our tent laughed very much. When he came back he told me that his people never eat raw fish unless it is well rotted. The Kogmollik and Nunatama prefer it a little rotten, but are fond of it in all stages from fresh from the net to a cheesy consistency. His people bury fish in the ground to rot it.” (Stefansson 1914: 160)
“The Arctic ‘salad,’ which seems to be favoured more in winter, when no vegetable food has been seen for months, is the first stomach or rumen of the caribou when it happens to be filled with freshly-chewed reindeer-moss or Cladonia lichens. This is frozen whole and sliced off very thin, the gastric juice supplying the acid, and a liberal mixture of seal-oil the salad dressing. The caribou stomach is seldom eaten except when filled with the succulent reindeer-moss, and when it contains woody grass-fibre is usually discarded. This food may properly be classed as ‘pre-digested,’ and under certain extenuating circumstances, such as a trail appetite, a long siege of one-course rations of meat, anything ‘different’ may have some attractions, but few white men venture to experiment with it.” (Anderson 1918: 64)
“Usually, when parties leave for the summer deer hunting, the old people and some of the children stay at the tide water at the mouth of a good salmon river, and put all the fish they take under piles of rocks. The earlier caught become very soft and rotten, and are used in winter as dog feed, though the natives enjoy a feed of this ‘for a change,’ just as they do rotten walrus or seal meat, which in white nostrils smells to high heaven.” (Munn 1922: 271)
“…lann = Fleisch, das im Wasser aufbewahrt worden ist, vor allem Wildrenfleisch im Sommer. Dadurch hat das Fleisch einen säuerlichen Geschmack erhalten und wurde als eine Leckerei angesehen.” […lann = meat that has been kept in the water, especially wild reindeer meat in the summer. This gave the meat a sour taste, and it was regarded as a treat.] (Väinö Tanner; cited in Pohlhausen 1953: 988; English translation mine)
“Right alongside the spot where we pitched our camp we found an old cache of caribou meat—two years old I was told. We cleared the stones away and fed the dogs, for it is law in this country that as soon as a cache is more than a winter and a summer old, it falls to the one who has use for it. The meat was green with age, and when we made a cut in it, it was like the bursting of a boil, so full of great white maggots was it. To my horror my companions scooped out handfuls of the crawling things and ate them with evident relish. I criticised their taste, but they laughed at me and said, not illogically: ‘You yourself like caribou meat, and what are these maggots but live caribou meat? They taste just the same as the meat and are refreshing to the mouth.’” (Rasmussen 1931: 60)
“In the beginning of my educational and medical service among the western Eskimos I witnessed them eating such rotten foods as would kill a white man in short order. When they began digging their vitiated surplus meats from the caches and snow banks, when their diet consisted of rotten fish and rancid fat, I would suffer many qualms as to the probable outcome. Invariably I would provide my medicine kit with items for combating and treating ptomaine poison, and always, to my utter astonishment, they would eat those rotten poisonous foods and thrive on them. Lest the reader might think that the cooking process would destroy the poisons in their vitiated foods, I wish to say that in only a few instances did they cook their food.” (Garber 1938: 245)
“Caribou meat and fish are eaten raw, and are preferred when they are rotten; caribou and fish heads are boiled. The caribou liver is allowed to ferment inside the moss-filled caribou stomach under a hot sun for some days before being eaten….” (Sinclair 1953: 72)
“Decayed fish were not eaten during the warm weather; they were not considered good until frozen. As soon as the freeze-up came, they began to be used as delicacies, sometimes as whole meals. The only way of serving decayed fish was to allow them to thaw in the house until they were as soft as hard ice cream, when they were eaten somewhat as a child would consume an ice cream cone.” (Stefansson 1960: 36)
“A delicacy for them [Nganasan] was the meat of wild reindeer that had been left for a time where they were caught without being disemboweled. Such meat soon became putrid and had an unpleasant taste….” (Popov 1966: 111)
“…in autumn the Karelians buried wild reindeer meat in a swamp and left it there until winter came…and the Kola Lapps threw young reindeer into a lake or river, where they were left until they had a slightly ‘bad smell’.” (Eidlitz 1969: 108)
“Interviews with 12 elderly Yupik [Inuit] indicated that fish heads were traditionally fermented in clay pits dug in the ground; families used the same pit each year…. The Athabascans [non-Inuit Native Americans] ferment fish by floating a string of fish heads in the river for one to two weeks.” (Shaffer et al. 1990: 392; see also Wainwright
“We now saw these people again at their own village, and what a smelly place it was!—racks and racks of dried fish, hundreds of salmon heads strung up, and sacks full of salmon eggs rotting and oozing out. They buried these eggs in pits to keep for dog feed in the winter, they told us, but I believe this was to ferment them for human consumption, just as we process and enjoy certain strong cheeses. Since the Indians were sensitive to possible ridicule from white people, these people probably wanted us to think that this smelly treat was only for the dogs. The last year’s pits were open and full of putrid water and maggots.” (de Laguna 2000: 287)
“The walrus is a valued source of traditional food, prized for its meat which is often fermented for several months inside the skin of the walrus buried in the ground.” (Fediuk 2000: 54)
“Along the Arctic shore, Fermented Stink Heads were made by digging a hole in the sand and lining it with acidic leaves. Ideally, the hole was filled with salmon heads and one- or two-day-old salmon eggs, preferably from vigorous humpies or pinks caught on their way upriver to the spawning grounds. The heads were either individually wrapped or simply layered with wild celery leaves and/or dried eelgrass. They were then covered with more acidic leaves and topped with wood or dirt, and the contents left to ripen for two weeks to two months.” (Spray 2002: 38)
“The second method for procuring fermented foods was as a by-product of hunting. Herbivorous animals frequently ate lichens and grasses toxic to humans, but after the plants had decomposed or fermented in the animal’s stomach, aided by the acidic digestive juices, the stomach contents were perfectly safe for human consumption. These naturally fermented foods came from the stomachs of freshly killed moose and caribou, or from ptarmigan intestines. With a slightly sweet, earthy taste, they were a hunter’s reward, and very welcome.” (Spray 2002)
“…the owner also kept fish heads under water by hanging them on a line attached to the cutting table, one step in preparing a fermented fish dish commonly called ‘stink heads.’” (Fall et al. 2010: 61)
POND STORAGE: THE DANIEL FISHER EXPERIMENTS
“The ethnohistoric and ethnographic literature clearly shows that meat and fish can be preserved for many months by fermentation and putrefaction. Two of the technologically least complex and most widely used approaches were to bury the food in a shallow pit or immerse it in a bog, lake, or river. And while in both types of storage putrefaction of the final product was often so advanced that it was viewed by Westerners as disgusting and probably dangerous to eat, such foods in fact were not only free of serious pathogens, but were highly esteemed by the foragers who depended on them.” (Speth 2017)
“From the examples highlighted above, one might easily come away with the impression that storage of meat and fish by putrefaction only works in arctic and subarctic environments. However, the many references provided earlier show that rotted meat also was frequently eaten by Hadza, San (Bushmen), Maori, and other hunter-gatherers across a wide range of habitats and environments, and done so with a degree of gusto and impunity not unlike what we have seen among Inuit and their boreal forest neighbors. I think the major reason such techniques are so prominent in the northlands stems from the fact that they provide a low-cost solution to problems that are especially critical in the higher latitudes: (1) they reduce the high metabolic costs of an all-meat diet by softening the food and ‘pre-digesting’ the protein and fat without the need for cooking and hence without the need for scarce fuel; (2) they allow fatty meat and fish to be effectively preserved for long periods of time in areas where frequent damp, cloudy, or rainy weather precludes rapid and thorough drying of fatty meat and fish; and, (3) they offer a means of preserving critical vitamins, especially vitamin C, that are not available from other types of foods such as fruits and vegetables, and that would otherwise be degraded or lost through cooking, as well as through traditional methods of food storage such as drying and smoking.” (Speth 2017)
“That deliberately putrefied meat can provide healthy nourishment, with little or no fear of infection from pathogenic bacteria, far beyond the arctic and subarctic has been clearly demonstrated by a unique set of experiments carried out in the state of Michigan (USA) by Daniel Fisher (1995), a paleontologist at the University of Michigan in Ann Arbor. Fisher began his experiments by caching mostly small, uncooked leg units of lamb during the fall of 1989 in shallow ponds and bogs in the southern part of the state. This part of the Midwestern USA is characterized by hot humid summers (average high in July: 28°C/83°F) and short, moderately cold winters (average low in January: –8°C/18°F). Given Fisher’s ultimate interest in late Pleistocene megafaunal extinctions, he decided to expand these initial experiments to look at the feasibility of storing meat from a much larger animal under water. Thus, in the early 1990s he began pond-caching partially disarticulated carcass units of a 680kg (1,500 lb) draft horse. He put these units into a shallow pond during the winter by inserting them through holes chopped in the ice and then anchored them to the bottom using sediment-filled intestines. At regular intervals, he collected representative samples of meat and checked them for odor, physical condition, and bacterial content.” (Speth 2017)
“The results of these experiments are very informative. The meat retained an essentially fresh appearance, with low total bacterial counts, until spring, when it began to turn more acidic and distinctly ‘cheesy’ smelling. Fisher attributes the change to increasing activity of lactic acid bacteria (LAB), particularly members of the lactobacilli. By April, algae began to cover the exterior of the butchery units, but beneath the outermost surface both meat and fat were still entirely edible, though sour-smelling. The meat became increasingly ‘cheesy’ and sour-tasting by June, but remained free of pathogens. Finally, in July and August the meat, while still safe to eat, began to disintegrate. Needless to say, disintegration, and hence loss of meat, posed less of an issue for the carcass units that Fisher cached in bog settings, and I would imagine that the same would be true for meat cached in below-ground pits.” (Speth 2017)
“The takeaway message from Fisher’s experiments is that ‘pond storage’ (i.e., storage in water, whether in a bog, pond, lake, or river) is a relatively low-tech, yet safe way of caching meat and fish at strategic points on the landscape, one that can be used over a wide range of environments, including regions far removed from the arctic and subarctic, and over much of the year, not just in winter when the pond or river is covered over by ice.” (Speth 2017)
THE DISGUST RESPONSE
“Judging by the countless ethnohistoric and ethnographic descriptions of the relish with which hunter-gatherers—especially in the northern latitudes, but in many other parts of the globe as well—ate putrid meat, fish, and fat, it seems quite clear that the intense disgust that Euroamerican adults typically display to both the odor and sight of such foods (revulsion, nausea, facial expressions of disgust) is in large measure a culturally learned response, not a human universal that comes hard-wired in all of us at birth. In fact, a large number of studies, beginning with the classic work of psychologist Paul Rozin, show that children probably do not begin to acquire the classic disgust responses that we associate with the sight and smell of rotten and maggoty meat until as late as the age of five or even later (see Herz 2012: 46–47; Rozin et al. 2008: 765; Widen and Russell 2010). And up to at least the age of seven children commonly misinterpret as anger the adult facial expressions connected with disgust (Widen and Russell 2010). Anthony Synnott (1991) takes an interesting look at the issue of smell from a sociological perspective, providing a fascinating and wide-ranging overview of the many subtle and complex ways in which our perceptions of odors—good versus bad, pleasing versus foul—have come to be inextricably interwoven into the very fabric of our culture, playing fundamental roles in demarcating and maintaining ethnic identities, economic statuses and social classes, racial categories, moral and ethical valuations, and numerous other aspects of our day-to-day economic and social lives. Such studies make it clear that we learn the disgust response to substances like rotten meat, we are not born with it. It should come as no surprise, then, that children in traditional arctic and subarctic foraging societies were brought up with a set of cultural values very different from our own, ones in which their taste and smell preferences were closely and compatibly aligned with foodways that were nutritionally sound, energetically realistic, and fuel-sparing in resource-poor high-latitude environments that were arguably among the most difficult ones on the planet for members of our species to successfully colonize.” (Speth 2017)
“The results of these studies are interesting from an evolutionary perspective. If the disgust response to a substance like rotten meat is not fully in place at the time of weaning, a young child’s point of greatest vulnerability to orally introduced vectors of diarrheal and other diseases, this calls into question the conventional wisdom that the disgust response evolved as a way to protect the youngster from putting pathogenic substances in his or her mouth (Rottman 2014). It is also important to keep in mind that the sharp upswing in botulism cases in the arctic since the 1970s and 1980s is largely an outcome, not of traditional methods of rotting foods in pits, bogs, lakes, and seal pokes, but of the introduction by Euroamericans of supposedly more ‘hygienic’ means of fermenting these foods using sealed glass bottles and plastic bags placed in more ‘sanitary’ (and often above-ground) locales. Bottom line: for hunters and gatherers living unacculturated lifestyles, contamination of rotted meat and fish by pathogenic bacteria seems not to have posed much of a health hazard, and for children raised in the traditional foodways of their culture, the smell would have been largely if not entirely irrelevant. As a 19th-century Inuit informant put it: ‘we don’t eat the smell’ (Fienup-Riordan 1988: 11).” (Speth 2017)
“There is one thing that has become strikingly clear in attempting to pull together the many threads that form the heart of this paper. Regardless of disciplinary focus, be it modeling early hominin scavenging in archaeology, or evaluating the safety and acceptability of fermented sausages in the food sciences, or attempting to account for the origin and function of the disgust response in psychology, specialists in these and other fields commonly operate with the same basic underlying assumption—that putrid meat is inherently unsafe to eat because of the (presumed) toxicity created by pathogens such as Clostridium botulinum. One can see the power and pervasiveness of this assumption in the opening paragraph of a fairly recent article on the microbiome of New World vultures (Roggenbuck et al. 2014).” (Speth 2017)
“Citing but a single reference for support (Reed and Rocke 1992), one that is focused heavily on recent outbreaks of botulism in waterfowl, these authors draw the sweeping generalization that:
“The microbiota of vertebrates rapidly begin to decompose their hosts after death. During the subsequent breakdown of tissue, these microorganisms excrete toxic metabolites, rapidly rendering the carcass a hazardous food source for most carnivorous and omnivorous animals.” (Roggenbuck et al. 2014: 1)
The hunter-gatherer literature clearly shows this sort of blanket assertion to be false. If this paper has accomplished nothing else, the reader should at least by now be convinced that rotten meat, even meat that reeks and is filled with maggots, may nonetheless be entirely safe to eat, and that even in an advanced state of decay such meat is (or was until recently) viewed by many traditional foraging peoples as a very desirable food. In short, just because meat is rotten does not automatically mean it is hazardous. The toxicity of a decomposing carcass must be demonstrated, not assumed. Despite the hunter-gatherer evidence, not to mention the tremendous advances in the fields of nutrition, food science, and microbiology, the biases expressed in Roggenbuck et al.’s (2014: 1) opening paragraph remain steadfastly entrenched.” (Speth 2017)
“Nothing exposes the tenacity of this bias more clearly than the work of William Savage (1921: 83), who nearly a century ago conducted a series of experiments in which he fed putrid meat to kittens. Though his results were not entirely concordant, Savage concluded that ‘a study of the evidence…singularly fails to bring forward any evidence associating the consumption of food in a state of incipient putrefaction with illness in those who consume it.’
Concerning the massive outbreaks of botulism in water birds cited by Reed and Rocke (1992), there is a growing body of literature implicating pollution and eutrophication of water bodies as likely sources of the problem, with toxic algal blooms, nitrate-rich runoff from surrounding farmlands, contamination by sewage and garbage, and use of effluent from wastewater treatment plants to re-establish wetlands emerging as prime suspects (Anza et al. 2014; Murphy et al. 2000). It is a mistake to jump from these altered and often polluted circumstances to the universal generalization that rotten meat, at all times and in all places, is inherently dangerous.” (Speth 2017)
“But this is not to deny that botulism can at times be a very serious problem. Hence, what we need is a better understanding of the specific constellation of circumstances in which meat on a decaying carcass remains safe to eat (setting aside cultural preferences of taste and smell), versus the conditions in which such meat is likely to become laced with potentially deadly pathogens. The latter is clearly not an inevitable outcome. Moreover, judging from ethnohistoric and ethnographic sources, contamination by pathogens is not even the inevitable outcome of decomposition in warmer climes. This is shown by reports of hunter-gatherers and small-scale subsistence farmers in many parts of sub-Saharan Africa consuming very ‘ripe’ meat, as for example the putrid meat and blubber retrieved by Bushmen (San) from beached whale carcasses along the Namibian and South African coast, or the rotten meat scavenged by Hadza from lion kills in Tanzania. Thus, there is a real need for further research to help identify the specific constellation of temperature, humidity, air circulation, exposure, and condition of the carcass, as well as the health and physical condition of the animals themselves before they died, that together would foster the proliferation of Clostridium botulinum spores in putrefying carcasses.” (Speth 2017)
“In any case, our revulsion at the sight and smell of rotten meat, and our steadfast belief that such meat is hazardous, are very likely the offspring of our own Eurocentric cultural biases. These biases are reinforced by our lack of first-hand experience as subsistence hunters, and are passed on from generation to generation as a form of inherited wisdom, kept alive, despite our training as scientists, by our failure to look beyond cultures and contexts that fit comfortably within our own familiar value systems. It is a cultural bias, whether applied to human foodways or to the dietary habits of vultures, a bias that would be easier to recognize and address if there were more effective cross-disciplinary communication.” (Speth 2017)
Speth is adamant that he is “not suggesting that humans are lacking in any sort of universal capacity to react to unpleasant elicitors with a response that psychologists would classify as disgust. There certainly seems to be no shortage of literature suggesting that they do. What he is questioning here is the supposed universality of the elicitors of such responses, even the so-called ‘core’ elicitors (Rozin et al. 1999: 433), not the capacity itself. As we have already discussed at length, the sight and smell of putrid, maggoty meat and fish are clearly not universally viewed as disgusting. But other ‘core’ disgust elicitors which psychologists often treat as though they were universal—the sight, touch, and smell of feces and urine being prime examples—are just as culturally contingent as putrid meat, as a perusal of the cross-cultural ethnohistoric literature quickly shows.” (Speth 2017)
“Looking first at feces, what immediately comes to mind are the classic observations of Johann Jakob Baegert, a Jesuit priest who served for 17 years (1751–1768) as a missionary among the hunter-gatherers of Baja California. It is fair to say that Baegert was genuinely disgusted by the natives’ habit, not just of touching their own feces, but of systematically collecting them in order to retrieve and consume the abundant tiny seeds that had passed undigested through their system after ingesting large quantities of pitahaya (cactus) fruit:
“…I mentioned that the pitahayas contain a great many small seeds, resembling grains of powder, which for reasons unknown to me are not consumed in the stomach but passed in an undigested state. In order to use these small grains, the Indians collect all excrement during the season of the pitahayas, pick out these seeds from it, roast, grind, and eat them with much joking. This procedure is called by the Spaniards the after or second harvest! Whether all this happens because of want, voracity, or out of love for the pitahayas, I leave undecided. All three surmises are plausible and any one of them might cause them to indulge in such filthiness. It was difficult for me, indeed, to give credit to such a report until I had repeatedly witnessed this procedure.” (Baegert 1952: 68; for the original German text, see Baegert 1773: 119–120)
Homer Aschmann (1959: 77) provides additional details about the ‘second harvest’ in Baja California, noting that other Jesuits, not just Baegert, had observed—apparently with equal disgust—this native practice, and that at least one unsuspecting early missionary to the region—Father Francesco María Piccolo—had inadvertently eaten seeds obtained in this manner.” (Speth 2017)
“Prehistoric pits filled with human feces (coprolites) have been found in a number of dry caves in the Great Basin of the western United States. David Thomas (1985: 380–381) discusses these so-called ‘latrines’ at length, noting that in at least some cases such deliberate fecal accumulations occur together with caches of equipment that had clearly been stored in the caves in anticipation of future use. To Thomas, the placement of the latrines side-by-side with other cached items suggests that the feces had also been stored there, presumably as food reserves for future use—in short, additional examples of Baegert’s ‘second harvest.’
Arctic groups provide additional insights into forager attitudes toward excrement, and show no evidence, at least in these contexts, that feces necessarily elicited feelings of disgust, whether through direct contact or by indirect contamination of other items or foods.
“The use of ptarmigan droppings appears to be limited to a few groups of Eskimos. According to Mathiassen…, ‘Ptarmigan excrement is chewed together with walrus meat into a porridgy mass which is stirred up in blubber.’
Similarly, the Netsilik Eskimos are said by Birket-Smith…to have used ‘ptarmigan excrement mixed with blubber and chewed meat’.” (Eidlitz 1969: 88) “Certain remarks and deeds of Pannigabluk’s today prompt me [Stefansson] to enter certain things about Eskimo cleanliness, etc. Pan. [Pannigabluk] will clean dog excrement off a sole of a pair of boots with her ulu [knife], wipe it casually with a rag that may have had as bad uses a dozen times before, and then proceed to eat with the ulu or cut up with it food for cooking….” (Stefansson 1914: 226)” (Speth 2017)
“The Hadza of Tanzania provide yet another example of humans freely handling feces with no evidence whatsoever that seeing, smelling, or touching them elicited any sort of disgust response:
“Baobab seed is also a good protein source with adequate levels of five out of eight essential amino acids…. The Hadza chew young seeds; but when mature, the seeds are cracked individually with a stone or pounded into a coarse flour…. Baboons, which have teeth well shaped for seed cracking…can not break the mature seeds and pass them unbroken. Hadza women collect baobab seeds from baboon dung piles, wash them, and prepare them in the normal manner….” (Schoeninger et al. 2001: 182)
Throughout much of Southwest and South Asia, manure from cattle, sheep, and other livestock is systematically gathered, kneaded into dung cakes, dried, and stored in and around the house where it serves as an important fuel for cooking and warmth, a component of household architecture, and at times may even be ingested in some form for medicinal purposes.
“Only the dung of the zebu cow was used to plaster the stove and the kitchen area…. For fuel, dung was molded into relatively flat round cakes which were dried in the sun. Each family had a special place where dung cakes were made and stored…. The women of a family…daily carried dung dropped by the family cattle to this place. Families who owned no cattle collected dung from the village lanes and the grazing area. Traditionally, such dung was available to everyone and it was collected by most families. One important duty of a young daughter was to go early every morning through the village lanes to mark dung that fell from cattle when they were driven to the pond or to the grazing area and later to collect it…One morning we watched a woman from an 11-person family making dung cakes…. She mixed the dung with small pieces of mustard stalks, kneaded it, and formed it into cakes of two sizes which she spread on the ground to dry. In the course of the morning, she made about 24 large cakes and 103 small ones.” (Freed and Freed 1978: 80–81)” (Speth 2017)
““In Iran manure is an especially valued and carefully treated commodity. Rural and urban communities use it to fertilize the soil, produce energy (burning), eradicate pests and plant diseases, make bricks and plaster walls. In some cases it is also used to treat human illness…. For example, the dung of newly born foals mixed with the milk of lactating donkeys is used in some villages to treat whooping cough…. Iranian villagers…utilize animal manure and bird droppings as a source of energy. In some rural areas animal manure is still used to generate heat and women believe that the best fire for baking bread is one made from animal droppings because it produces more uniformly baked and thoroughly toasted bread.” (Ardakani and Emadi 2004: 13)” (Speth 2017)
“As a final example, the Chinese have a long history of using human feces (‘night soil’) as manure in their agricultural fields. Not surprisingly, there is no hint that either the sight or smell of night soil elicited anything resembling a disgust response in the country’s rural inhabitants: “For crops in a vigorous growing state no kind of manure is so eagerly sought after as night soil; and every traveller in China has remarked the large cisterns or earthen tubs which are placed in the most conspicuous and convenient situation for the reception of this kind of manure. What would be considered an intolerable nuisance in every civilised town in Europe, is here looked upon by all classes, rich and poor, with the utmost complacency; and I am convinced that nothing would astonish a Chinaman more, than hearing anyone complain of the stench which is continually rising from these manure tanks…. In England it is generally supposed that the Chinese carry the night soil and urine to these tanks, and leave it there to undergo fermentation, before they apply it to the land. This, however, is not the case; at least, not generally. In the fertile agricultural districts in the north, I have observed that the greater part of this stimulant is used in a fresh state, being of course sufficiently diluted with water before it is applied to the crops.” (Fortune 1847: 314–315)” (Speth 2017)
“Another favorite in the list of supposed ‘core’ universal disgust elicitors—urine—is also effectively demolished by the cross-cultural ethnographic record and, as in the case of putrid meat, the damning evidence again comes from the Inuit. Urine was their principal ‘soap’! “Urine…is collected from the containers in the men’s house only (since that of women is believed to be unclean), to be stored in tubs for at least two days before using. Women…first bathe in urine, followed by a rinsing in either salt or fresh water. Both sexes frequently wash hands and faces in urine, and rinse with water; for urine, coming into contact with the body oils, acts as soap in removing grease and other impurities. The men, when about to take a sweat-bath…gather in the men’s house. There the floor boards are removed and a roaring fire built in the pit…. The participants, soon drenched with perspiration, bathe themselves with urine from the central pot….” (Curtis 1930: 43)
So, it would seem from the studies by Rozin et al. (2008), Rottman (2014) and others, together with the comparative insights drawn from the ethnographic literature, that the search for universal disgust elicitors may be doomed from the outset, because the entire endeavor is based upon what is likely to be a false assumption, namely that the disgust response is a product of natural selection which serves to protect infants from putting bad things in their mouth. As Liberman and colleagues (2016: 9480) put it: ‘…human infants are surprisingly inept at categorizing and selecting appropriate foods.’ Instead, these authors propose a very different type of explanatory framework for the way infants come to distinguish those foods that are acceptable and appropriate (hence ‘good to eat’) from those that are inappropriate, perhaps even disgusting (hence ‘bad to eat’). In their view such distinctions derive, not from intrinsic health-related or nutritional properties of the foods themselves, but from prosocial cues the infants receive from their caregivers and from others in their close social network who engage positively with each other and, importantly, who speak the same language. In other words, from this perspective elicitors of the disgust response are just part of a broad suite of culturally contingent markers that serve to identify socially meaningful groups of people who share a common set of values and beliefs, while simultaneously creating boundaries that differentiate such groups from culturally more distant ‘others’ (for a surprisingly early exposition of the culturally relative nature of people’s perceptions of what does or does not ‘stink,’ see Smollett 1785: 13–14). This culturally nuanced perspective would seem to fit ethnographic reality far better than traditional pathogen-based explanations.” (Speth 2017)
SCHOENINGEN: LATE MIDDLE PLEISTOCENE POND STORAGE?
“Let us turn now to the most speculative part of this already quite speculative endeavor. Did hunter-gatherers already in the Paleolithic also make use of fermented or more thoroughly rotted meat? My answer to this question is a very tentative ‘maybe.’ There are a number of potential candidates in the literature, all cases in which foragers may have cached partly butchered carcasses underwater in a bog, lake, or river. The earliest of these, and the one which I will discuss in some detail below, is the late Middle Pleistocene German site of Schoeningen (Schöningen). Two other noteworthy candidates, also in Germany but much younger than Schoeningen, are the very late Upper Paleolithic Hamburgian and Ahrensburgian reindeer kill sites of Meiendorf and Stellmoor (Rust 1937, 1943; see also Price et al. 2017).” (Speth 2017)
“At both of these localities, but perhaps most dramatically seen in the Ahrensburgian layers at Stellmoor, large numbers of reindeer carcasses, some complete or nearly so, many as partly butchered units, were placed in the water, some apparently deliberately anchored to the bottom with rocks. Not surprisingly, controversy swirls around the interpretation of these two sites, with some authors favoring deliberate underwater storage of reindeer meat (see Chatterton 2005: 68 and especially Grøn 2005: 21); others championing alternative though not necessarily mutually exclusive explanations, such as kills that were simply abandoned because they were not needed or because the animals were in poor condition, or carcasses that were placed in the water as ritual offerings, or discarded waste from shore-based butchery activities (e.g., Bokelmann 1979, 1991; Bratlund 1996; Grønnow 1987; Pohlhausen 1953; Rust 1937, 1943). The site of Schoeningen, located about 100km (62mi) east of Hanover, Germany, is one of the most interesting and important Paleolithic sites in western Europe.” (Speth 2017)
“Consisting of numerous separate localities exposed by open-cast lignite mining, Schoeningen is perhaps best known for the so-called ‘Horse Butchery Site’ or ‘Spear Horizon,’ designated officially as Schö 13 II-4. This locality, dated around 310 ky and attributed to the latter or deteriorating stages of the late Middle Pleistocene Reinsdorf Interglacial (MIS 9), has so far produced nine wooden-tipped spears and one lance, as well as numerous other miscellaneous wooden items, many thousands of animal bones, the majority representing some 45–50 very large (~550kg) horses (Equus mosbachensis), and a modest lithic assemblage consisting mostly of flakes and debitage, and small numbers of formal tools such as scrapers. Handaxes and Levallois technique are noteworthy for their absence. Most assign the site to the late Lower Paleolithic, although it could easily be considered an early Middle Paleolithic site. As to the hominin responsible, opinions differ, some suggesting the occupants belonged to the taxon Homo heidelbergensis, others seeing Schoeningen as an early Neanderthal manifestation (Conard et al. 2015; Jöris and Baales 2003; Richter and Krbetschek 2015; Schoch et al. 2015; Serangeli et al. 2015; Urban and Bigga 2015; van Kolfschoten 2014; van Kolfschoten et al. 2012).” (Speth 2017)
“There is general agreement that the primary focus of activities at Schö 13 II-4 was the killing, butchering, and processing of horses, and that the site’s location right on the margin of a shallow lake would not have been well suited as a primary camping spot. Moreover, earlier claims for hearths at the site have been rejected, further supporting the idea that Schö 13 II-4 was a functionally specific hunting-butchering locale rather than a longer-term campsite. There is less agreement about how the horses were killed. Until recently it was believed that the animals had been driven en masse into the lake or a muddy or swampy area along its borders (Thieme 2005: 130; Voormolen 2008). However, more recent evidence—stable isotopes, dental wear patterns, and age-sex data—all converge to suggest that the accumulation of horse remains is more likely the product of multiple events involving different herds and perhaps taking place at somewhat different times of year (Conard et al. 2015; Julien et al. 2015; Rivals et al. 2015).
None of this evidence by itself necessarily points to pond storage. Countless Paleolithic sites occur in lake shore settings, so finding a similar situation at Schoeningen at first glance would not seem the least bit unusual. The surprise came from the detailed studies of the sediments and associated microfossils which encased the bones, in combination with trace evidence found on the surfaces of the bones themselves. According to Stahlschmidt and colleagues (2015: 83), ‘the micromorphological, FTIR, and organic petrology results of the investigated profiles indicate that all three sedimentary layers associated with archaeological remains…formed under permanent water coverage….’ These authors go on to conclude that ‘the sediments directly associated with the archaeological finds…show no signs of ancient pedogenesis or desiccation, and were deposited subaqueously’ (Stahlschmidt et al. 2015: 87).” (Speth, J. D.. 2017)
“Of course, there are other ways besides pond storage that bones and lithics can end up commingled in lake bottom sediments. One could kill and butcher the horses directly on the surface of the frozen lake. Any debris left on the ice would obviously end up on the bottom when the ice melted. Or one could conduct all of the butchering and processing activities on shore and simply toss the stones and bones into the lake. And, of course, one can envision a much more complex mix of activities in which some carcasses were processed on shore, others on the ice, after which some parts of the kill were then stored in the pond while others were taken somewhere else. Then, at some point later in the year, hunters returned to the lake and retrieved some of the meat, perhaps processing it further on the ice or on the nearby shore, and concluded by disposing of their trash either on shore or in the lake—in short, a complex palimpsest of inter-related activities.
None of this proves that pre-Neanderthals or Neanderthals were storing meat in the lake at Schoeningen. But Speth thinks there is enough evidence, both from the ethnohistoric record and from Schoeningen itself, to suggest that pond storage is a plausible hypothesis, one that should be considered along with others that are currently on the table for discussion (e.g., shore-based butchery with trash disposal in the lake; butchery and trash disposal on the frozen surface of the lake). Needless to say, because of the ever-present problem of equifinality and other confounding sources of uncertainty and error, there is no way we can ever ‘prove’ which of these alternative hypotheses is the correct one. The ‘real answer’—the ‘truth’—may well be something of which nobody has yet thought. So, rather than steadfastly championing one or another favored explanation, what we need to do instead is to work with multiple competing hypotheses and with equal vigor attempt to falsify all of them (Platt 1964). In that spirit, he suggests that pond storage should be added to the list of plausible scenarios as a serious contender.” (Speth, J.D. 2017)
ARCHAEOLOGICAL IMPLICATIONS AND CONCLUSIONS
Speth reminds us again that this is a speculative endeavor. “At this point we do not know whether fermented and rotted foods played any role whatsoever in the diets and foodways of Eurasian foragers at any stage during the Paleolithic. We presently lack both the methods and the middle range theory needed to make such practices ‘visible’ in the distant past. But he hopes he has been able to convince his readers that such foods very likely did play a role, perhaps a very important one, and finding out is essential because an answer in the affirmative would mean that a number of our ideas and interpretations of the Paleolithic record may have to be rethought, some quite substantially.
First and foremost, fermenting and rotting meat and fish are low-cost and ‘low-tech’ means of effectively ‘predigesting’ protein and fat without having to cook them. This greatly reduces the energetic costs of chewing, digesting, and assimilating them, and—unlike cooking—it can be accomplished ‘passively,’ that is, when the foragers are engaged in other activities, or for that matter while they are not even present in the area. Cooking, in contrast, requires near constant vigilance, both to keep from overcooking and ruining the meal, and to keep the fire alive and burning at the right temperature.
Moreover, fermentation is ideally suited for preserving and storing fatty meat and fish in climates plagued by incessant rain, overcast skies, fog, or dampness, conditions which can make it exceedingly difficult to dry foods quickly enough to prevent spoilage (autoxidation or rancidity) without having to use up scarce fuel to speed up the process.
The fermentation process also generates important B-vitamins and preserves precious vitamin C that would be degraded or lost if the meat were cooked. In addition to sparing fuel and reducing metabolic costs, preserving vitamin C may be one of the key reasons why fermentation and putrefaction rather than cooking is so important to foragers subsisting on heavily meat- or fish-based diets. So, if Eurasian Neanderthals were in fact ‘top predators,’ as seems to be the current consensus among Paleolithic archaeologists, they likely faced the same vitamin C constraint as modern northern foragers do, making it very likely that they too had to depend on both raw and fermented or rotted meat.
If fermented and rotted foods did, in fact, constitute a major part of Neanderthal diet for the reasons just enumerated, that might help to explain the curious on-again, off-again evidence for fire in the European Middle Paleolithic. In fact, variation across time and space in the frequency of hearths, ash deposits, and charred bones, particularly during colder climatic episodes, may help us track the changing emphasis on cooked versus raw and fermented foods in Middle and Late Pleistocene foodways.
It is perhaps important to note in this context that when traditional hunting and gathering peoples in the high arctic actually took the time to cook their meat (and fish), they generally did so, not by roasting, but by very lightly and rapidly blanching or boiling it (e.g., Harry and Frink 2009: 334). One reason for this preference was fuel economy; another, as already noted, preservation of vitamin C (Bender 1979). Hans Saabye and Georg Fries (1818: 254), commenting more than two and a quarter centuries ago on the typical manner of cooking by Greenland Inuit, stated quite emphatically that ‘they boil meat and fish an equal time, so that when the former is hardly more than half done, the latter fall to pieces. They do not know how to roast any thing.’
A century later Herbert Aldrich (1889:180), commenting upon the way Alaskan Inuit prepared food, put it simply: ‘the only way of cooking meat is boiling.’ Warburton Pike (1892: 51-52) came to much the same conclusion: ‘the general method of cooking everything in the lodge is by boiling, which takes most of the flavour out of the meat, but has the advantage of being easy and economical of firewood.’ Ernest Burch (1988: 70) in a much more recent overview of Inuit culinary techniques again emphasized the importance of boiling: ‘Eskimos did not dissipate the nutritional potential of their food by overcooking it. Great quantities of meat and fish were eaten raw, usually in either dried or frozen form. When they did cook their food they normally boiled it, usually lightly, and drank the broth.’ Finally, Zona Spray (2002: 36) provides helpful insight into what ‘lightly’ actually means in the context of Inuit boiling: “…the term ‘boil’ might be a misnomer. Not once did I see a bubbling pot. Rather, the liquid gently shimmered at a perfect poaching temperature. With a limited heat source, if a pot did boil, it was only for a short time…. Sometimes only a tiny amount of water was needed to braise or steam. But no matter the exact cooking method, the descriptive term was always ‘boiled.’”
Though speculative, it is quite likely that Neanderthals also boiled some of their food, for the same reasons that their ethnohistoric counterparts did, and despite the conspicuous absence of either fire-proof containers or the telltale signs of stone-boiling. One can readily boil water in a perishable hide, paunch, or birch bark container, placing it directly on the hot coals or suspended immediately over the flames, without fear that the vessel will be consumed in the process. Despite their flammable nature when empty, such containers will not ignite, even when heated with a Bunsen burner or blowtorch, so long as they remain filled with liquid (Speth 2015). In fact, direct boiling in this manner is faster, cleaner, and much more fuel efficient than stone-boiling. Thus, if Neanderthals—for reasons of fuel economy and preservation of vitamin C—did prepare some portion of their meat by light, rapid boiling rather than by roasting, such behavior might well contribute to a number of ‘troublesome’ taphonomic problems for the Paleolithic archaeologist. For example, judging from the ethnohistoric literature many of the hearths used for this kind of cooking are likely to have been small and very ephemeral, the fuel at times simply gleaned from locally available herbaceous vegetation and even mosses, and the incidence of burning on bones, flint chips, and other objects minimal or non-existent.
In short, if Paleolithic foragers prepared meat by rapidly boiling it, the result would be a further amplification of the ‘on again, off again’ nature of the evidence for Middle Paleolithic fire use. Deliberately fermenting or rotting meat and fish also provides a safe and effective ‘low-tech’ way to store foods over periods of months, even during the warmer summer months. By simply placing fermenting and rotting animal foods in a shallow pit, or under a pile of rocks, or in a bog, or on the bottom of a pond, foragers can create valuable food caches at strategic points on the landscape. Judging from the literature, these cache points are often either just outside the camp or settlement, along major routes, or close to where a given resource is procured (e.g., fishing spots, caribou/reindeer intercept points, etc.). Thus, the obvious scarcity of pits directly within Middle Paleolithic sites would not rule out the possibility that Neanderthals nonetheless routinely cached meat in pits, since such features are not likely to be located where most of our excavations tend to be focused. In any case, if Middle and Upper Paleolithic foragers were routinely rotting meat and fish in caches, regardless of whether these caches were in pits, bogs, or ponds, it would be misleading to characterize their cultural systems as ones that lacked storage. Quite the contrary, such dispersed forms of food caching may well have been a vital part of their adaptations, and a key factor in their decisions about when and where to move and the route they should take in order to get to their final destination while minimizing the specter of starvation en route. We just have not figured out ways to ‘see’ these caches yet.
As I became increasingly aware of the likely importance of fermented and rotted foods in the diets of northern peoples, I began to wonder whether such practices might have an impact on the results of stable isotope analyses, particularly δ15N values.
When foragers consume meat or fish that has been rotted for weeks or months, they are not just eating protein, they are eating millions upon millions of bacteria, both anaerobic and aerobic, plus a complex ‘soup’ composed of endogenous and exogenous enzymes and metabolites generated during the progressive decomposition of the food.
Of all the complex by-products and end-products of putrefaction, perhaps the ones of most relevance to stable isotope studies are the volatile compounds that are generated along the way, and particularly three nitrogenous gases—ammonia, cadaverine, and putrescine (Abdel-Aziz et al. 2016; Buňková et al. 2016; Carter and Tibbett 2008: 31; Cobaugh 2013; Donaldson and Lamont 2013; Janaway et al. 2009: 316, 318; Komitopoulou 2012; Metcalf et al. 2016: 161; Min et al. 2007; Paczkowski and Schütz 2011; Pessione and Cirrincione 2016; Ruiz-Capillas and Jiménez-Colmenero 2004; Sander et al. 1996; Thorn and Greenman 2012: 3). These volatiles, which result from the breakdown of proteins and amino acids, are important contributors to the characteristic bloating and foul smell that typically accompanies the decay of animal matter (Dent et al. 2004). The extent to which these gases are formed depends on a host of factors, among which are: the amount of time that has elapsed since death (the postmortem interval or PMI); available moisture; ambient temperature; whether the carcass was buried, left on the surface, or immersed in water; the properties of the surrounding soil; characteristics of the animal itself, including species, body size, health at the time of death, how much body fat was present, whether or not the animal was gutted, dismembered, and skinned before putrefaction; and how often the meat cache was accessed by the hunters over the course of its use-life. Given this array of potentially confounding factors, it is very hard, without a great deal of baseline experimental work, to predict the quantity of nitrogenous gases that would have been generated and subsequently lost to the surrounding air, water, or soil.
But this issue is interesting and potentially important because the production of these volatiles, especially ammonia, may have left the putrefied food cache enriched in 15N (I am grateful to Margaret Schoeninger for pointing out this likely source of enrichment, personal communication, April 2017).
There seem to be very few studies that have explicitly looked at the impact of putrefaction on stable isotope values, and the ones that I have come across deal solely with aquatic resources and experimental designs that are not necessarily ideal analogs for deliberate human meat caching. Nevertheless, their results do show that putrefaction of meat over sufficient lengths of time and under appropriate ambient temperature conditions can elevate 15N values quite substantially (see, for example, Wheeler et al. 2014: 116 for an example involving salmonid fish decomposing in a terrestrial setting; see also Burrows et al. 2014 and Yurkowski et al. 2017). Whether the extent of such enrichment would have been sufficient to elevate 15N values to levels comparable to what we see in European Neanderthals (Bocherens 2011; Wißing et al. 2016) is, of course, unknown, but it would seem to be a possibility well worth looking at more closely.
There are other components of traditional arctic diet that might also lead to 15N enrichment. Eating the larvae of warble flies found beneath the skin of caribou and reindeer, a widespread practice among northern foragers, might be one such mechanism, as would consuming the maggots that almost invariably infested their putrefied meat and fish (Anderson and Nilssen 1996; Bennett and Hobson 2009; Hocking et al. 2009; Nilssen 1997; I am grateful to Melanie Beasley for drawing my attention to this possibility, personal communication, March 2017). As far as the maggots are concerned, the degree of enrichment would again depend on many factors, including whether the rotted food was terrestrial or aquatic in origin, marine or freshwater, the degree to which the food was putrefied, and, of course, on how substantial a contribution the maggots made to the overall diet of the foragers. Turning now to taphonomy, if Paleolithic peoples made regular use of fermented and rotted foods, this would almost certainly add to the complexity of our assessments. For example, by reducing the extent to which Paleolithic foragers relied on any sort of cooking, the incidence of burning will almost certainly be affected, as will the presence of hearths, ash lenses, scattered charcoal, and thermally ‘popped’ or potlidded flints. Likewise, since fermentation and rotting greatly soften meat by breaking down the component proteins, one can often simply pull chunks of meat from the bones with little or no need for a knife. This should lead to a significant reduction in filleting marks. Joints also become easier to separate and can often be pulled apart with little or no cutting. Could these consequences of putrefaction perhaps also blur some of the cutmark criteria we use to distinguish early carcass access and hunting from later access and scavenging?
Interestingly, modern arctic foragers sometimes freeze the rotted product before they eat it. This seems to be particularly common with rotted fish heads and caribou or reindeer stomach contents. Freezing of course makes the meat more difficult to remove from the bones and may well lead to a more haphazard pattern of cutmarks, much like that seen on some of the human bones from the famous 19th-century Alferd [sic] Packer wintertime cannibalism case in the North American Rockies (Rautman and Fenton 2005), and from Middle Pleistocene Schoeningen (Starkovich and Conard 2015), although it would probably not explain the haphazard pattern seen by Stiner et al. (2009) at Qesem Cave in Israel. Rotted fish heads were often frozen before being eaten and then allowed to thaw just enough so that they took on the texture of a firm ice cream cone (Stefansson 1960: 36).
They were then eaten, bones and all. Since the spine of the fish was often removed and discarded at or near where the fish was caught, not at camp, the only direct evidence at a residential site that fish were part of the diet—the heads—were first reduced to the consistency of mush by fermentation and then eaten in their entirety, leaving virtually no evidence behind for the archaeologist to find. It is worth noting in this context that rotted fish heads (stinkfish, stinkhead, or stinky head) were, hands-down, among the all-time favorite foods of peoples across the entire North American Arctic, so fish consumption in analogous environments during the Paleolithic may indeed be difficult to detect!
Continuing with the taphonomic theme, how attractive are bones of fermented and rotted animals—with or without marrow—to carnivores like hyenas? And how
long does that attraction persist? What consequences might fermentation have on the number and location of carnivore gnaw marks, punctures, and other sorts of damage? Finally, ungulate bones are often thought to have been used as fuel during the Eurasian Middle and Upper Paleolithic. Are the combustion properties of fermented and unfermented bones comparable, or does fermentation alter a bone’s utility as a fuel? What effect might fermentation have on the value of an epiphysis for subsequent grease rendering?
One could go on enumerating other ways in which reliance on fermented and rotted meat and fish might impact the way we interpret the Paleolithic record. I hope what I have presented here is at least sufficient to show that these types of foods may have been of great importance to foragers in the past. Deliberate fermentation and putrefaction were not just last resort strategies to stave off starvation when all other means failed, nor were they a mere culinary curiosity. Instead, fermenting and rotting meat and fish may well have been vital to the survival and success of Paleolithic ‘top predators’ as they penetrated the colder climes of Pleistocene Eurasia. We now need to pool our collective efforts, expertise, and imagination in order to find ways to evaluate these speculations and, in the process, move toward a more complete and realistic understanding of the lifeways and adaptations of these stone age pioneers of the colder climes.” (Speth, J. D.. 2017)
One of the things I find compelling about the notion presented by Speth is the natural progression from eating putrefied meat to the development/discovery of fermentation technology, which is inherently part of the process of decomposition. The value of storing animals underwater or in pits would have been discovered very easily and “naturally.” I can well imagine that this was discovered long before preservation by salt, whether sodium chloride or one of the nitrate salts. There is even the possibility that preservation by ammonium predates the use of salt, due to its formation from urine. These matters are, however, complete speculation. Much more fascinating work remains. In the process, we learn! And we apply it to our world today!
Speth, J.D. 2017. Putrid Meat and Fish in the Eurasian Middle and Upper Paleolithic: Are We Missing a Key Part of Neanderthal and Modern Human Diet? Department of Anthropology, University of Michigan. PaleoAnthropology 2017: 44−72.
Carcass Sanitation and Primal Washing
Eben van Tonder
In certain environments, bacterially contaminated pork carcasses are inevitable. The old principle of garbage in, garbage out also applies to a meat factory: one cannot make excellent products from bacterially contaminated meat. Good hygiene starts far back in the supply chain.
The Case Against Carcass and/or Primal Cut Washing
The Meat Inspectors Manual for Abattoir Hygiene, published by the Directorate Veterinary Services at the South African Department of Agriculture (2007), makes two important points.
The first one is that “microbes firmly attach to meat and skin.” They explain that “this process is not yet well understood but it appears to become irreversible with time – the longer organisms remain on the meat the more difficult it becomes to remove them. In poultry processing, the contact period between the meat surface and contaminating organisms is reduced by washing carcasses at intermediate points during processing before attachment occurs.” They then caution that the principle should not be applied to larger carcasses because too much wetting spreads rather than removes contamination. “In fact, when small volumes of feces, intestinal contents, mud or soil are spread over the carcass by rinsing, the clean areas of the carcass can become quite heavily contaminated. This is the reason why carcasses should not be rinsed. Wet carcasses also tend to spoil more rapidly – especially if wet and warm. Un-split carcasses should never be washed and split carcasses should only be partially washed, under the lowest pressure possible.” (Meat Inspectors Manual)
They summarise their position as follows: “The meat under the skin of a healthy animal is sterile. The slaughtering process must be aimed at keeping the bacterial load on the newly exposed meat surface as low as possible and all efforts should be made to prevent bacteria from being deposited on the carcass. It is necessary to ensure that nothing that touches the exposed meat is contaminated with micro-organisms. By using the correct slaughtering techniques with this aim in mind, a high degree of sterility is indeed possible under commercial conditions. This is shown by the fact that fresh meat when vacuum packed and maintained at 0 °C, as produced for export in Australia, New Zealand and SADC countries, has a microbiological shelf-life of up to 6 months.” (Meat Inspectors Manual)
The question, however, is what to do where good abattoir practices do not exist and either the carcass or the meat has very likely been contaminated. In Lagos, Nigeria, for example, I visited abattoirs in 2017 and could not find a single facility where any form of hygienic slaughtering took place. In most cases, refrigeration equipment was not available and, where it was, it was poorly maintained and in an appalling sanitary state. Conditions may not be this extreme in other places in the world, but even in South Africa I have been in abattoirs with very poor hygienic conditions, especially small facilities. How does one treat such meat to make it acceptable for further processing? Can anything be done?
The Case for Carcass and/or Primal Cut Washing and the Use of Organic Acids
A number of researchers have found that an integration of sanitizing methods, such as knife trimming in combination with other antimicrobial decontamination methods such as steam vacuuming, hot water and acid spray systems, and steam pasteurization, can help to improve the microbial safety of carcasses after slaughter (Gorman et al., 1995; Castillo et al., 1998; Castillo et al., 1999; Pipek et al., 2004).
“Medynsky, Pospiech, and Kniat (2000) found that an increase of the lactic acid concentration in meat above the level of 0.5% enhanced water holding capacity and reduced thermal loss. In another study, Jimenez-Villarreal et al (2003) found that lactic acid treatments on beef trimmings before grinding could improve or maintain the same sensory and instrumental color, sensory odor, lipid oxidation, sensory taste, shear characteristics and cooking characteristics as traditionally processed ground beef patties. Therefore the use of these antimicrobial treatments could be used in industry as a measure of safety improvement without negatively impacting the fresh product. Carcass decontamination utilizing organic acids is a sanitation process that is widely used in the industry, and has been studied deeply. Netten, Mossel and Veld (1995) found that lactic acid decontamination was capable of eliminating salmonellae from pork, veal and beef carcasses, and that this compound is also likely to be effective against C. jejuni. This bacterium is at least 10-fold more sensitive to lactic acid than Salmonella. Furthermore, counts of C. jejuni on freshly slaughtered veal, pork, and beef carcasses are also up to 100-fold lower than those of Salmonella. Castillo et al. (1998) compared the effect of different decontamination interventions on E. coli O157:H7 inoculated on beef carcasses. Lactic acid rinses in combination with water wash, trimming and hot water reached reductions from 4.2 to 5.0 log CFU/cm2. Lactic acid is frequently used for beef carcass decontamination. Its ability to reduce pathogens or other organisms of fecal origin has been studied extensively showing that lactic acid has a strong antibacterial effect. Besides the antimicrobial effect, the studies reviewed show that the use of lactic acid as a meat sanitizer does not have a significant impact on sensory and/or physico-chemical characteristics.” (Rodriguez, 2004)
In light of these findings, I have used lactic acid and acetic acid sprays to decontaminate carcasses and reduce the total bacterial populations on the carcass surface. I have also used acetic acid, lactic acid, and a combination of lactic acid and sodium bicarbonate, which produces a sodium lactate solution, for washing primals and trimmings that were exposed to questionable handling and storage conditions. All these measures have proved to be very effective. A typical reduction of as much as 90% (1 log10 cycle) can be expected from organic acid rinses. (Dickons et al., 1996)
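The relationship between percentage reduction and log10 cycles quoted above is simple arithmetic, and it is worth seeing why a 90% reduction is only "one log." A minimal Python sketch (the CFU counts below are made-up illustrations, not measured data):

```python
import math

def log_reduction(cfu_before: float, cfu_after: float) -> float:
    """Log10 reduction achieved by a decontamination step."""
    return math.log10(cfu_before / cfu_after)

def percent_reduction(cfu_before: float, cfu_after: float) -> float:
    """Percentage of the initial bacterial load removed."""
    return 100.0 * (1 - cfu_after / cfu_before)

# A 90% reduction is exactly one log10 cycle:
print(log_reduction(10_000, 1_000))      # 1.0
print(percent_reduction(10_000, 1_000))  # 90.0

# The 4.2-5.0 log reductions reported for combined interventions
# correspond to removing more than 99.99% of the inoculated cells:
print(percent_reduction(1.0, 10 ** -4.2))
```

The point of the sketch is that each further "log" removes 90% of what is left, so going from 1 log (90%) to 5 logs (99.999%) is a far bigger step than the percentages suggest at a glance.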
“The effectiveness of these acids will vary depending on the following:
the application temperature,
the amount of time spent spraying a carcass (it seems self-evident, but research clearly shows that the more time is spent on a carcass, the cleaner it will be)
the water pressure,
the distance of the hose nozzle from the carcass (especially if hot water is used, because the closer the nozzle is to the carcass, the less heat is lost as the water travels through the air),
the sensitivity of the native microflora to the specific compound, and
to a certain extent the design of the specific equipment.”
Organic acids are self-limiting due to discoloration of meat which occurs at or above the 3% concentration level.
The following procedure is suggested. “Wash only one carcass at a time. It is important for the worker to give each carcass the full attention that it needs; this will also reduce cross-contamination.
Distance from the hose nozzle to the carcass during spraying is important.
Use a gentle sweeping motion to apply the lactic acid to the entire carcass surface.
Work methodically from top to bottom to ensure that all carcass surfaces are treated with lactic acid.
Initially, a garden sprayer can be used. This type of sprayer is relatively inexpensive and simple to operate. In general, garden sprayers operate with a gentle flow rate. Use of this sprayer to thoroughly rinse a carcass may require extra time so that an adequate amount of 2% lactic acid is dispensed. Also, many of these garden sprayers are not equipped with a pressure gauge and require manual exertion to pressurize (unless retrofitted as described below).” (Flowers, 2006).
“It is reported that, in general, hot water is more effective at removing bacteria than warm or cold water. Hot water may discolor muscle tissue that is exposed on carcass surfaces. Therefore, consider using warm water if hot water is not used to wash carcasses. Washing carcasses with cold water does remove bacteria by virtue of physical force; yet, it does very little to injure or kill bacteria that may remain on carcass surfaces. This step is so counter-intuitive to meat processing staff that comparative studies must be done to validate the procedure.” (Antimicrobial Spray Treatment, 2005)
Pipek et al. (2004) found that warm spray was more effective even for a lactic acid spray. They found that it “is generally recommended to prefer warm solutions of lactic acid for the carcass decontamination. We tested the temperature decrease during the application and we were able to find that the drops of lactic acid solution at the moment when they fall on the carcass surface are up to 10 °C cooler than the original solution. We ascribe this temperature decrease partly to the heat exchange between drops and surrounding air and partly to evaporation of water from drops, which have a relatively high surface. Thus the temperature of drops of about 35−40 °C on the meat surface corresponds to the temperature of 45 °C of the original solution.” They showed that “the effect of lactic acid is higher, if its solution is warm (45 °C) in comparison with the cold solution (15 °C). The effect was higher with pork carcasses than with beef carcasses. In the case of the carcasses that were decontaminated with warm lactic acid solutions, the lag phases were prolonged by one day; during following days of cold storage, the differences decreased.” (Pipek et al., 2004)
“The water stream is most forceful at the opening of the hose nozzle. The water loses momentum the further it has to travel. As with temperature, it is a good idea to keep the nozzle no more than 30 cm from the carcass surface. A Sanitizing Halo system will be developed for the future to replace the garden hose spray. It will be designed to deliver the lactic acid solution at a maximum pressure of 40 psi. FSIS has no current requirements concerning the minimum and maximum pressure for organic acids (i.e., lactic acid, acetic, and citric acid) when they are applied onto livestock carcasses. However, the rescinded FSIS Directive 6340.1—Acceptance and Monitoring of Pre-Evisceration Carcass Spray (PECS) Systems, dated 1/24/92, stated that the spray pressures should be limited to 50 psi.” (Antimicrobial Spray Treatment, 2005)
“In general, research has demonstrated that the more time that is spent washing a carcass, the cleaner it will be. Washing the carcass for a longer period of time allows the force of the water to detach more bacteria and debris. It is suggested to start by allowing 60 seconds per carcass and to reduce this as equipment and operator experience improves to around 20 seconds per carcass.” (Antimicrobial Spray Treatment, 2005)
Suggestions for Establishing a Critical Limit for Food Safety Plan
Here are two ways to define a critical limit for this intervention, which may become a critical control point in the HACCP plan of a very small plant. Let us assume lactic acid is used; of course, the same applies to any other organic acid.
1. “Specify the length of time (i.e., seconds or minutes) that the carcass will be sprayed with 2% lactic acid.
2. Specify the volume of 2% lactic acid that will be applied to each carcass.
Also, note that enough 2% lactic acid should be sprayed onto the carcass surface so that the whole surface is dripping wet and some of it runs off.” (Antimicrobial Spray Treatment, 2005)
Suggestions for Monitoring a Critical Limit
“Here are two feasible methods for monitoring the Critical Limits suggested above.
1. Use a titration kit to measure acidity (% acid) after preparing a solution of 2% lactic acid. Follow the manufacturer’s instructions closely to get a valid measurement. Record the acidity of each batch of 2% lactic acid solution on a record sheet.
2. During preparation of 2% lactic acid, measure and record the amounts (volume or weight) of water and lactic acid that are mixed together. Mixing together the correct amounts of concentrated acid and water will ensure proper preparation of 2% lactic acid.” (Antimicrobial Spray Treatment, 2005)
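The mixing amounts in the second monitoring method follow from the standard dilution relation C₁V₁ = C₂V₂. The sketch below is purely illustrative: the function name, the 10 L batch size, and the 88% stock strength are my own assumptions for the example, not figures from the guidance quoted above (check the concentration on your supplier’s label).

```python
def stock_volume(target_pct: float, batch_l: float, stock_pct: float) -> float:
    """Litres of concentrated stock needed so that
    stock_pct * V_stock = target_pct * batch_l  (i.e. C1*V1 = C2*V2)."""
    return target_pct * batch_l / stock_pct

# Example: a 10 L batch of 2% lactic acid from an assumed 88% stock.
v_stock = stock_volume(2.0, 10.0, 88.0)
v_water = 10.0 - v_stock
print(f"stock: {v_stock:.3f} L, water: {v_water:.3f} L")
# → stock: 0.227 L, water: 9.773 L
```

This treats the percentages as simple v/v fractions, which is adequate as a working approximation; the measured amounts would still be recorded on the batch sheet as the monitoring step requires.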
Sanitizing Halo System to be developed (Rodriguez, et al., 2004)
Pipek, et al. (2004) found unexpected benefits related to weight loss in their lactic acid carcass decontamination trials. They write that “it was proved that the weight losses during cold storage were surprisingly lower in lactic acid-treated carcasses in comparison with control samples sprayed with water. The explanation of this effect can be found in changes of protein structure on the surface. The lactic acid treatment probably induces denaturation of the proteins on the surface and leads to pore closure; evaporation of water from the meat surface is reduced. The differences in weight losses between lactic acid-treated carcasses and controls were 0.6−1.0 % in the case of pork and 0.3−0.6 % in the case of beef carcasses. These differences are related to different tissues on the carcass surface. Whereas in beef carcass the muscle tissue prevails, the surface of pork half-carcass is covered by skin.” (Pipek, et al., 2004)
What is the difference between a large and a very small abattoir? Scale, certainly, in the number of animals slaughtered per day, although this makes microbiological and food quality control a bigger challenge in large plants. The biggest difference is in the equipment: small plants do not have access to the equipment that makes very hygienic slaughtering possible. (By contrast, I have found very small abattoirs in New Zealand with an unexpected level of sophistication and very clever design, which enables them to produce meat on par with some of the biggest abattoirs in the world.) Despite reports that between 3000 and 5000 head of cattle are slaughtered at one abattoir in Lagos, it has no proper equipment, and everything said here about very small abattoirs applies to it. There is a place for carcass decontamination to prepare meat for processing.
Dickons, J. S., Hardin, M. D., and Acuff, G. R. (1996). Microbial Inactivation Methods, Organic Acid Rinses. American Meat Science Association, 49th Annual Reciprocal Meat Conference.
Honey in cured meat formulations
By Eben van Tonder
18 March 2018
There is a legend in Africa that it was the Khoisan who discovered naturally fermenting honey in the hollow of a tree. According to the story, honey dripped from a beehive into a tree hollow filled with water. Honey naturally contains yeast. We have seen from Joshua Penny and Khoikhoi Fermentation Technology at the Cape of Good Hope that mixing water and honey allows the yeast to multiply, living off the sugar and the air and producing, as a consequence of their metabolism, water and carbon dioxide. Somehow the hollow must have been covered in such a way as to seal it airtight, and when the oxygen was consumed, the yeast switched from aerobic to anaerobic metabolism; instead of water, alcohol was produced, resulting in the formation of mead.
Mead is one of the oldest alcoholic beverages and may be the forerunner of beer and wine. Given honey’s remarkable preserving ability and exceptional taste, I wondered how it would work as an ingredient in formulating processed meat products. To evaluate this possibility, we look at its antimicrobial mechanisms and what happens if we change it slightly by heating or diluting it. Besides these, there is the obvious question of whether it can be imitated by simply mixing glucose and fructose. This would be viable if the preserving power of honey were simply the result of its high sugar content; we should then even be able to use something like molasses, which is far less costly than honey, with the same results. Lastly, are there any possible negative health effects to be aware of? Let’s examine the issues.
SUGAR TOLERANT YEAST – A NATURAL CONSTITUENT OF HONEY
The first important point relates back to the fermentation of the honey to make the mead. Where does the yeast come from? Obviously, from the environment, but yeast also occurs naturally in honey. It comes from both the intestinal tract of the bee and the environment in the hive. The bee contributes more microorganisms to the honey besides yeast. “The intestine of bees has been found to contain 1% yeast, 27% Gram-positive bacteria including Bacillus, Bacteridium, Streptococcus and Clostridium spp; 70% Gram-negative or Gram-variable bacteria including Achromobacter, Citrobacter, Enterobacter, Erwinia, Escherichia coli, Flavobacterium, Klebsiella, Proteus, and Pseudomonas.” The primary sources of the sugar tolerant yeast found in the bees are flowers and soil and through the bee, it makes its way into the honey. (Olaitan, et al., 2007) For our consideration of honey as an ingredient in a processed meat product, the first important point to remember is that by adding honey to meat, one introduces a host of microorganisms, including yeast and appropriate hurdles must be built into the formulation to account for these.
THE PRESERVING POWER OF HONEY
Despite this, without added water, not even the yeast does very much in honey. To induce fermentation, water must be added. The fact that bacteria and other microorganisms do not multiply and do what they normally do in honey is one of its remarkable characteristics. “Most bacteria and other microbes cannot grow or reproduce in honey i.e. they are dormant and this is due to antibacterial activity of honey. Various bacteria have been inoculated into honey collected in airtight containers (aseptically), held at 20°C. The result showed loss of bacterial viability within 8–24 days.” (Olaitan, et al., 2007) This introduces its possible application in the meat industry as a preservative and bodes well for its use in our processed meat formulation despite the presence of microbes in the honey.
SALT AND HONEY PRESERVATION: HISTORICAL PRECEDENT
In Roman times two methods of preserving meat were practiced. On the one hand, they used dry cured salting. “First, they boned the meat and sprinkled it with crushed salt. After this had dried the meat enough to remove any noticeable dampness, they sprinkled on more salt and put the pieces in a container previously used for oil or vinegar. The pieces were arranged so they wouldn’t touch each other. Sweet wine was poured over the meat, and straw was placed on top. If snow was available, it was spread around the container.” (carolashby)
A second preservation technique that needed no salt was used in the wintertime. In this approach, “the meat was coated with honey as a good antibacterial agent, sealed in an air-tight container, and stored in a cool place.” (carolashby)
Dr. S. Mladenov reports on the work of N. Yoyrish (1949) and Solntsev, and tells us that the ruling class of Rome (the patricians) used “rare game and fruit from distant areas” in their lavish celebrations, delivered in fresh condition and with “unchanged taste qualities due to their transportation in honey containers”. (Mladenov, S., 1967)
The oldest reference to the use of honey is probably from a ≈8000-year-old cave painting (see image below) which shows a figure clinging to three vines to retrieve honey from a cliffside hive. (Mitchell, B. A.; 2016)
The antimicrobial property of honey is very interesting and has been known for centuries. It was a well-established treatment for infected wounds “as long ago as 2000 years before bacteria were discovered to be the cause of infection. In c.50 AD, Dioscorides described honey as being “good for all rotten and hollow ulcers.” (Olaitan, et al., 2007) In Egypt, Assyria and ancient Greece honey was used for embalming dead bodies and preserving valuable seeds. In the Pyramids at Giza in Egypt “a child’s corpse was found preserved in a honey container. According to the historical data, the dead body of Alexander of Macedonia, who had died of malaria in Babylon (Asia), was transported to Macedonia in a coffin filled with honey to prevent its decomposition during the long travel. Similarly prevented from being decomposed were the corpses of Emperor Justinian and the ancient Spartan kings Agesipolis and Agesilaus.” (Mladenov, S., 1967)
Over the years, clear historical data has led me to suspect a link between embalming and regular, everyday practices of meat preservation (see my articles on this under salt). This means that honey has been used for meat preservation for millennia, corroborated by the writings of Rome and Greece. Aristotle (384–322 BC), for example, when discussing different honeys, referred to pale honey as being “good as a salve for sore eyes and wounds”. (Olaitan, et al., 2007) This is interesting. Mr. Roy Oliver from Woody’s Consumer Brands tells the story of a man he knew well, well into his 80s, who used to put a small amount of honey in his eyes every day and into advanced age never had a problem with his sight. Roy himself takes a teaspoon of organic honey every day and swears by its ability to stave off colds and flu.
From the perspective of history, honey should do very well in a processed meat formulation as a powerful natural preservative, apart from the taste benefit. Longevity and taste are exactly why the class of foods called processed meats exists. So far, so good.
THE RISK OF INFANT BOTULISM IN BABIES UNDER 1 YEAR OLD
We are meat curing professionals, and in considering honey as an ingredient in our curing mix to aid preservation, its efficacy must be judged against the priority organism in our world, namely C. botulinum. The clear evidence is that honey does not provide an effective hurdle against it. It is, in fact, only the spore-forming microorganisms that can survive in honey at low temperatures. “The spore count remained the same 4 months after. Bacillus cereus, Clostridium perfringens and Clostridium botulinum spores were inoculated into honey and stored at 25°C. The Clostridium botulinum population did not change over a year at 4°C.” (Olaitan, et al., 2007) This explains why C. botulinum poisoning, despite being very rare, still occurs from time to time through the ingestion of honey. Infants are particularly at risk. The fact is that “honey is a dietary reservoir of C botulinum spores for which there is both microbiological and epidemiological evidence. In order to minimise the risk of infantile botulism, it is recommended not to give honey to infants less than 1 year old. There is a widespread practice of administering honey or ‘ghutti’ (an herbal concoction mixed with honey) as a prelacteal feed to newborn babies among Asian families. In a study conducted in Pakistan, 15.6% of babies received honey as prelacteal feeds, often influenced by the elders in the family. A similar study from India reported most of the grandmothers and mothers believed in early feeding of newborn, within 2 h of delivery, by giving prelacteal feeds such as ghutti and honey.” (Abdulla, et al., 2012)
So not only does honey offer no barrier against spore-forming microorganisms, it contributes to illnesses such as infant botulism. This means that in designing any processed meat product with a long shelf life, to be sold at refrigeration temperatures, with honey as an antimicrobial agent and taste enhancer, one must take cured meat as the starting point. In my own considerations, I will choose either sodium nitrite or nitrate, if a long curing time will be used, or sal ammoniac, which I have found not only to be an excellent preservative but also to yield a nice pinkish-reddish cooked cured meat colour. The honey, I hope, will also mask the slight astringent taste resulting from a 4% inclusion level.
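Scaling a formulation like this is a simple proportion: the additive mass is the batch weight times the inclusion level. A minimal sketch, where the 25 kg batch size is chosen purely for illustration (the 4% level is the sal ammoniac figure mentioned above):

```python
def additive_mass_g(batch_kg: float, inclusion_pct: float) -> float:
    """Grams of an additive needed for batch_kg of product
    at inclusion_pct of total product weight."""
    return batch_kg * 1000.0 * inclusion_pct / 100.0

# e.g. a 25 kg batch at a 4% inclusion level
print(additive_mass_g(25.0, 4.0))  # → 1000.0
```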
Two questions follow. Can one dilute the honey, and can one heat it? How do these affect its viability for use in a cured meat formulation?
DILUTING HONEY AND THE IMPACT ON ITS PRESERVING ABILITY
The question is relevant because of honey’s price. It is reported that “if honey is diluted with water, it supports the growth of non-pathogenic bacterial strains and killing of dangerous strains. Solution of less than 50% honey in water sustained bacterial life for long periods but never exceeding 40 days. It has therefore been concluded that the probability of honey acting as a carrier of typhoid fever, dysentery, and various diarrhea infections is very slight.” (Olaitan, et al., 2007) The issue in the meat industry will be not only the pathogens but also the yeasts and lactic acid bacteria responsible for spoiling the meat. Tests will have to be conducted to determine the viability of diluted honey as a meat preservative.
HEATING THE HONEY AND THE IMPACT ON ITS PRESERVING ABILITY
Heating the honey to boiling before use in meat seems a logical step, since it will kill the yeast and other microorganisms naturally occurring in the honey. In producing certain hams, gammons, and bacon, standard production core temperatures range from 40 °C to as high as 71 °C or even 80 °C. Does this alter the characteristics of the honey in any way?
This question was studied in 1967 by the researcher Mladenov. He designed an experiment in which honey from various environments was compared, for preserving efficacy, against a control of 40% glucose and 30% fructose in a saline solution. The control was included to eliminate the effect of sugar as the only preserving component (factor) of honey. The following were tested: crop seeds (beans, barley, wheat, rye, and maize) and animal products (kidney, muscle, liver, fish, chicken eggs, frogs, snakes).
The actual experiment was then set up as follows. “They would pour honey into sterile glass dishes, adding thereto fresh seeds or animal organs from freshly slaughtered animals. Then they would close tightly the dishes with glass lids and leave them for a certain period of time in the office under ambient conditions.” (Mladenov, S., 1967)
To understand why honey has such a powerful preserving ability compared with the control, which had far less, he also tested honey “that had been previously heated to boiling.” He reports that “such honey has no preservative effect on the test organisms, which undergo quick decomposition therein. Besides, this type of honey turns sour within a very short period.” (Mladenov, S., 1967) This effectively rules out boiling the honey or heating it too much; although he does not give exact temperatures, he suggests this happens above 60 °C.
Let’s look more closely at the mechanisms of preservation in honey to gain insight into this.
MECHANISMS OF PRESERVATION
Honey has been reported to have an inhibitory effect on around 60 species of bacteria, including aerobes and anaerobes, gram-positives and gram-negatives. An antifungal action has also been observed for some yeasts and species of Aspergillus and Penicillium, as well as all the common dermatophytes. (Olaitan, et al., 2007)
“The numerous reports of the antimicrobial activities of honey have been comprehensively reviewed. Honey has been found in some instances by some workers to possess antibacterial activities where antibiotics were ineffective. Pure honey has been shown to be bactericidal to many pathogenic microorganisms including Salmonella spp, Shigella spp; other enteropathogens like Escherichia coli, Vibrio cholerae and other Gram-negative and Gram-positive organisms. High antimicrobial activity is as a result of osmotic effect, acidity, hydrogen peroxide and phytochemical factors.” (Olaitan, et al., 2007)
“The clearing of infection seen when honey is applied to a wound may reflect more than just antibacterial properties. Recent research shows that the proliferation of peripheral blood B-lymphocytes and T-lymphocytes in cell culture is stimulated by honey at concentrations as low as 0.1%; and phagocytes are activated by honey at concentrations as low as 0.1%. Honey (at a concentration of 1%) also stimulates monocytes in cell culture to release cytokines, tumour necrosis factor (TNF)-alpha, interleukin (IL)-1 and IL-6, which activate the immune response to infection. A wide range of MIC values (the minimum concentration of honey necessary for complete inhibition of bacterial growth) have been reported in studies comparing different honeys tested against single species of bacteria: from 25% to 0.25% (v/v); >50% to 1.5% (v/v); 20% to 0.6% (v/v); 50% to 1.5% (v/v).” (Olaitan, et al., 2007)
Honey’s Impact on Water Activity
“The osmotic effect of honey has been described. Honey is a supersaturated solution of sugars, 84% being a mixture of fructose and glucose. The strong interaction of these sugar molecules will leave very few of the water molecules available for microorganisms. The free water is measured as the water activity (aw). Mean values for honey have been reported, from 0.562 to 0.62.” (Olaitan, et al., 2007)
“Although some yeasts can live in honeys that have high water content, causing spoilage of the honey, the water activity (aw) of ripened honey is too low to support the growth of any species and no fermentation can occur if the water content is below 17.1%. Many species of bacteria are completely inhibited if water activity is in the range of 0.94 to 0.99. These values correspond to solutions of a typical honey (aw of 0.6 undiluted) of concentrations from 12% down to 2%(v/v). On the other hand, some species have their maximum rate of growth when the (aw) is 0.99, so inhibition by the osmotic (water drawing) effect of dilute solutions of honey obviously depends on the species of bacteria.” (Olaitan, et al., 2007) Besides this, Mladenov showed, through his experiments in which honey was heated and its inhibitory effect on spoilage severely diminished, that this accounts for very little of honey’s overall preserving effect. Even though it is generally true that sugar reduces water activity, it does not account for the specific preserving effect of honey.
“Honey is characteristically acidic with a pH of between 3.2 and 4.5, which is low enough to be inhibitory to many animal pathogens. The minimum pH values for growth of some common pathogenic species are Escherichia coli (4.3), Salmonella spp (4.0), Pseudomonas aeruginosa (4.4), Streptococcus pyogenes (4.5).” (Olaitan, et al., 2007) This undoubtedly contributes to its preserving ability, but it is not the full story. The question is not what the pH of the honey is, but to what degree it affects the pH of the meat it preserves; this becomes a matter of dosage, time and temperature, which of course will vary. Mladenov studied the impact of pH on the preserving ability of honey and found that even after neutralising the acids, it retained its preserving ability. He ascertained “that honey contains the following acids: formic, acetic, tartaric, citric, oxalic, phosphoric and the like.” He neutralized the acids “by connecting them with a carbonate radical (sodium bicarbonate) and caused alkali or neutral (pH 7) reaction in the honey”. He found that it retained its preservative properties. This shows that its preservative effect is not due to the acids alone, even though the acids probably play a small role. (Mladenov, S., 1967)
Ferments are fermenting agents or enzymes. “Honey contains the following ferments:
invertase, an enzyme “which disintegrates sucrose in the honey during maturation into two simple sugars – fructose and glucose, both of plant and animal origin – from nectar as a feedstock and from the bee gland;” (honeypedia)
“Hydrogen peroxide is produced enzymatically in honey. The glucose oxidase enzyme is secreted from the hypopharyngeal gland of the bee into the nectar to assist in the formation of honey from the nectar. The hydrogen peroxide and acidity produced by the reaction:
“Glucose + H2O + O2 → Gluconic acid + H2O2 serve to preserve the honey. On dilution of honey, the activity increases by a factor of 2500 to 50,000, thus giving “slow-release” antiseptics at a level which is antibacterial but not tissue damaging. Other workers have however shown a reduction in antibacterial activity of honey on dilution to four times.” (Olaitan, et al., 2007)
“Phytochemical factors have been described as non-peroxide antibacterial factors, which are believed to be many complex phenols and organic acids, often referred to as flavonoids. These complex chemicals do not break down under heat or light, nor are they affected by honey’s dilution. The stability of the enzyme varies in different honeys. There have been reports of honeys with stability well in excess of this variation, showing that there must be an additional antibacterial factor involved (i.e. one that does not break down under heat or light and is not affected by dilution). The most direct evidence for the existence of non-peroxide antibacterial factors in honey is seen in the reports of activity persisting in honeys treated with catalase to remove the hydrogen peroxide activity.” (Olaitan, et al., 2007)
“Several chemicals with antibacterial activity have been identified in honey by various researchers. Antibacterial activity of honey varies between different types of honey. It has been observed that there are different types of honey and a method has been used to determine the “inhibine number” of honey as a measure of their antibacterial activity. The “inhibine number” is the degree of dilution to which a honey will retain its antibacterial activity representing sequential dilutions of honey in steps of 5 percent from 25% to 5%. Major variation seen in overall antibacterial activity are due to variation in the level of hydrogen peroxide that arises in honey and in some cases to the level of non peroxide factors. Hydrogen peroxide can be destroyed by components of honey, it can be degraded by reaction with ascorbic acid and metal ions and the action of enzyme catalase which comes from the pollen and nectar of certain plants, more from the nectar.” (Olaitan, et al., 2007)
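The dilution ladder behind the “inhibine number” can be made concrete. The helper below is hypothetical (scoring conventions vary between authors); it simply counts how many successive 5% steps, from 25% honey down to 5%, still inhibit growth in an assay:

```python
# Dilution ladder from the description above: 25% honey down to 5%,
# in steps of 5%, strongest first.
DILUTION_STEPS = [25, 20, 15, 10, 5]

def inhibine_number(active: dict) -> int:
    """Count how many successive steps, starting at 25% honey, still
    inhibit growth in the assay. `active` maps honey concentration (%)
    to whether growth was inhibited at that dilution."""
    n = 0
    for conc in DILUTION_STEPS:
        if active.get(conc, False):
            n += 1
        else:
            break
    return n

# A honey that inhibits at 25%, 20% and 15%, but not at 10% or 5%:
assay = {25: True, 20: True, 15: True, 10: False, 5: False}
print(inhibine_number(assay))  # → 3
```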
“Although it appears that the honey from certain plants has better antibacterial activity than from others, there is not enough evidence for such definite conclusion to be justified because the data are from small numbers of samples. Thus it is important that when honey is to be used as an antimicrobial agent, it is selected from honeys that have been assayed in the laboratory for antimicrobial activity. It is also important that honeys for use as an antimicrobial agent be stored at low temperature and not exposed to light, so that none of the glucose oxidase activity is lost although all honey will stop the growth of bacteria because of its high sugar content.” (Olaitan, et al., 2007)
Hence, his “experiments with different types of honey show that its preserving effect on animal, vegetable and other products subject to spoilage is due to the presence of antibiotic substances (phytoncides) in honey, not just because of the high concentration of sugars therein.” (Mladenov, S., 1967) Mladenov is widely quoted even though I have not been able to find any other reference to phytoncides. The word phyton is an old English term from 1913, derived from the Ancient Greek φυτόν (phutón, “plant”), and -cide is a suffix meaning “killer of.” From this, it would seem that a better word had to be found, and it is understandably no longer in use. The concept, however, is still alive and well in modern scientific literature, namely antimicrobial and antiviral properties of plant origin.
An excellent review of the current research on the subject, and of its history and place in modern research, is Cowan, M. M. (1999). I list some of the important compound classes below.
Simple phenols and phenolic acids, such as caffeic acid, found in herbs such as tarragon and thyme, have been shown to be effective against viruses, bacteria, and fungi. Catechol (found in plants including onions and apples, in crude beet sugar, and in the leaves and branches of oak and willow trees) and pyrogallol (from Myriophyllum spicatum, a submerged aquatic plant native to Europe, Asia, and North Africa, which grows in still or slow-moving water) are both hydroxylated phenols, shown to be toxic to microorganisms.
Quinones are found throughout nature and are characteristically highly reactive. “In addition to providing a source of stable free radicals, quinones are known to complex irreversibly with nucleophilic amino acids in proteins, often leading to inactivation of the protein and loss of function. For that reason, the potential range of quinone antimicrobial effects is great. Probable targets in the microbial cell are surface-exposed adhesins, cell wall polypeptides, and membrane-bound enzymes. Quinones may also render substrates unavailable to the microorganism. As with all plant-derived antimicrobials, the possible toxic effects of quinones must be thoroughly examined.”
Flavones, flavonoids, and flavonols have already been discussed. I can add that flavonoid compounds exhibit inhibitory effects against multiple viruses.
The next group of natural bactericides and viricides are the alkaloids, which deserve far more detailed treatment than I am allowing for in this short article. This important class of naturally occurring chemical compounds contains mostly basic nitrogen atoms. It is suggested that plants may have evolved them as a defense mechanism against herbivores. One group of alkaloids is the pyrrolizidine alkaloids (PAs). Recent work indicates that PAs, at low levels and over a prolonged time period, pose a risk to the health of animals and humans. Mitchell reports, “PAs are (also) passed down the human food chain from various sources; certain herbal teas and honey contain large amounts of PAs. Long-term consumption of low levels of PAs in food can lead to liver cirrhosis and cancer.” (Mitchell, B. A.; 2016) This warrants further investigation.
Honey’s antibacterial properties on different microorganisms
“The empirical application of honey on open wounds, burns or use of honey in syrups does show that it stops the growth of many microorganisms. Many of these microorganisms have been isolated and identified.
Mundoi et al discovered that the antimicrobial activity of honey was more with Pseudomonas and Acinetobacter spp, both with resistance to some antibiotics like gentamicin, Ceftriazone, Amikacin and Tobramicin than other bacteria tested. This was attributed to inhibitory effect of ascorbic acid in honey on aerobic microorganisms. Staphylococcus aureus and Streptococcus spp were also found to be sensitive to honey.
Undiluted honey has been found to stop the growth of Candida spp while Clostridium oedemantiens, Streptococcus pyogenes remained resistant. Some species of Aspergillus did not produce aflatoxin in various dilutions of honey while honey has been found to stop the growth of Salmonella, Escherichia coli, Aspergillus niger and Penicillium chrysogenum.
Wounds infected with Pseudomonas, not responding to other treatment, have been rapidly cleared of infection using honey, allowing successful skin grafting. Obaseki et al found that Candida albicans strains are sensitive to honey while Obi et al reported the inhibitory effect of pure honey against local isolates of bacterial agents of diarrhea. At concentrations of 50% and above, honey excellently inhibited the growth of Escherichia coli, Vibrio cholerae, Yersinia enterocolitica, Plesiomonas shigelloides, Aeromonas hydrophila, Salmonella typhi, Shigella boydi and Clostridium jejuni.” (Olaitan, et al., 2007)
This, then, is a short introduction to the antimicrobial action of honey. There is enough evidence to support my first approach: using undiluted, natural (previously unheated) honey in my processed meat formulation, in conjunction with salt and possibly nitrite/sal ammoniac cured meat.
APPLICATIONS TO MEAT
Accurate instructions on how to prepare a ham with honey are “given in De re coquinaria – the one and only remaining cookbook, which was supposed to be written by a Roman gourmet Apicius. Reading the recipes, we can assume that sweet hams were very popular. To make the meat sweet, it was cooked in water with a large number of figs. The use of those fruits was recommended in all the recipes for hams in De re coquinaria, and the phrase ut solet (as usually) given in one of the recipes, shows that it was a common practice. Perna could be put into this kind of stock, flavoured with bay leaves. When it was almost soft, the skin was removed, the meat was cut partway and honey was poured inside. Next, it was wrapped in a pastry made from flour and olive oil, and baked in an oven. The dish was served hot.” (Zofia Rzeźnicka, et al, 2014)
This is exactly the basic strategy I am considering for my first honey based formulation. I plan to produce it this week and will report on the results.
Honey, previously unheated and undiluted, seems an excellent candidate for inclusion in a processed meat product. The product will no doubt be expensive due to the price of honey, but this also means that it will not be consumed every day over a long period, which is good given the presence of PAs in honey.
The Khoisan may have, according to their legends, introduced us to mead. Our ancestors progressed this to its use as a preservative. Later, as food became an art in various parts of the world, various dishes with honey in them became the food of nobility and rulers. It is an exciting prospect to continue this great tradition and work with honey as a component of a processed meat formulation.
Abdulla, C. O., Ayubi, A., Zulfiquer, F., Santhanam, G., Ahmed, M. A. S., & Deeb, J. (2012). Infant botulism following honey ingestion. BMJ Case Reports, 2012, bcr1120115153. http://doi.org/10.1136/bcr.11.2011.5153
Cowan, M. M. (1999). Plant Products as Antimicrobial Agents. Clinical Microbiology Reviews, 12(4), 564–582.
Mitchell, B. A.. 2016. Pyrrolizidine Alkaloids in Honey. Agricultural and Food Chemistry. August 1, 2016.
Mladenov, S. (1967). The Preservative Effect of Honey. Pchelarstvo Magazine, issue 12/1967.
Olaitan, P. B., Adeleke, O. E., & Ola, I. O. (2007). Honey: a reservoir for microorganisms and an inhibitory agent for microbes. African Health Sciences, 7(3), 159–165.
Zofia Rzeźnicka, Maciej Kokoszko, Krzysztof Jagusiak (Łódź). 2014. Cured Meats in Ancient and Byzantine Sources: Ham, Bacon and Tuccetum. Studia Ceranea 4, p. 245–259.
Joshua Penny and Khoikhoi Fermentation Technology at the Cape of Good Hope
By Eben van Tonder
10 March 2018
Fermentation is used both to produce alcohol and to preserve meat. My interest in its history was first sparked when I read the sixty-page pamphlet titled “The Life and Adventures of Joshua Penny,” published in 1815. He is “arguably the most fascinating character to have lived like a Robinson Crusoe in the caves of Table Mountain, allegedly for 14 months.” (geocaching) I am a passionate student of food and food science, and Penny’s account of his adventures offers intriguing glimpses into ancient foods and food science as employed at the Cape of Good Hope, and indeed around the world.
Joshua Penny was an American, born into a poor family on Long Island, New York, on 12 September 1773. He started out as a youngster trying his fortune at sea, but his independence was cruelly terminated when he was “impressed” into the British Navy. Impressment was an unpopular and harsh system of forced recruitment that allowed the Royal Navy to compel able-bodied men, including American seamen, to work as crew on British warships. “The system was justified and maintained at the time as a necessary means to ensure the strength of the British navy and the survival of the British Empire.” (geocaching)
He soon developed an enduring hatred for the British. In June 1795, as an impressed sailor, the unhappy Joshua became an incidental participant in the first British occupation of the Cape. Soon after landing, he and others succeeded in escaping from the British. The Dutch defenders of the Cape welcomed them “with Constantia wine and Mutton tails of the best quality.” (geocaching) When asked why they deserted, they answered that they had been impressed into service and wished to return home. After a royal time at the Cape of Good Hope, while the Dutch were still in control, the governor gave them advance warning of his intention to surrender to the newly reinforced British. They left the Cape, well stocked with supplies, courtesy of the governor, and moved from farmhouse to farmhouse.
Calabashes to store water and brandy for visitors
Penny gives the first important clue as to the technology of the local people when he writes, “water was so rarely found, that we took that in calabashes.” (Penny, 1815: 18) He also mentions that whenever they knocked on farmhouse doors, the occupants came to the door with wine or brandy, even though they did not dare entertain them. This is, of course, the first mention of what I believe was technology common at the Cape. The Europeans brought the technology to make wine and brandy, but soon we will learn about local indigenous technology for producing something similar.
Penny and his mates arrived at a location described as being 100 miles from the Cape at the head of the Klanvis River, 50 miles from where it reaches the sea. After doing some work for a farmer, news arrived from the Cape that “the English had possession of the district, and that if any inhabitant while under English laws, should entertain a deserter, he should be transported to Botany Bay for life. The farmer told Vanderwiet and Penny that he was in such dread of the British tyrannical laws, he dared not entertain them any longer, but advised them to travel into the interior, as long as inhabitants were to be found.” (Penny, 1815: 20)
The Peoples of the Cape
Penny’s tale now becomes enlightening as far as the technology of the indigenous people is concerned. He encounters two groups at the Cape: the San (Bushmen) and the Khoikhoi (Hottentots). The San, or Bushmen, are the famous nomadic hunter-gatherers of Southern Africa, while the Khoikhoi, or Hottentots, migrated south with their herds of fat-tailed sheep from as far north as Zambia or, some suggest, even East Africa. Some scholars maintain a single ancestry for the two peoples, and their footprints can be seen across the Southern African region dating back many thousands of years. Their dietary practices offer an important glimpse into prehistoric food.
Penny’s tale where he introduces these people is a shocking glimpse into frontier life, but that is a subject for a different discussion on a separate forum. He writes that “the Bosjesmen (Bushman) was at that time making inroads on the frontiers, and Penny’s small company was very acceptable to the Dutch party who were forming to act on the defensive, at Cold Bokkeveld. People at this place very willingly entertained them and joined them on their march to attack the enemy’s camp,” i.e. the Bushmen. The Bushmen had murdered a woman and her children, and in turn the frontier farmers, along with some 40 or 50 Khoikhoi, hunted the Bushmen down, killed the men and women and kidnapped the children. Penny said that during their pursuit of the Bushmen they followed old game paths for about three weeks. He described the Khoikhoi as people who “rode on bullocks and subsisted on flour conveyed in sacks, wild honey, and roots resembling American ground nuts.”
This gives us the first interesting glimpse into the dietary habits of the Khoikhoi (Hottentot). Neil Rusch, an expert in matters pertaining to archaeology and literature, and a mead producer extraordinaire, offers the following insights on the possible identity of the nuts. He writes, “‘the roots resembling American groundnuts’ are likely to be an assortment of geophytes of the iris, or Iridaceae, family. Residues of these plants, corm casings and stems particularly, are found in archaeological contexts, suggesting that they provided a significant source of carbohydrate.” (personal correspondence with Neil Rusch)
Penny eventually returns to Cape Town, only to be imprisoned on the suspicion of being a mutineer. Fearing the penalty of death, he steadfastly denies all such charges, claiming to be Jonas Inglesburg. Eventually the Fiscal is ordered by the admiral to ship those who did not confess on board the first merchant ship to arrive.
He ends up serving on various warships and in various campaigns, always looking for an escape back to land. He became a crewman on the HMS Sceptre. “Historical records show that the Sceptre entered Table Bay during October 1799. This means that Penny was 26 years old at this time.” (geocaching) Here he picked a fight with a bully when the American crewmen celebrated their 4th of July independence day. He saw a perfect opportunity to feign injury so as to make it to land and escape, which is exactly what he did. He escaped to Table Mountain, where he lived in several caves and became one of the people in history to have survived longest on this inhospitable mountain without apparent support from the Cape Town community. There is an account in the records of the Mountain Club of South Africa of slaves who wandered around at the top for a while before returning due to the lack of food. There were many accounts of slaves escaping to the mountain, but they all lived lower down and frequently made it back to town to get provisions, either stealing them or being supplied by fellow slaves or well-wishers.
Joshua “resolved that he would rather be “breakfast for a lion” than be taken on another floating dungeon. He mentions encountering goats, antelopes, hyenas, leopards, and baboons during his climb to the summit which took him more than four days. He then took up residence in view of the Western Ocean in a cavern near a spring of good water.” (geocaching)
Dried Meat and Honey Mead
Joshua was better equipped than most after the time he spent with the Khoikhoi, who taught him their field craft. Soon after his escape to the mountain, he discovered that wild honey was plentiful, and the Khoikhoi had taught him how to retrieve it. He worked out how to kill game by forcing them off a cliff. The skins he used to cover himself; the meat he cut into thin strips and hung on sticks which he put into “crevices in his habitation.” This is a well-known Bushman and, I am sure, Khoikhoi way of drying meat.
The dried meat was first boiled again before it was consumed, something which seemed to have been the practice with dried meat early on. I encountered this still being practiced to this day in Nepal. He writes that “while among the Hottentots he had learned their method of making a very pleasant beverage resembling metheglin,” a spiced variety of mead. He reports that he was “fortunate enough to find an old hollow tree, which he cut off with his knife, and seized a green hide on one end for a bottom. Into this tub honey and water was put to stand twenty-four hours; then was added some pounded root to make it ferment. This root, in use among the frontier Hottentots, does not resemble any of his acquaintances in America but makes an excellent drink in this preparation.”
It is at this point that Prof. Kevin Dunn’s Caveman Chemistry comes in handy to explain the technology involved. He writes that “honey is a complex, concentrated solution of sugars, mostly glucose and fructose, the solutes, which add colour, flavour, and aroma to the solvent in which they are dissolved,” which, in the case of Joshua Penny, happens to be water. Honey in its pure form does not spoil, since, generally speaking, microorganisms cannot live in a sufficiently concentrated solution. Honey is simply too concentrated to spoil.
Add some water to it and it begins to spoil as microorganisms begin to eat the sugar. “Most animals need air to live, and when they eat sugar, they excrete water and carbon dioxide. Yeast, however, is able to live aerobically and anaerobically, with or without oxygen. When there is plenty of air, they digest sugars aerobically as most other organisms do. But in the absence of air, they are able to partially digest sugars in aqueous solutions, excreting carbon dioxide, as usual, but instead of excreting water, they excrete ethanol (ethyl alcohol).” (Caveman Chemistry, p. 54-56) The resultant solution is called mead.
The maturation of a mead depends in large part on the concentration of honey in the original solution, which is called the must or wort. Let’s describe what happened as follows. Joshua watered the honey down and placed it in his makeshift container, which he left for 24 hours. Let’s assume that he used a lot of water. The yeast find themselves in yeast heaven, with plenty of sugar to eat, but not so concentrated that they cannot thrive, and with plenty of oxygen to breathe. They produced carbon dioxide and water from the glucose and oxygen, which allowed the yeast to multiply. After a day, he sealed the container with some pounded root. This cut off the oxygen supply. What happens now is that the oxygen, which the yeast continue to use, runs out before the sugar does, and the yeast move into anaerobic mode, consuming glucose (but not oxygen) and producing carbon dioxide and ethanol. Carbon dioxide is a gas, and pressure builds in the makeshift vessel. If there were a way to let the gas escape without allowing oxygen back in, a dry, non-sweet mead would be produced, since the sugar is effectively all used up in the process.
This, however, is not exactly what I suspect happened. He had no access to a fancy fermentation lock, which is effectively a one-way valve that allows gas to escape but no air to return into the container. Joshua would not have been very liberal with the water he added, which meant that “the yeast reproduced more slowly because the concentration of honey is higher. As the sugar is consumed, the alcohol concentration rises, eventually to a level which is toxic even to yeast, which is, in effect, stewing in their own juices. They die and fall to the bottom, and under these conditions, a sweet mead results because of the leftover sugar. The sweet mead is more alcoholic than the dry mead because all the sugar that can be converted to alcohol will have been.” (Caveman Chemistry, p. 56, 57)
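Dunn’s dry-versus-sweet distinction can be put into rough numbers. The sketch below is my own illustration, not from the source: it assumes honey is about 80% fermentable sugar by mass, that fermentation yields about 0.511 g of ethanol per gram of sugar (from C6H12O6 → 2 C2H5OH + 2 CO2), and a yeast alcohol tolerance of roughly 12% by weight.

```python
# Rough sketch of the dry-vs-sweet mead outcome described above.
# All figures are assumptions for illustration: honey ~80% fermentable
# sugar, ethanol yield ~0.511 g per g of sugar, yeast stalling once
# alcohol reaches its tolerance (~12% by weight here).

def mead_outcome(honey_g, water_g, tolerance_wt_pct=12.0):
    sugar_g = 0.80 * honey_g
    must_g = honey_g + water_g
    # Alcohol if every gram of sugar were fermented out:
    potential_etoh_g = 0.511 * sugar_g
    potential_wt_pct = 100.0 * potential_etoh_g / must_g
    if potential_wt_pct <= tolerance_wt_pct:
        return ("dry", potential_wt_pct)
    # Yeast die at their tolerance; leftover sugar keeps the mead sweet.
    return ("sweet", tolerance_wt_pct)

print(mead_outcome(200, 1000))   # dilute must: ferments out, finishes dry
print(mead_outcome(600, 1000))   # honey-rich must: stalls sweet
```

A thin must ferments out completely and finishes dry at a modest strength; a honey-rich must stalls at the yeast’s tolerance and stays sweet, which matches Dunn’s description of what a liberal hand with the honey produces.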
Penny’s mention of the pounded root, used to seal the wine and water in and to aid the fermentation, is fascinating. I wondered what this could have been and whether I would be able to identify it.
After quite a bit of research, I was finally put on the right track by Neil Rusch, who introduced me to a Namaqua root called “bierwortel” which, according to Willem Steenkamp in his Land of the Thirst King, has a burning taste if taken raw but is a good stand-in for yeast. (personal correspondence with Neil Rusch) This led me to the work of Skead (2009), who summarised the historical record of plants as mentioned by travelers. In these lists, he has a section dedicated to people who traveled through Namaqualand between 1661 and 1877, and here he offers the solution to my question. One of the roots he mentions was called “moerwortel.” Its scientific name is Glia prolifera (Burm.f.) B. L. Burtt (as Peucedanum gummiferum (L.) Wijnands). It is described in a 1794 record by C. P. Thunberg as “an umbelliferous plant, the root of which, dried and reduced to powder, they mix with cold water and honey in a trough, and after letting it ferment for the space of one night, obtain a species of Mead.” (Skead, 2009: p. 59 and DSAE) In the Khoisan language, it was called gli /ɡli(ː)/.
In 2009 it was shown that there are altogether three species of Glia. Glia prolifera occurs on Table Mountain and is in all likelihood the root that Penny used in his honey fermentation. The second is Glia decidua, which occurs in the Koue Bokkeveld, where Glia prolifera is also found to a lesser extent. The third is Glia pilulosa, but it is only found much further to the east (Van Wyk, et al, 2009) and is probably not what Penny encountered.
“Bierwortel” of Willem Steenkamp is nevertheless still a plant of interest, but I think we have nailed the exact plant used by Penny on Table Mountain. All that remains now is some great fieldwork to go and find the plant for myself on Table Mountain. I am planning to hike to the most likely caves where Penny made his abode and will be looking for Glia in the area of the caves.
Joshua Penny lived the most amazing existence for at least 14 months before returning to False Bay. He learned that “a strong north-westerly gale hit Table Bay shortly after his escape and the Sceptre sank with terrible loss of life at Woodstock Beach on 5 November 1799.” This extraordinary young man became one of the first people to fire a torpedo back home in the 1812 British war with America. Similarly to the shocking reality of frontier life, this is a discussion for another day. 🙂 For our purposes, the pictures he paints from his adventures introduced us to ancient meat preservation and the Khoikhoi technology to produce alcohol from honey. Like Joshua Penny himself, this is legendary and inspires recipes and many of my own adventures.
Dunn, K. M. 2003. Caveman Chemistry. Universal Publishers.
Penny, J. 1815. The Life and Times of Joshua Penny. Published by the author.
Skead, C. J. 2009. Historical plant incidence in Southern Africa. SANBI.
Van Wyk, B-E., Tilney, P. M. and Magee, A. R. 2009. A revision of the genus Glia (Apiaceae, tribe Heteromorpheae). South African Journal of Botany 76 (2010): 259-271.
Also, see Bacon & the Art of Living, Chapter 11.08: Erythorbate
In meat curing, sodium erythorbate [E316] (C6H7NaO6) functions as an antioxidant: it increases the rate at which nitrite is reduced to nitric oxide, which in turn reduces the amount of residual nitrite left in the meat once curing has sufficiently taken place. In the modern curing plant, speeding up the formation of nitric oxide from nitrite is important because it shortens the curing time, but far more important than this, it reduces the nitrite levels left in the meat after curing. Nitrite itself, at the minuscule levels used in meat curing, is not dangerous to human health, but “unreacted” nitrite forms N-nitrosamines during frying and in the stomach, and these have been linked to the development of various cancers. The meat industry responded to this by including either ascorbate (vitamin C) or erythorbate in curing mixes as antioxidants. Including either ascorbate or erythorbate in bacon is one of the ways that bacon is changed into a safe food. (1) (see Regulations of Nitrate and Nitrite post-1920: the problem of residual nitrite)
“Ascorbic acid and erythorbate have been widely used in recent years to improve the color of cured meats. Watts and Lehman (1952a) found that 0.1 percent ascorbic acid added to meats caused better color development when the meat was heated at 70°C. or frozen at -17°C. These workers (1952b) observed that hemoglobin did not react with ascorbic acid in the absence of oxygen. Ascorbic acid reduced methemoglobin and promoted the reduction of nitrite to nitric oxide. In the presence of oxygen, an undesirable side reaction occurred in which the green pigment choleglobin was formed. According to Hollerbeck and Monahan (1953), the beneficial effect of ascorbic acid in curing meat is due to the reduction of nitrogen dioxide to nitric oxide. Kelley and Watts (1957) observed that cysteine, ascorbic acid, and glutathione were capable of promoting the formation of nitric oxide hemoglobin, regenerating this pigment on surfaces of faded meat and protecting surfaces of cured meat from fading when exposed to light.” (Cole, 1961) (Reaction Sequence)
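To get a feel for the quantities involved, the stoichiometry can be sketched as follows. The assumptions are mine, not from the sources above: erythorbate is taken as a two-electron reductant, and each nitrite ion needs one electron on its way to nitric oxide, so one mole of erythorbate can in principle reduce two moles of nitrite. Real brines are dosed well away from exact stoichiometry, so this is only an order-of-magnitude illustration.

```python
# Back-of-envelope stoichiometry for the nitrite -> nitric oxide step.
# Assumed (illustrative) basis: one mole of erythorbate donates two
# electrons and so can reduce two moles of nitrite.

M_NANO2 = 69.00          # g/mol, sodium nitrite
M_ERYTHORBATE = 198.11   # g/mol, anhydrous sodium erythorbate

def erythorbate_needed(nitrite_ppm):
    """ppm of sodium erythorbate to reduce the given ppm of NaNO2."""
    mol_nitrite = nitrite_ppm / M_NANO2
    return (mol_nitrite / 2.0) * M_ERYTHORBATE

# A 120 ppm nitrite cure would need roughly 172 ppm of erythorbate
# for complete stoichiometric reduction.
print(round(erythorbate_needed(120), 1))
```

The point of the ratio is simply that a modest erythorbate addition, on the same order as the ingoing nitrite, is already enough on paper to account for a large cut in residual nitrite.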
Erythorbate is many times cheaper than ascorbate, which makes it a favourite for inclusion in curing brines and accounts for its widespread use. Chemically, sodium erythorbate is the sodium salt of erythorbic acid. We will use erythorbate in this article to refer to either of the two forms.
Erythorbic acid is a stereoisomer of ascorbic acid (vitamin C), meaning that the two compounds differ only in the spatial arrangement of their atoms. It was previously called isoascorbic acid or D-araboascorbic acid. (Walker, R.)
I wondered where and how this compound was discovered and if it occurs naturally.
The isolation and identification of vitamin C, or ascorbic acid, at the beginning of the 1930s was a big deal. It solved the riddle of the anti-scurvy agent which had eluded humans for centuries (see Concerning the Discovery of Ascorbate), and after its identification, science moved to learn everything there was to know about it. The most urgent question now became its synthesis, which would allow it to be produced in massive quantities in the cheapest possible way, offering untold wealth for those who would achieve this.
The priority was justified. Scurvy was a widespread, universal problem, not just for the navy. Notes by Tamango Ltd., On the Prevention of Scurvy Among Native Workers; Oranges and Orange Juice, published in May 1936, deals with the prevalence of scurvy among the native population and its impact on industries like gold and diamond mining in the Free State, Gauteng and Northern Cape, and the sugarcane industry in Natal. It even deals with scurvy and its widespread occurrence among native school children. As a solution to the vitamin C deficiency which, according to the authors, hampered the growth of the economy of the Union of South Africa, it proposed the mass production of citrus fruit and, instead of exporting it all, making it available to industry to feed its workforce.
Apart from being highly informative on another subject of great interest to me, namely the traditional diets of native populations across the world and the negative impact of colonisation on these societies, it highlights the priority that vitamin C had in our world in the late 1800s and early 1900s, in the context of finding the best and cheapest available source, especially after its isolation and identification in the early 1930s. (2)
ARABO-ASCORBIC ACID OR ERYTHORBIC ACID
In 1933 and 1934, researchers synthesised d- and l-ascorbic acid and prepared synthetic analogues. One of these they called arabo-ascorbic acid (from arabinosone). The product was synthetically derived from an osone (an osone is a compound that contains two alpha carbonyl groups and is obtained by hydrolysing an osazone). (Ault, 1933 and Baird, 1934) Their work followed a method which was, by this time, well established in the development of new pharmaceutical medicines. According to this method, an initial compound, referred to as the lead compound, is identified. This compound possesses the activity of interest, but may also have undesirable characteristics such as toxicity, an unsuitable half-life or poor availability (in vivo). A process is then embarked on to develop and analyse analogues of the lead compound and to evaluate their different characteristics. (Gutte, B. (Ed.); 1995: 396)
It seems that the German chemists Kurt Maurer and Bruno Schiedt were, in August 1933, the first researchers to have synthesised erythorbic acid. (Maurer and Schiedt, 1933) It is estimated to be only one-twentieth as effective as ascorbic acid (Daniel and Munsell; 1937: 6) and is not capable of preventing scurvy, but it nevertheless possesses great reducing power, which, together with its ease of production, makes it ideal for industrial application.
The researchers Takahashi et al. became the first to report on the production of D-araboascorbic acid by a Penicillium mold. This was important since it showed that erythorbate is indeed a naturally occurring product. They refer to Isherwood et al., who also found it in cress seedlings in D-altrono-γ-lactone solution and in the urine of rats injected with D-mannono-γ-lactone.
A strain of Penicillium which they isolated from soil produced D-araboascorbic acid (erythorbic acid) from D-glucose, D-gluconic acid and sucrose. (Takahashi, T., et al. 1960)
Today it is synthesized by a reaction between methyl 2-keto-D-gluconate and sodium methoxide. The methods of synthesizing it from sucrose or by strains of Penicillium are still in use.
The exact meaning of the prefix “erythor-” eludes me. A link with the similar-sounding prefix “erythro-”, meaning red, would have made a tidy connection with meat curing, but several sources refute any such connection.
Dr R. Walker, Professor of Food Science, Department of Biochemistry, University of Surrey, England, offers the following information: “Erythorbic acid (syn: isoascorbic acid, D-araboascorbic acid) is a stereoisomer of ascorbic acid and has similar technological applications as a water-soluble antioxidant. This compound was previously evaluated under the name isoascorbic acid; at the last evaluation an ADI of 0-5 mg/kg b.w. was allocated, based on a long-term study in rats, and a toxicological monograph was prepared. The name was changed to erythorbic acid in accordance with the “Guidelines for designating titles for specifications monographs.” (Dr. R. Walker)
This is only a brief introduction. In my experience, there is little difference in curing time between the use of erythorbate or ascorbate and the price difference is material. I have noticed that some producers include a mixture of both compounds in their brine preparations. I would love to know the exact reason why this is done and to have a look at the data. Another aspect of great interest is its characteristic as a potent enhancer of nonheme-iron absorption. This is, however, again, outside the realm of meat curing and will have to happen in a different format.
The document from Wits mentions Pryde (1931), who quotes Herbert Spencer’s “Study of Sociology” (1880) concerning the early use of citrus juices for the cure of scurvy: “It was in 1593 that sour juices were first recommended by Albertus, and in the same year Sir R. Hawkins cured his crew of scurvy by lemon-juice. In 1600, Commodore Lancaster, who took out the first squadron of the East India Company’s ships, kept the crew of his own ship in perfect health by lemon-juice, while the crews of the accompanying ships were so disabled that he had to send his own men on board to set sails. In 1636, this remedy was again recommended in medical works on scurvy. Admiral Wagner, commanding our fleet in the Baltic in 1726, once more showed it to be a specific. John Woodall, in 1628, used lemon-juice for the treatment of scurvy, and gave a full description in his ‘Viaticum, being the Pathway to the Surgeon’s Chest’ (1628).” The virtues of orange juice for scurvy date back to 1671, when Venette considered that orange and lemon juice contained “something which was directly opposed to the causes of scurvy,” cited by Browning (1931). ‘Vitamins’: Special Report Series No. 167 (1932), Medical Research Council, London, contains an interesting account of the experience of Lind (1747). Lind had twelve scurvy patients on his hands on board the “Salisbury” at sea, on the 20th May, 1747.
“They all in general had putrid gums, the spots, and lassitude, with weakness of the knees … and had one diet common to all, viz., water-gruel sweetened with sugar in the morning, fresh mutton-broth often times for dinner, at other times light-puddings, boiled biscuit and sugar, etc., and for supper barley and raisins, rice and currants, sago and wine or the like.” Lind treated two each of his patients with (1) cyder, (2) elixir vitriol, (3) vinegar, (4) sea-water, (5) an electary composed of garlic, mustard seed, radaphan, balsam of Peru, and gum myrrh, and (6) “two oranges and one lemon given them every day.” Supplies of oranges and lemons lasted for six days. “The consequence was, that the most sudden and visible good effects were perceived from the use of oranges and lemons; one of those who had taken them being at the end of six days fit for duty. The spots were not indeed quite off his body, nor his gums sound; but, without any other medicine, than a gargarism of elixir vitriol, he became quite healthy before we came to Plymouth, which was on the 16th June. The other (on the orange and lemon ration) was the best recovered of any in his position, and, being deemed pretty well, was appointed nurse to the rest of the sick.” (historicalpapers.wits.ac.za) Since these early days of scurvy on land and sea, the juice of citrus fruits has always been regarded as the more efficacious remedy, modern science confirming its merits as the best and cheapest of anti-scorbutics. At sea today, by an Order in Council (Statutory Rules and Orders, 1927, Merchant Shipping), provision is made for the issue of orange juice (concentrated orange juice containing not less than 70% of total solids) at the rate of 1½ fl. ozs. mixed with six times its volume of water. (historicalpapers.wits.ac.za)
Ault, R. G., Baird, D. K., Carrington, H. C., Haworth, W. N., Herbert, R., Hirst, E. L., Percival, E. G. V., Smith, F. and Stacey, M. 1933. Synthesis of d- and of l-ascorbic acid and of analogous substances. J. Chem. Soc., 1933, 1419-1423. http://dx.doi.org/10.1039/JR9330001419
Baird, D. K., Haworth, W. N., Herbert, R. W., Hirst, E. L., Smith, F. and Stacey, M. 1934. Ascorbic acid and synthetic analogues. J. Chem. Soc., 1934, 62-67. http://dx.doi.org/10.1039/JR9340000062
Daniel, E. P. and Munsell, H. E. 1937. Vitamin Content of Foods. United States Department of Agriculture.
Maurer, K. and Schiedt, B. (August 2, 1933) “Die Darstellung einer Säure C6H8O6 aus Glucose, die in ihrer Reduktionskraft der Ascorbinsäure gleicht (Vorläuf. Mitteil.)” (The preparation of an acid C6H8O6 from glucose, which equals ascorbic acid in its reducing power (preliminary report)), Berichte der deutschen chemischen Gesellschaft, 66 (8): 1054-1057. (http://onlinelibrary.wiley.com/doi/10.1002/cber.19330660807/pdf)
Maurer, K. and Schiedt, B. (July 4, 1934) “Zur Darstellung des Iso-Vitamins C (d-Arabo-ascorbinsäure) (II. Mitteil.)” (On the preparation of iso-vitamin C (d-arabo-ascorbic acid) (2nd report)), Berichte der deutschen chemischen Gesellschaft, 67 (7): 1239-1241.
Takahashi, T., Mitsumoto, M. and Kayahori, H. 1960. Production of D-Araboascorbic Acid by Penicillium. Nature, volume 188, pages 411-412 (29 October 1960).
Takahashi, T., Mitsumoto, M. and Kayahori, H. 1960. The Production of D-Araboascorbic Acid by a Mold. Bull. Agr. Chem. Soc. Japan, Vol. 24, No. 5, p. 533-534, 1960.
The Sal Ammoniac Project
by: Eben van Tonder
4 February 2018
Studying curing salts from antiquity, I realised that sal ammoniac (ammonium chloride) was probably the first curing agent with universal appeal, traded across Europe, China, India, and Africa.
This remarkable mineral was harvested naturally from mountain caves, from Tibet and the Smoky Mountains of Turfan to the mountains surrounding Samarkand; from the Siwa Oasis in ancient Libya, where the famed temple of Amun stands, to, I am sure, the Danakil Desert in Ethiopia, which has the same climate and volcanic vents found at the Smoky Mountains of Turfan. From these locations it was traded into Salzburg, the heart of Europe, the Mediterranean, and eastern China and India.
On paper, ammonium chloride (NH4Cl) should cure meat as fast and effectively as saltpeter, but what can we learn from interacting with the salt in real life?
I embarked on this fascinating journey after Christmas 2017 with two very small experiments to start learning more about the practical application of this ancient curing salt.
Does it drop the temperature of meat sufficiently to assist in curing?
The first experiment was to see if it would drop the temperature sufficiently to assist in curing, as I have speculated in earlier articles. Without running the equations, I wanted to see how much it actually reduces the temperature of water for a given quantity of the salt. Why think if you can test? So, one day after work, I bought a small milligram scale from a Chinese retailer in Kraaifontein, picked up a bag of ammonium chloride and had some fun at home. I recruited an assistant (my daughter, Lauren), took some shot glasses out of the cupboard, and the science experiment was underway.
To test it, I wanted to see what happened in water first.
The water temperature started at 22.6 deg C.
We added 1% ammonium chloride and it dropped to 22.3 deg C.
4% – 20.4 deg C.
6% – 19.2 deg C.
8% – 18.1 deg C.
10% – 17.5 deg C.
20% – 11 deg C.
30% – 6.9 deg C.
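The measured drops in water can be roughly checked against the endothermic heat of solution of ammonium chloride. The figures in the sketch below are my own assumptions, not measurements from the experiment: a heat of solution of about +14.8 kJ/mol, a molar mass of 53.49 g/mol, the solution's specific heat taken, crudely, as that of water, and the percentages read as grams of salt per 100 g of water.

```python
# Sanity check of the measured cooling using the (endothermic) heat of
# solution of NH4Cl. Assumed values, not from the experiment:
# dH_soln ~ +14.8 kJ/mol, molar mass 53.49 g/mol, specific heat of the
# solution taken as water's 4.18 J/(g*K).

DH_SOLN = 14800.0   # J/mol absorbed on dissolving
M_NH4CL = 53.49     # g/mol
CP = 4.18           # J/(g*K), crude value for the solution

def temp_drop(pct, water_g=100.0):
    """Predicted temperature drop (K) for pct g of NH4Cl per 100 g water."""
    salt_g = pct / 100.0 * water_g
    heat_absorbed = salt_g / M_NH4CL * DH_SOLN           # J taken from the water
    return heat_absorbed / ((water_g + salt_g) * CP)     # K

# At 30 g per 100 g water the model predicts a drop of roughly 15 K;
# the measured drop was 22.6 -> 6.9 deg C, i.e. about 15.7 K.
print(round(temp_drop(30), 1))
```

The simple model lands close to the observed 30% figure, which suggests the cooling really is just the heat of solution at work rather than anything more exotic.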
I tested it on meat, rubbing on enough to cover the meat with a thick crust, but this had a negligible impact on the meat temperature. I was, therefore, wrong to suspect that applying sal ammoniac would reduce the meat temperature. Of course, it could still have been used as a coolant: a container with meat being cured could be placed into a sal ammoniac solution to reduce the temperature of the meat.
Next came the even more fundamental question of its efficacy in curing meat. Will it preserve the meat, and does the characteristic cooked/cured pinkish/reddish colour develop? What does it taste like? (Matters of its toxicity will be considered later.)
Does it cure meat?
On 27 Dec 2017, I selected a belly which I cut into three pieces. One piece I treated with sal ammoniac (2%) and sodium chloride (4%): sample 1. The second I treated with sal ammoniac (0.125%) and salt (1.5%): sample 2. Sample 3 I treated with Prague powder (0.25%) and salt (1.5%).
Sample 1, the 2% sal ammoniac, 4% NaCl, I placed in a plastic bag with no vacuum. Sample 2, the 1.5% NaCl and 0.125% ammonium chloride, I vacuumed sealed. The 1.5% NaCl and 0.25% Prague powder, I sealed in a vacuum bag. The meat temperature was at 7 deg C.
The sodium chloride and sal ammoniac were weighed, mixed together and hand-rubbed onto the meat. It was stored in a chiller where a 7 deg C temperature was maintained throughout the month of testing and observations.
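For anyone scaling the dry rubs above to a different piece of meat, the quantities are simply percentages of the meat's weight. The helper below is my own illustration; the function name and the 1.5 kg belly weight are assumptions, not details from the experiment.

```python
# Scale the dry-rub recipes above to a given piece of meat.
# Percentages are taken, as in the experiment, as a fraction of the
# meat's weight.

def rub_weights(meat_g, **salts_pct):
    """Return grams of each curing salt for a piece of meat."""
    return {name: meat_g * pct / 100.0 for name, pct in salts_pct.items()}

# Sample 1: 2% sal ammoniac + 4% NaCl on an assumed 1.5 kg belly piece.
print(rub_weights(1500, sal_ammoniac=2.0, nacl=4.0))
# Sample 3: 0.25% Prague powder + 1.5% NaCl.
print(rub_weights(1500, prague_powder=0.25, nacl=1.5))
```

Weighing the salts out per piece like this, rather than mixing a single batch, keeps the small Prague powder and sal ammoniac doses accurate on a milligram scale.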
By 10 Jan, 14 days after curing, the ammonium chloride treated meat, in my estimation, had the brightest cured colour.
Here is sample 1, the ammonium chloride that was not vacuum packed.
Sample 2, the vacuum packed ammonium chloride.
By 15 Jan 2018, 19 days after the test began, the meat cured with ammonium chloride changed colour. This change of colour was first observed two days earlier on the 13th of Jan, but by the 15th, it was clearly noticeable.
Compare the bright colour of the nitrite cured sample and the vacuum packed ammonium chloride sample.
And the colour of the non-vacuumed ammonium chloride cured sample.
On 26 Jan 2018, I opened the vacuum packed samples.
Sample 1. The meat exposed to the air turned a woody brown, due to oxidation, no doubt. Despite the browning on the outside, the meat in the centre was still pink, and it had a pleasant fresh-meat smell. The fresh meat smell was so distinct that I called in a colleague to confirm it, in case my senses were tricking me into smelling what I wanted to smell. This shows that in this instance, 2% sal ammoniac and 4% sodium chloride, not under vacuum, stored at 5 deg C, was sufficient to preserve the meat very well for a month.
Sample 2. The vacuum sealed meat cured with ammonium chloride maintained its dark purple colour, though it darkened further, probably due to a loss of vacuum over the month. Most interestingly, this sample was off, with a distinct foul smell. In this instance, 0.125% sal ammoniac and 1.5% sodium chloride, vacuum packed and stored at 7 deg C, was ineffective in preserving the meat over a month.
Sample 3. The vacuum sealed meat treated with sodium nitrite retained its bright pinkish/reddish colour. This nitrite cured sample had a foul smell similar to the ammonium chloride treated meat, though less intense, roughly half. In this instance, 0.25% Prague powder and 1.5% sodium chloride was not sufficient to preserve the meat under vacuum at 7 deg C.
I put the samples in water for three hours to draw out excess salt.
On 28 Jan 2018, exactly a month after I started the curing process, I sliced the sal ammoniac sample that worked.
The meat was smoked and heat treated to a core temperature of 55 deg C over three hours with beech wood. It was then sliced and vacuum packed. The 4% sodium chloride and 2% sal ammoniac cured meat presented a beautiful cured colour after a month of curing. We fried it up and tasted it. The meat had a slight but very distinct tangy taste, which was strange but not unpleasant at all.
Sal ammoniac, in our estimation, would definitely be acceptable as a meat curing agent at a concentration of 2% with a little salt, even in the absence of a vacuum. The taste is not bad (definitely edible) and the preserving power impressive.
Ignoring the matter of toxicity, applying sal ammoniac at a 2% ratio to the meat would not have directly lowered the meat temperature, contrary to what I expected. What the ancients could have done, of course, was to add 30% sal ammoniac to water, reducing the water temperature to almost 5 deg C, and place the container with the meat being cured into this as a cooling device to assist in preserving the meat for the two weeks it took to cure properly, if daytime temperatures rose high and a cool area could not be found to store the meat.
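The cooling idea can be sanity-checked with a rough energy balance: dissolving ammonium chloride absorbs heat, and the temperature drop is the absorbed heat divided by the heat capacity of the brine. The enthalpy of solution (about +14.8 kJ/mol) is a textbook value, and the brine's specific heat of 4.0 J/(g·K) is an approximation I am assuming, so this is an order-of-magnitude sketch, not a precise figure:

```python
# Rough estimate of the temperature drop when sal ammoniac dissolves in water.
# Assumed values: enthalpy of solution of NH4Cl ~ +14.8 kJ/mol (endothermic),
# brine specific heat ~ 4.0 J/(g*K); both are textbook-level approximations.
MOLAR_MASS_NH4CL = 53.49   # g/mol
DH_SOLUTION = 14800.0      # J/mol absorbed from the liquid on dissolving
CP_BRINE = 4.0             # J/(g*K), approximate

water_g = 100.0
salt_g = 30.0              # the 30% addition suggested above

moles = salt_g / MOLAR_MASS_NH4CL
heat_absorbed = moles * DH_SOLUTION                     # joules drawn from the liquid
delta_t = heat_absorbed / ((water_g + salt_g) * CP_BRINE)

print(f"Estimated temperature drop: {delta_t:.1f} deg C")
```

The estimate comes out around a 16 deg C drop, so water starting near 20 deg C would end up in the region of 4-5 deg C, which is consistent with the "almost 5 deg C" figure suggested above.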
It will be interesting to test it against potassium nitrate and sodium nitrite at the same concentration levels.